From ci at centos.org Sat Mar 2 04:37:03 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 2 Mar 2019 04:37:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #287 In-Reply-To: <829000095.1105.1551377238162.JavaMail.jenkins@jenkins.ci.centos.org> References: <829000095.1105.1551377238162.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1704938623.1367.1551501423820.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 36.58 KB...] ================================================================================ Install 3 Packages (+43 Dependent packages) Total download size: 141 M Installed size: 413 M Downloading packages: warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed -------------------------------------------------------------------------------- Total 17 MB/s | 141 MB 00:08 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : apr-1.4.8-3.el7_4.1.x86_64 1/46 Installing : mpfr-3.1.1-4.el7.x86_64 2/46 Installing : libmpc-1.0.1-3.el7.x86_64 3/46 Installing : apr-util-1.5.2-6.el7.x86_64 4/46 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/46 Installing : python-six-1.9.0-2.el7.noarch 6/46 Installing : cpp-4.8.5-36.el7.x86_64 7/46 Installing : elfutils-0.172-2.el7.x86_64 8/46 Installing : pakchois-0.4-10.el7.x86_64 9/46 Installing : perl-srpm-macros-1-8.el7.noarch 10/46 Installing : unzip-6.0-19.el7.x86_64 11/46 Installing : dwz-0.11-3.el7.x86_64 12/46 Installing : zip-3.0-11.el7.x86_64 13/46 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/46 Installing : pigz-2.3.4-1.el7.x86_64 15/46 Installing : usermode-1.111-5.el7.x86_64 16/46 Installing : python2-distro-1.2.0-1.el7.noarch 17/46 Installing : patch-2.7.1-10.el7_5.x86_64 18/46 Installing : python-backports-1.0-8.el7.x86_64 19/46 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/46 Installing : python-urllib3-1.10.2-5.el7.noarch 21/46 Installing : python-requests-2.6.0-1.el7_1.noarch 22/46 Installing : bzip2-1.0.6-13.el7.x86_64 23/46 Installing : libmodman-2.0.1-8.el7.x86_64 24/46 Installing : libproxy-0.4.11-11.el7.x86_64 25/46 Installing : gdb-7.6.1-114.el7.x86_64 26/46 Installing : perl-Thread-Queue-3.02-2.el7.noarch 27/46 Installing : golang-src-1.11.5-1.el7.noarch 28/46 Installing : python2-pyroute2-0.4.13-1.el7.noarch 29/46 Installing : nettle-2.7.1-8.el7.x86_64 30/46 Installing : mercurial-2.6.2-8.el7_4.x86_64 31/46 Installing : distribution-gpg-keys-1.28-1.el7.noarch 32/46 Installing : mock-core-configs-29.4-1.el7.noarch 33/46 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 34/46 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 36/46 Installing : gcc-4.8.5-36.el7.x86_64 37/46 Installing : trousers-0.3.14-2.el7.x86_64 38/46 Installing : gnutls-3.3.29-8.el7.x86_64 39/46 Installing : neon-0.30.0-3.el7.x86_64 40/46 Installing : subversion-libs-1.7.14-14.el7.x86_64 41/46 Installing : subversion-1.7.14-14.el7.x86_64 42/46 Installing : 
golang-1.11.5-1.el7.x86_64 43/46 Installing : golang-bin-1.11.5-1.el7.x86_64 44/46 Installing : mock-1.4.13-1.el7.noarch 45/46 Installing : rpm-build-4.11.3-35.el7.x86_64 46/46 Verifying : trousers-0.3.14-2.el7.x86_64 1/46 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/46 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/46 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 4/46 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/46 Verifying : distribution-gpg-keys-1.28-1.el7.noarch 6/46 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/46 Verifying : mpfr-3.1.1-4.el7.x86_64 8/46 Verifying : nettle-2.7.1-8.el7.x86_64 9/46 Verifying : gnutls-3.3.29-8.el7.x86_64 10/46 Verifying : cpp-4.8.5-36.el7.x86_64 11/46 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 12/46 Verifying : golang-src-1.11.5-1.el7.noarch 13/46 Verifying : subversion-1.7.14-14.el7.x86_64 14/46 Verifying : gcc-4.8.5-36.el7.x86_64 15/46 Verifying : golang-1.11.5-1.el7.x86_64 16/46 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/46 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/46 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/46 Verifying : gdb-7.6.1-114.el7.x86_64 20/46 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/46 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/46 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 23/46 Verifying : libmodman-2.0.1-8.el7.x86_64 24/46 Verifying : mock-1.4.13-1.el7.noarch 25/46 Verifying : bzip2-1.0.6-13.el7.x86_64 26/46 Verifying : python-backports-1.0-8.el7.x86_64 27/46 Verifying : apr-util-1.5.2-6.el7.x86_64 28/46 Verifying : patch-2.7.1-10.el7_5.x86_64 29/46 Verifying : libmpc-1.0.1-3.el7.x86_64 30/46 Verifying : python2-distro-1.2.0-1.el7.noarch 31/46 Verifying : usermode-1.111-5.el7.x86_64 32/46 Verifying : python-six-1.9.0-2.el7.noarch 33/46 Verifying : libproxy-0.4.11-11.el7.x86_64 34/46 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Verifying : neon-0.30.0-3.el7.x86_64 36/46 Verifying : python-requests-2.6.0-1.el7_1.noarch 37/46 Verifying : pigz-2.3.4-1.el7.x86_64 38/46 Verifying : zip-3.0-11.el7.x86_64 39/46 Verifying : python-ipaddress-1.0.16-2.el7.noarch 40/46 Verifying : dwz-0.11-3.el7.x86_64 41/46 Verifying : unzip-6.0-19.el7.x86_64 42/46 Verifying : perl-srpm-macros-1-8.el7.noarch 43/46 Verifying : mock-core-configs-29.4-1.el7.noarch 44/46 Verifying : pakchois-0.4-10.el7.x86_64 45/46 Verifying : elfutils-0.172-2.el7.x86_64 46/46 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.13-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.28-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 
python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1992 0 --:--:-- --:--:-- --:--:-- 1996 100 8513k 100 8513k 0 0 14.9M 0 --:--:-- --:--:-- --:--:-- 14.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1725 0 --:--:-- --:--:-- --:--:-- 1722 68 38.3M 68 26.3M 0 0 25.9M 0 0:00:01 0:00:01 --:--:-- 25.9M100 38.3M 100 38.3M 0 0 27.6M 0 0:00:01 0:00:01 --:--:-- 32.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 571 0 --:--:-- --:--:-- --:--:-- 570 0 0 0 620 0 0 1862 0 --:--:-- --:--:-- --:--:-- 1862 100 10.7M 100 10.7M 0 0 17.6M 0 --:--:-- --:--:-- --:--:-- 17.6M ~/nightlyrpmB4Xzo5/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmB4Xzo5/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpmB4Xzo5/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpmB4Xzo5 ~ INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmB4Xzo5/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.13 INFO: Mock Version: 1.4.13 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmB4Xzo5/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 30 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M f1a2f1e8b837481d995fcbcc46280d3f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.9T1ifK:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
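The failing step above is the mock rebuild of the nightly glusterd2 SRPM in the epel-7-x86_64 chroot. A minimal sketch of rerunning just that step by hand, assuming mock is installed and the SRPM path from the log still exists (the result directory below is an arbitrary local choice, not part of the job):

    # Rebuild the nightly SRPM in the same mock config the job uses (epel-7-x86_64).
    # The SRPM path is taken from the log above; --resultdir is a local scratch directory.
    mock -r epel-7-x86_64 \
        --rebuild /root/nightlyrpmB4Xzo5/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm \
        --resultdir /tmp/gd2-mock-results

    # The actual rpmbuild error is usually easier to read in the chroot's build.log
    # than in the Jenkins console output.
    less /tmp/gd2-mock-results/build.log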
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3723126759442432102.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 0afd88ad +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 92 | n28.pufty | 172.19.3.92 | pufty | 3254 | Deployed | 0afd88ad | None | None | 7 | x86_64 | 1 | 2270 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 2 06:09:49 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 2 Mar 2019 06:09:49 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #91 Message-ID: <1237552602.1386.1551506989116.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 465.53 KB...] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 02 March 2019 05:05:00 +0000 (0:00:01.563) 0:20:31.515 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Saturday 02 March 2019 05:05:00 +0000 (0:00:00.341) 0:20:31.856 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 02 March 2019 05:05:02 +0000 (0:00:01.574) 0:20:33.431 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Saturday 02 March 2019 05:05:02 +0000 (0:00:00.334) 0:20:33.765 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Saturday 02 March 2019 05:05:04 +0000 (0:00:01.725) 0:20:35.491 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Saturday 02 March 2019 05:05:06 +0000 (0:00:02.058) 0:20:37.550 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Saturday 02 March 2019 05:05:06 +0000 (0:00:00.304) 0:20:37.854 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). 
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Saturday 02 March 2019 05:05:56 +0000 (0:00:50.095) 0:21:27.950 ********
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Saturday 02 March 2019 05:05:57 +0000 (0:00:00.330) 0:21:28.280 ********
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] *****************
Saturday 02 March 2019 05:05:57 +0000 (0:00:00.377) 0:21:28.658 ********
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[...identical retry messages repeated, counting down from 49 to 1 retries left...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.27.233:24007/v1/devices/4a271b2e-ca8e-4758-a00a-af57ba905606"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[...identical retry messages repeated, counting down from 49 to 1 retries left...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.27.233:24007/v1/devices/4a271b2e-ca8e-4758-a00a-af57ba905606"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[...identical retry messages repeated, counting down from 49 to 1 retries left...]
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.27.233:24007/v1/devices/4a271b2e-ca8e-4758-a00a-af57ba905606"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=426 changed=119 unreachable=0 failed=1
kube2 : ok=320 changed=93 unreachable=0 failed=0
kube3 : ok=283 changed=78 unreachable=0 failed=0

Saturday 02 March 2019 06:09:48 +0000 (1:03:51.141) 1:25:19.800 ********
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube2 -------------- 3831.14s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.79s
download : container_download | download images for kubeadm config images -- 51.59s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 50.10s
kubernetes/master : kubeadm | Initialize first master ------------------ 40.76s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.57s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.26s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.22s
Install packages ------------------------------------------------------- 30.23s
Wait for host to be available ------------------------------------------ 20.92s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.53s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.15s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.33s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.32s
gather facts from all instances ---------------------------------------- 13.31s
etcd : Gen_certs | Gather etcd master certs ----------------------------------------
12.61s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.15s etcd : reload etcd ----------------------------------------------------- 11.86s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.70s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.46s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 3 17:12:37 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 3 Mar 2019 17:12:37 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #288 In-Reply-To: <1704938623.1367.1551501423820.JavaMail.jenkins@jenkins.ci.centos.org> References: <1704938623.1367.1551501423820.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1396477534.1603.1551633157483.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.36 KB...] Total 65 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : apr-1.4.8-3.el7_4.1.x86_64 1/49 Installing : mpfr-3.1.1-4.el7.x86_64 2/49 Installing : libmpc-1.0.1-3.el7.x86_64 3/49 Installing : apr-util-1.5.2-6.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : zip-3.0-11.el7.x86_64 13/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/49 Installing : pigz-2.3.4-1.el7.x86_64 15/49 Installing : usermode-1.111-5.el7.x86_64 16/49 Installing : python2-distro-1.2.0-1.el7.noarch 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : bzip2-1.0.6-13.el7.x86_64 23/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 24/49 Installing : mock-core-configs-30.1-1.el7.noarch 25/49 Installing : python-babel-0.9.6-8.el7.noarch 26/49 Installing : libmodman-2.0.1-8.el7.x86_64 27/49 Installing : libproxy-0.4.11-11.el7.x86_64 28/49 Installing : 
python-markupsafe-0.11-10.el7.x86_64 29/49 Installing : python-jinja2-2.7.2-2.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 32/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 33/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 34/49 Installing : gcc-4.8.5-36.el7.x86_64 35/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/49 Installing : golang-src-1.11.5-1.el7.noarch 37/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 38/49 Installing : nettle-2.7.1-8.el7.x86_64 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : mock-1.4.14-2.el7.noarch 48/49 Installing : rpm-build-4.11.3-35.el7.x86_64 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : mpfr-3.1.1-4.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : subversion-1.7.14-14.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : mock-core-configs-30.1-1.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : cpp-4.8.5-36.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : bzip2-1.0.6-13.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : python2-distro-1.2.0-1.el7.noarch 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 38/49 Verifying : neon-0.30.0-3.el7.x86_64 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : pigz-2.3.4-1.el7.x86_64 41/49 Verifying : zip-3.0-11.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency 
Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.1-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1821 0 --:--:-- --:--:-- --:--:-- 1827 100 8513k 100 8513k 0 0 14.5M 0 --:--:-- --:--:-- --:--:-- 14.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2203 0 --:--:-- --:--:-- --:--:-- 2200 100 38.3M 100 38.3M 0 0 42.0M 0 --:--:-- --:--:-- --:--:-- 42.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 562 0 --:--:-- --:--:-- --:--:-- 564 0 0 0 620 0 0 1825 0 --:--:-- --:--:-- --:--:-- 1825 100 10.7M 100 10.7M 0 0 18.0M 0 --:--:-- --:--:-- --:--:-- 18.0M ~/nightlyrpm2PpqGe/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm2PpqGe/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpm2PpqGe/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpm2PpqGe ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm2PpqGe/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm2PpqGe/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 5dee3d88ad6a4710a759e9caede81f83 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.o5dGBV:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3837812433001637047.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 03f1b70e +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 131 | n4.crusty | 172.19.2.4 | crusty | 3260 | Deployed | 03f1b70e | None | None | 7 | x86_64 | 1 | 2030 | None | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 3 18:19:47 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 3 Mar 2019 18:19:47 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #92 In-Reply-To: <1237552602.1386.1551506989116.JavaMail.jenkins@jenkins.ci.centos.org> References: <1237552602.1386.1551506989116.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2000407946.1618.1551637187139.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Mon Mar 4 19:09:29 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 4 Mar 2019 19:09:29 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #93 Message-ID: <1121835926.1857.1551726569471.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ Started by timer [EnvInject] - Loading node environment variables. Building remotely on gluster-ci-slave07 (gluster) in workspace No credentials specified Wiping out workspace first. 
Cloning the remote Git repository Cloning repository https://github.com/gluster/centosci.git > git init # timeout=10 Fetching upstream changes from https://github.com/gluster/centosci.git > git --version # timeout=10 > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/* > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10 Fetching upstream changes from https://github.com/gluster/centosci.git > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/* > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10 Checking out Revision c63d990415e215c34c4398c58a968f89db3692d5 (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f c63d990415e215c34c4398c58a968f89db3692d5 Commit message: "Merge pull request #55 from kshlm/new-glusterd2-admin" > git rev-list --no-walk c63d990415e215c34c4398c58a968f89db3692d5 # timeout=10 [gluster_anteater_gcs] $ /bin/sh -xe /tmp/jenkins5740916083487181072.sh + set +x The requested operation failed as no inventory is available. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Mar 4 19:13:54 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 4 Mar 2019 19:13:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #289 In-Reply-To: <1396477534.1603.1551633157483.JavaMail.jenkins@jenkins.ci.centos.org> References: <1396477534.1603.1551633157483.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <606407812.1865.1551726834893.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.33 KB...] 
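The "no inventory is available" error above comes from the cico client these jobs use to lease test machines from the CentOS CI (Duffy) pool; the post-build script only releases whatever session IDs were written to $WORKSPACE/cico-ssid. A rough sketch of that node lifecycle follows; the CICO_API_KEY variable and the SSID bookkeeping are assumptions for illustration, and only the "cico -q node done" call is taken from the logs above:

    # Lease a node from the Duffy pool (requires a valid API key).
    export CICO_API_KEY=XXXXXXXX          # placeholder, not a real key
    cico node get | tee "${WORKSPACE}/cico-node.txt"

    # Record the session ID (SSID) so the post-build task can release the node later;
    # cico-node-done-from-ansible.sh expects it in $WORKSPACE/cico-ssid.
    # (Extracting the SSID from the table output is omitted here.)

    # Release every leased node, exactly as the post-build script does.
    for ssid in $(cat "${WORKSPACE}/cico-ssid"); do
        cico -q node done "${ssid}"
    done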
Total 68 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : apr-1.4.8-3.el7_4.1.x86_64 1/49 Installing : mpfr-3.1.1-4.el7.x86_64 2/49 Installing : libmpc-1.0.1-3.el7.x86_64 3/49 Installing : apr-util-1.5.2-6.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : zip-3.0-11.el7.x86_64 13/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/49 Installing : pigz-2.3.4-1.el7.x86_64 15/49 Installing : usermode-1.111-5.el7.x86_64 16/49 Installing : python2-distro-1.2.0-1.el7.noarch 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : bzip2-1.0.6-13.el7.x86_64 23/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 24/49 Installing : mock-core-configs-30.1-1.el7.noarch 25/49 Installing : python-babel-0.9.6-8.el7.noarch 26/49 Installing : libmodman-2.0.1-8.el7.x86_64 27/49 Installing : libproxy-0.4.11-11.el7.x86_64 28/49 Installing : python-markupsafe-0.11-10.el7.x86_64 29/49 Installing : python-jinja2-2.7.2-2.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 32/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 33/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 34/49 Installing : gcc-4.8.5-36.el7.x86_64 35/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/49 Installing : golang-src-1.11.5-1.el7.noarch 37/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 38/49 Installing : nettle-2.7.1-8.el7.x86_64 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : mock-1.4.14-2.el7.noarch 48/49 Installing : rpm-build-4.11.3-35.el7.x86_64 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : mpfr-3.1.1-4.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : subversion-1.7.14-14.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying 
: golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : mock-core-configs-30.1-1.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : cpp-4.8.5-36.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : bzip2-1.0.6-13.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : python2-distro-1.2.0-1.el7.noarch 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 38/49 Verifying : neon-0.30.0-3.el7.x86_64 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : pigz-2.3.4-1.el7.x86_64 41/49 Verifying : zip-3.0-11.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.1-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1119 0 --:--:-- --:--:-- --:--:-- 1120 100 8513k 100 8513k 0 0 10.8M 0 --:--:-- --:--:-- --:--:-- 10.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1824 0 --:--:-- --:--:-- --:--:-- 1827 28 38.3M 28 10.7M 0 0 15.7M 0 0:00:02 --:--:-- 0:00:02 15.7M100 38.3M 100 38.3M 0 0 35.8M 0 0:00:01 0:00:01 --:--:-- 71.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 534 0 --:--:-- --:--:-- --:--:-- 534 0 0 0 620 0 0 1657 0 --:--:-- --:--:-- --:--:-- 1657 100 10.7M 100 10.7M 0 0 17.8M 0 --:--:-- --:--:-- --:--:-- 17.8M ~/nightlyrpmjvSQJ3/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmjvSQJ3/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpmjvSQJ3/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpmjvSQJ3 ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmjvSQJ3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmjvSQJ3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 30 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 62f6794dc776438182657a85552749bf -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.jJSlgp:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4940936626773596791.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done dd10b91f +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 281 | n26.gusty | 172.19.2.154 | gusty | 3265 | Deployed | dd10b91f | None | None | 7 | x86_64 | 1 | 2250 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 5 04:52:51 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 5 Mar 2019 04:52:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #290 In-Reply-To: <606407812.1865.1551726834893.JavaMail.jenkins@jenkins.ci.centos.org> References: <606407812.1865.1551726834893.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <589404527.2091.1551761571941.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] 
Total 57 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : apr-1.4.8-3.el7_4.1.x86_64 1/49 Installing : mpfr-3.1.1-4.el7.x86_64 2/49 Installing : libmpc-1.0.1-3.el7.x86_64 3/49 Installing : apr-util-1.5.2-6.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : zip-3.0-11.el7.x86_64 13/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/49 Installing : pigz-2.3.4-1.el7.x86_64 15/49 Installing : usermode-1.111-5.el7.x86_64 16/49 Installing : python2-distro-1.2.0-1.el7.noarch 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : bzip2-1.0.6-13.el7.x86_64 23/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 24/49 Installing : mock-core-configs-30.1-1.el7.noarch 25/49 Installing : python-babel-0.9.6-8.el7.noarch 26/49 Installing : libmodman-2.0.1-8.el7.x86_64 27/49 Installing : libproxy-0.4.11-11.el7.x86_64 28/49 Installing : python-markupsafe-0.11-10.el7.x86_64 29/49 Installing : python-jinja2-2.7.2-2.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 32/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 33/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 34/49 Installing : gcc-4.8.5-36.el7.x86_64 35/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/49 Installing : golang-src-1.11.5-1.el7.noarch 37/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 38/49 Installing : nettle-2.7.1-8.el7.x86_64 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : mock-1.4.14-2.el7.noarch 48/49 Installing : rpm-build-4.11.3-35.el7.x86_64 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : mpfr-3.1.1-4.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : subversion-1.7.14-14.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying 
: golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : mock-core-configs-30.1-1.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : cpp-4.8.5-36.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : bzip2-1.0.6-13.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : python2-distro-1.2.0-1.el7.noarch 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 38/49 Verifying : neon-0.30.0-3.el7.x86_64 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : pigz-2.3.4-1.el7.x86_64 41/49 Verifying : zip-3.0-11.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.1-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2039 0 --:--:-- --:--:-- --:--:-- 2050 100 8513k 100 8513k 0 0 16.7M 0 --:--:-- --:--:-- --:--:-- 16.7M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2199 0 --:--:-- --:--:-- --:--:-- 2200 100 38.3M 100 38.3M 0 0 43.3M 0 --:--:-- --:--:-- --:--:-- 43.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 191 0 --:--:-- --:--:-- --:--:-- 192 0 0 0 620 0 0 688 0 --:--:-- --:--:-- --:--:-- 688 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 8834k 0 0:00:01 0:00:01 --:--:-- 34.8M ~/nightlyrpm7Skp86/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm7Skp86/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpm7Skp86/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpm7Skp86 ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm7Skp86/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm7Skp86/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M ea23c2bb955a44839041500dfcf8f737 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.3ocADC:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
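When the rebuild ends with the Exception shown above, mock reports a results directory (here /srv/glusterd2/nightly/master/7/x86_64). A short sketch of pulling the usual logs out of it, assuming the standard mock result layout of root.log, build.log and state.log:

    # Inspect the mock result logs for the failed glusterd2 build.
    RESULTDIR=/srv/glusterd2/nightly/master/7/x86_64   # path reported by mock above
    ls -l "$RESULTDIR"
    # root.log covers chroot setup; build.log covers the rpmbuild run that failed.
    tail -n 100 "$RESULTDIR/root.log"
    tail -n 100 "$RESULTDIR/build.log"
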
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8424063584332909526.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done a23ed98d +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 112 | n48.pufty | 172.19.3.112 | pufty | 3267 | Deployed | a23ed98d | None | None | 7 | x86_64 | 1 | 2470 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 5 06:22:03 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 5 Mar 2019 06:22:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #94 In-Reply-To: <1121835926.1857.1551726569471.JavaMail.jenkins@jenkins.ci.centos.org> References: <1121835926.1857.1551726569471.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2141710230.2134.1551766923452.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 465.28 KB...] ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 05 March 2019 05:18:07 +0000 (0:00:01.643) 0:20:14.492 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Tuesday 05 March 2019 05:18:07 +0000 (0:00:00.357) 0:20:14.849 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 05 March 2019 05:18:09 +0000 (0:00:01.747) 0:20:16.597 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Tuesday 05 March 2019 05:18:10 +0000 (0:00:00.295) 0:20:16.892 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Tuesday 05 March 2019 05:18:11 +0000 (0:00:01.505) 0:20:18.398 ********* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Tuesday 05 March 2019 05:18:12 +0000 (0:00:01.244) 0:20:19.643 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Tuesday 05 March 2019 05:18:13 +0000 (0:00:00.324) 0:20:19.968 ********* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Tuesday 05 March 2019 05:18:40 +0000 (0:00:27.283) 0:20:47.252 ********* included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Tuesday 05 March 2019 05:18:40 +0000 (0:00:00.248) 0:20:47.500 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] ***************** Tuesday 05 March 2019 05:18:40 +0000 (0:00:00.370) 0:20:47.871 ********* FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left). failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.16.64:24007/v1/devices/044bb9e3-eabd-4138-99a7-bafb333487e4"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left). 
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.16.64:24007/v1/devices/044bb9e3-eabd-4138-99a7-bafb333487e4"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left). failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.16.64:24007/v1/devices/044bb9e3-eabd-4138-99a7-bafb333487e4"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=426 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Tuesday 05 March 2019 06:22:02 +0000 (1:03:21.971) 1:24:09.842 ********* =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube1 -------------- 3801.97s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.90s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.76s kubernetes/master : kubeadm | Initialize first master ------------------ 38.70s download : container_download | download images for kubeadm config images -- 36.46s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.51s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.08s Install packages ------------------------------------------------------- 29.83s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 27.28s Wait for host to be available ------------------------------------------ 20.98s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.54s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.36s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.34s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 14.44s gather facts from all instances ---------------------------------------- 12.94s etcd : Gen_certs | Gather etcd master certs ---------------------------- 
12.88s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.26s etcd : reload etcd ----------------------------------------------------- 12.14s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.78s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.21s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 5 15:08:40 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 5 Mar 2019 15:08:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #291 In-Reply-To: <589404527.2091.1551761571941.JavaMail.jenkins@jenkins.ci.centos.org> References: <589404527.2091.1551761571941.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1684528042.2347.1551798520688.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.33 KB...] Total 70 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : 
kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency 
Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1588 0 --:--:-- --:--:-- --:--:-- 1596 59 8513k 59 5048k 0 0 8339k 0 0:00:01 --:--:-- 0:00:01 8339k100 8513k 100 8513k 0 0 12.9M 0 --:--:-- --:--:-- --:--:-- 89.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1398 0 --:--:-- --:--:-- --:--:-- 1399 60 38.3M 60 23.1M 0 0 24.5M 0 0:00:01 --:--:-- 0:00:01 24.5M100 38.3M 100 38.3M 0 0 33.8M 0 0:00:01 0:00:01 --:--:-- 81.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 572 0 --:--:-- --:--:-- --:--:-- 573 0 0 0 620 0 0 1240 0 --:--:-- --:--:-- --:--:-- 1240 100 10.7M 100 10.7M 0 0 13.3M 0 --:--:-- --:--:-- --:--:-- 13.3M ~/nightlyrpmPqjzbb/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmPqjzbb/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpmPqjzbb/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpmPqjzbb ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmPqjzbb/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmPqjzbb/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 78ca7a89dbaa488b85c0248f60eb569e -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.cygbtZ:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
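Because the job cleans up the build root on failure ('cleanup_on_failure=True' above), any interactive digging has to happen on a separate run. A rough sketch, assuming a local rebuild is started with cleanup disabled so the chroot survives; the --unpriv shell only approximates the mockbuild environment the job used:

    # Keep the chroot after a failed build, then re-run the failing rpmbuild step.
    mock -r epel-7-x86_64 --no-cleanup-after \
         --rebuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm || true
    # Same rpmbuild invocation the job log shows, executed inside the chroot.
    mock -r epel-7-x86_64 --unpriv --shell \
         'rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec'
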
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5707413279065072311.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done ee7b809d +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 178 | n51.crusty | 172.19.2.51 | crusty | 3270 | Deployed | ee7b809d | None | None | 7 | x86_64 | 1 | 2500 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 5 15:38:51 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 5 Mar 2019 15:38:51 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #95 In-Reply-To: <2141710230.2134.1551766923452.JavaMail.jenkins@jenkins.ci.centos.org> References: <2141710230.2134.1551766923452.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <235783726.2356.1551800331952.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Wed Mar 6 00:47:11 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 6 Mar 2019 00:47:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #292 In-Reply-To: <1684528042.2347.1551798520688.JavaMail.jenkins@jenkins.ci.centos.org> References: <1684528042.2347.1551798520688.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1011980734.2585.1551833232116.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.38 KB...] 
Total 58 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : 
golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2011 0 --:--:-- --:--:-- --:--:-- 2016 100 8513k 100 8513k 0 0 14.9M 0 --:--:-- --:--:-- --:--:-- 14.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2121 0 --:--:-- --:--:-- --:--:-- 2125 4 38.3M 4 1735k 0 0 3483k 0 0:00:11 --:--:-- 0:00:11 3483k100 38.3M 100 38.3M 0 0 43.4M 0 --:--:-- --:--:-- --:--:-- 95.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 655 0 --:--:-- --:--:-- --:--:-- 656 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1352 0 --:--:-- --:--:-- --:--:-- 605k 100 10.7M 100 10.7M 0 0 15.2M 0 --:--:-- --:--:-- --:--:-- 15.2M ~/nightlyrpm61afiA/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm61afiA/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpm61afiA/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpm61afiA ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm61afiA/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm61afiA/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M fd7d722091834afb8748ad05433698c2 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.R_gJiG:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
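The gluster_anteater_gcs runs in this archive (#94 above, #96 below) fail differently: their Add devices task keeps POSTing to the glusterd2 endpoint shown in the failed items (http://10.233.16.64:24007/v1/devices/<peer-id>) until the connection times out. A minimal sketch of checking that endpoint by hand from a cluster node; the /v1/peers path and the curl invocation are illustrative assumptions, not taken from the playbook:

    # Probe the glusterd2 REST endpoint that the device-add task times out against.
    GD2_ENDPOINT=http://10.233.16.64:24007   # address taken from the failed task output
    curl -sS --max-time 10 -o /dev/null -w 'HTTP %{http_code}\n' "$GD2_ENDPOINT/v1/peers" \
         || echo "glusterd2 endpoint did not answer within 10s"

A timeout here, matching the 'Connection failure: timed out' messages in the Ansible output, would point at the glusterd2 pod or the overlay network rather than at the device payload itself.
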
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6241535205994017104.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done a26cd3cc +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 83 | n19.pufty | 172.19.3.83 | pufty | 3268 | Deployed | a26cd3cc | None | None | 7 | x86_64 | 1 | 2180 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 6 03:04:01 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 6 Mar 2019 03:04:01 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #96 Message-ID: <1504985332.2631.1551841441945.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 465.36 KB...] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 06 March 2019 01:59:52 +0000 (0:00:01.804) 0:21:02.486 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 06 March 2019 01:59:52 +0000 (0:00:00.562) 0:21:03.048 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 06 March 2019 01:59:54 +0000 (0:00:01.782) 0:21:04.831 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 06 March 2019 01:59:54 +0000 (0:00:00.513) 0:21:05.344 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 06 March 2019 01:59:57 +0000 (0:00:02.450) 0:21:07.795 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 06 March 2019 01:59:58 +0000 (0:00:01.533) 0:21:09.328 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 06 March 2019 01:59:59 +0000 (0:00:00.461) 0:21:09.790 ******* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Wednesday 06 March 2019 02:00:38 +0000 (0:00:38.760) 0:21:48.551 ******* included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Wednesday 06 March 2019 02:00:38 +0000 (0:00:00.317) 0:21:48.868 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] ***************** Wednesday 06 March 2019 02:00:38 +0000 (0:00:00.464) 0:21:49.332 ******* FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (21 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (1 retries left). failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.46.45:24007/v1/devices/0b948a5a-2e44-492c-8a42-8afdd988788e"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (37 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (1 retries left). 
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.46.45:24007/v1/devices/0b948a5a-2e44-492c-8a42-8afdd988788e"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (17 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (1 retries left). failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.46.45:24007/v1/devices/0b948a5a-2e44-492c-8a42-8afdd988788e"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=426 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 06 March 2019 03:04:01 +0000 (1:03:22.648) 1:25:11.981 ******* =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube2 -------------- 3802.65s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.93s kubernetes/master : kubeadm | Initialize first master ------------------ 40.12s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.44s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 38.76s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 34.84s Install packages ------------------------------------------------------- 34.72s download : container_download | download images for kubeadm config images -- 33.31s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.71s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.45s Wait for host to be available ------------------------------------------ 20.99s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.77s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.64s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.93s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.76s gather facts from all instances ---------------------------------------- 
13.06s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.20s etcd : reload etcd ----------------------------------------------------- 11.89s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.38s container-engine/docker : Docker | pause while Docker restarts --------- 10.44s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 6 11:14:12 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 6 Mar 2019 11:14:12 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #293 In-Reply-To: <1011980734.2585.1551833232116.JavaMail.jenkins@jenkins.ci.centos.org> References: <1011980734.2585.1551833232116.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <16907575.2824.1551870852566.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.33 KB...] Total 63 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : 
kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency 
Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1987 0 --:--:-- --:--:-- --:--:-- 1990 100 8513k 100 8513k 0 0 13.5M 0 --:--:-- --:--:-- --:--:-- 13.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1981 0 --:--:-- --:--:-- --:--:-- 1990 0 38.3M 0 263k 0 0 622k 0 0:01:03 --:--:-- 0:01:03 622k100 38.3M 100 38.3M 0 0 46.6M 0 --:--:-- --:--:-- --:--:-- 95.5M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 528 0 --:--:-- --:--:-- --:--:-- 527 0 0 0 620 0 0 1440 0 --:--:-- --:--:-- --:--:-- 1440 100 10.7M 100 10.7M 0 0 15.0M 0 --:--:-- --:--:-- --:--:-- 15.0M ~/nightlyrpmgcupG0/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmgcupG0/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpmgcupG0/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpmgcupG0 ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
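Before mock starts, the job installs dep v0.5.0, vendors the Go dependencies under the checkout at go/src/github.com/gluster/glusterd2 and packs them into the -vendor.tar.xz dist archive shown above. The exact commands are not visible in the truncated log; a rough sketch of that step, assuming dep is what populates vendor/ and using an illustrative archive name:

    # Hypothetical reconstruction of the vendoring step; the job's real script is not shown.
    cd "$GOPATH/src/github.com/gluster/glusterd2"
    dep ensure -vendor-only                  # populate vendor/ from Gopkg.lock without updating it
    tar -cJf glusterd2-vendor.tar.xz vendor  # the job's archive name also carries the version string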
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmgcupG0/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmgcupG0/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 3e138b5db8e34d7eaf0d18f71c41422d -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.TAIuJ6:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1692139797298576084.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 91d4c44a +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 262 | n7.gusty | 172.19.2.135 | gusty | 3275 | Deployed | 91d4c44a | None | None | 7 | x86_64 | 1 | 2060 | None | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 6 11:46:06 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 6 Mar 2019 11:46:06 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #97 In-Reply-To: <1504985332.2631.1551841441945.JavaMail.jenkins@jenkins.ci.centos.org> References: <1504985332.2631.1551841441945.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1072237838.2846.1551872766979.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.54 KB...] 
changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 06 March 2019 11:34:11 +0000 (0:00:35.325) 0:17:52.871 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 06 March 2019 11:34:12 +0000 (0:00:00.259) 0:17:53.131 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 06 March 2019 11:34:12 +0000 (0:00:00.401) 0:17:53.533 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 06 March 2019 11:34:14 +0000 (0:00:01.972) 0:17:55.505 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 06 March 2019 11:34:14 +0000 (0:00:00.426) 0:17:55.932 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 06 March 2019 11:34:16 +0000 (0:00:02.061) 0:17:57.994 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 06 March 2019 11:34:17 +0000 (0:00:00.403) 0:17:58.398 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 06 March 2019 11:34:19 +0000 (0:00:02.207) 0:18:00.605 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 06 March 2019 11:34:21 +0000 (0:00:01.661) 0:18:02.267 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 06 March 2019 11:34:22 +0000 (0:00:01.666) 0:18:03.933 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Wednesday 06 March 2019 11:34:34 +0000 (0:00:12.042) 0:18:15.976 ******* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Wednesday 06 March 2019 11:34:36 +0000 (0:00:01.659) 0:18:17.635 ******* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Wednesday 06 March 2019 11:34:37 +0000 (0:00:01.329) 0:18:18.964 ******* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Wednesday 06 March 2019 11:34:39 +0000 (0:00:01.292) 0:18:20.256 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Wednesday 06 March 2019 11:34:40 +0000 (0:00:01.721) 0:18:21.978 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Wednesday 06 March 2019 11:34:42 +0000 (0:00:01.852) 0:18:23.831 ******* changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Wednesday 06 March 2019 11:34:43 +0000 (0:00:01.218) 0:18:25.050 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Wednesday 06 March 2019 11:34:44 +0000 (0:00:00.339) 0:18:25.389 ******* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Wednesday 06 March 2019 11:36:19 +0000 (0:01:35.536) 0:20:00.926 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Wednesday 06 March 2019 11:36:21 +0000 (0:00:01.614) 0:20:02.540 ******* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 06 March 2019 11:36:21 +0000 (0:00:00.207) 0:20:02.748 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Wednesday 06 March 2019 11:36:22 +0000 (0:00:00.368) 0:20:03.116 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 06 March 2019 11:36:23 +0000 (0:00:01.684) 0:20:04.801 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 06 March 2019 11:36:24 +0000 (0:00:00.326) 0:20:05.127 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 06 March 2019 11:36:25 +0000 (0:00:01.607) 0:20:06.735 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 06 March 2019 11:36:25 +0000 (0:00:00.332) 0:20:07.067 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 06 March 2019 11:36:27 +0000 (0:00:01.491) 0:20:08.559 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 06 March 2019 11:36:28 +0000 (0:00:01.312) 0:20:09.872 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 06 March 2019 11:36:29 +0000 (0:00:00.314) 0:20:10.187 ******* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.47.27:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 06 March 2019 11:46:06 +0000 (0:09:37.423) 0:29:47.610 ******* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.42s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 95.54s kubernetes/master : kubeadm | Initialize first master ------------------ 40.70s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.41s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.33s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.75s download : container_download | download images for kubeadm config images -- 32.34s Install packages ------------------------------------------------------- 30.40s Wait for host to be available ------------------------------------------ 21.26s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.72s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.22s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.61s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.13s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.88s gather facts from all instances ---------------------------------------- 12.60s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.04s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.46s container-engine/docker : Docker | pause while Docker restarts --------- 10.42s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 10.00s download : file_download | Download item -------------------------------- 9.19s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 6 22:05:36 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 6 Mar 2019 22:05:36 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #294 In-Reply-To: <16907575.2824.1551870852566.JavaMail.jenkins@jenkins.ci.centos.org> References: <16907575.2824.1551870852566.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1948219012.3097.1551909936590.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.76 KB...] Total 64 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 
Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 
python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1779 0 --:--:-- --:--:-- --:--:-- 1784 56 8513k 56 4843k 0 0 8495k 0 0:00:01 --:--:-- 0:00:01 8495k100 8513k 100 8513k 0 0 12.1M 0 --:--:-- --:--:-- --:--:-- 32.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1856 0 --:--:-- --:--:-- --:--:-- 1860 87 38.3M 87 33.5M 0 0 38.5M 0 --:--:-- --:--:-- --:--:-- 38.5M100 38.3M 100 38.3M 0 0 41.4M 0 --:--:-- --:--:-- --:--:-- 89.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 553 0 --:--:-- --:--:-- --:--:-- 554 0 0 0 620 0 0 1640 0 --:--:-- --:--:-- --:--:-- 1640 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 14.2M 0 --:--:-- --:--:-- --:--:-- 45.4M ~/nightlyrpmSx2kuT/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmSx2kuT/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpmSx2kuT/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpmSx2kuT ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmSx2kuT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmSx2kuT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d3f6f7c2974446c795b51d44eab398c1 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.jsGS4P:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
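The build above failed inside the mock chroot, so the Jenkins console only shows the wrapper error from systemd-nspawn; mock normally leaves the real rpmbuild output in build.log under the result directory it reports. A minimal reproduction sketch, assuming access to the same SRPM and mock config named in the log (paths are copied from the job output and will differ on another machine):

  # Re-run the failed SRPM build with the same mock config and result directory.
  mock -r epel-7-x86_64 \
       --resultdir /srv/glusterd2/nightly/master/7/x86_64 \
       --rebuild /root/nightlyrpmSx2kuT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm

  # The actual compile/rpmbuild error lands in build.log, not in the Jenkins console.
  less /srv/glusterd2/nightly/master/7/x86_64/build.log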
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5260053940041407075.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 3d54da57
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 305     | n50.gusty | 172.19.2.178 | gusty   | 3279       | Deployed      | 3d54da57 | None   | None | 7              | x86_64       | 1         | 2490         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Wed Mar 6 22:37:03 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 6 Mar 2019 22:37:03 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #98
In-Reply-To: <1072237838.2846.1551872766979.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1072237838.2846.1551872766979.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1819205916.3107.1551911823108.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 454.86 KB...]
Wednesday 06 March 2019 22:27:03 +0000 (0:00:00.233) 0:17:28.310 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_region_id] *** Wednesday 06 March 2019 22:27:03 +0000 (0:00:00.221) 0:17:28.531 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_tenancy_id] *** Wednesday 06 March 2019 22:27:04 +0000 (0:00:00.190) 0:17:28.722 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_id] *** Wednesday 06 March 2019 22:27:04 +0000 (0:00:00.304) 0:17:29.027 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_fingerprint] *** Wednesday 06 March 2019 22:27:04 +0000 (0:00:00.285) 0:17:29.312 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_compartment_id] *** Wednesday 06 March 2019 22:27:04 +0000 (0:00:00.223) 0:17:29.536 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_vnc_id] *** Wednesday 06 March 2019 22:27:05 +0000 (0:00:00.234) 0:17:29.771 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet1_id] *** Wednesday 06 March 2019 22:27:05 +0000 (0:00:00.204) 0:17:29.975 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet2_id] *** Wednesday 06 March 2019 22:27:05 +0000 (0:00:00.196) 0:17:30.172 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_security_list_management] *** Wednesday 06 March 2019 22:27:05 +0000 (0:00:00.219) 0:17:30.392 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Generate Configuration] *** Wednesday 06 March 2019 22:27:05 +0000 (0:00:00.224) 0:17:30.616 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Encode Configuration] *** Wednesday 06 March 2019 22:27:06 +0000 (0:00:00.193) 0:17:30.810 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration To Secret] *** Wednesday 06 March 2019 22:27:06 +0000 (0:00:00.218) 0:17:31.029 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration] *** Wednesday 06 March 2019 22:27:06 +0000 (0:00:00.202) 0:17:31.231 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Download Controller Manifest] *** Wednesday 06 March 2019 22:27:06 +0000 (0:00:00.177) 0:17:31.408 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Controller Manifest] *** Wednesday 06 March 2019 22:27:06 +0000 (0:00:00.224) 0:17:31.633 ******* PLAY [Fetch config] ************************************************************ TASK [Retrieve kubectl config] ************************************************* Wednesday 06 March 2019 22:27:07 +0000 (0:00:00.221) 0:17:31.855 ******* changed: [kube1] PLAY [Copy kube config for vagrant user] *************************************** TASK [Create a directory] ****************************************************** Wednesday 06 March 2019 22:27:08 +0000 (0:00:01.033) 0:17:32.888 ******* changed: [kube1] changed: [kube2] TASK [Copy kube config for vagrant user] *************************************** Wednesday 06 March 2019 22:27:09 +0000 (0:00:01.644) 0:17:34.533 ******* changed: [kube1] changed: [kube2] PLAY [Deploy GCS] ************************************************************** TASK [GCS Pre | 
Cluster ID | Generate a UUID] ********************************** Wednesday 06 March 2019 22:27:10 +0000 (0:00:01.131) 0:17:35.665 ******* changed: [kube1] TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] *********************** Wednesday 06 March 2019 22:27:11 +0000 (0:00:00.891) 0:17:36.557 ******* ok: [kube1] TASK [GCS Pre | Manifests directory | Create a temporary directory] ************ Wednesday 06 March 2019 22:27:12 +0000 (0:00:00.397) 0:17:36.955 ******* changed: [kube1] TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ****************** Wednesday 06 March 2019 22:27:13 +0000 (0:00:01.260) 0:17:38.215 ******* ok: [kube1] TASK [GCS Pre | Manifests | Sync GCS manifests] ******************************** Wednesday 06 March 2019 22:27:13 +0000 (0:00:00.371) 0:17:38.587 ******* changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 06 March 2019 22:27:45 +0000 (0:00:31.671) 0:18:10.258 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 06 March 2019 22:27:45 +0000 (0:00:00.203) 0:18:10.462 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 06 March 2019 22:27:46 +0000 (0:00:00.442) 0:18:10.905 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 06 March 2019 22:27:48 +0000 (0:00:01.974) 0:18:12.880 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 06 March 2019 22:27:48 +0000 (0:00:00.301) 0:18:13.181 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 06 March 2019 22:27:50 +0000 (0:00:01.916) 0:18:15.098 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 06 March 2019 22:27:50 +0000 (0:00:00.394) 0:18:15.492 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 06 March 2019 22:27:52 +0000 (0:00:01.793) 0:18:17.286 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 06 March 2019 22:27:54 
+0000 (0:00:01.405) 0:18:18.692 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 06 March 2019 22:27:55 +0000 (0:00:01.537) 0:18:20.229 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (49 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (48 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (47 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (46 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (45 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (44 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (43 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (42 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (41 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (40 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (39 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (38 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (37 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (36 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (35 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (34 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (33 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (32 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (31 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (30 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (29 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (28 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (27 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (26 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (25 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (24 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (23 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (22 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (21 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (20 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (19 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (18 retries left). 
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (17 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (16 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (15 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (14 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (13 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (12 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (11 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (10 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (9 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (8 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (7 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (6 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (5 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (4 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (3 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (2 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (1 retries left). fatal: [kube1]: FAILED! => {"attempts": 50, "changed": true, "cmd": ["/usr/local/bin/kubectl", "-ngcs", "-ojsonpath={.status.availableReplicas}", "get", "deployment", "etcd-operator"], "delta": "0:00:00.307870", "end": "2019-03-06 22:37:02.694202", "rc": 0, "start": "2019-03-06 22:37:02.386332", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=400 changed=116 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 06 March 2019 22:37:02 +0000 (0:09:07.189) 0:27:27.419 ******* =============================================================================== GCS | ETCD Operator | Wait for etcd-operator to be available ---------- 547.19s kubernetes/master : kubeadm | Initialize first master ------------------ 39.68s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.06s download : container_download | download images for kubeadm config images -- 34.19s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.01s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 31.67s Install packages ------------------------------------------------------- 30.48s Wait for host to be available ------------------------------------------ 21.00s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.50s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.03s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 17.82s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.52s etcd : Gen_certs | Gather etcd 
master certs ---------------------------- 13.23s
gather facts from all instances ---------------------------------------- 13.13s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 12.56s
etcd : reload etcd ----------------------------------------------------- 12.07s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 10.99s
container-engine/docker : Docker | pause while Docker restarts --------- 10.38s
download : file_download | Download item ------------------------------- 10.08s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.25s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Thu Mar 7 08:04:00 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 7 Mar 2019 08:04:00 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #99
In-Reply-To: <1819205916.3107.1551911823108.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1819205916.3107.1551911823108.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1613144379.3339.1551945840238.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on gluster-ci-slave07 (gluster) in workspace 
No credentials specified
Wiping out workspace first.
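For the gluster_anteater_gcs #98 failure above, the wait task that exhausted its 50 retries was only polling the availableReplicas field of the etcd-operator deployment in the gcs namespace; the exact kubectl call is visible in the fatal error. A hedged sketch of the equivalent manual check plus the usual follow-up diagnostics, which are not part of the job itself (the label selector below is an assumption):

  # Same probe the playbook runs (copied from the fatal error message).
  kubectl -n gcs get deployment etcd-operator -o jsonpath='{.status.availableReplicas}'

  # Usual next steps when the deployment never becomes available.
  kubectl -n gcs describe deployment etcd-operator
  kubectl -n gcs get pods -l name=etcd-operator     # label selector is an assumption
  kubectl -n gcs logs deployment/etcd-operator

An empty jsonpath result, as in the failure above, means no replica ever reported ready, which usually points at an image pull or scheduling problem on the cluster rather than at the playbook.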
Cloning the remote Git repository
Cloning repository https://github.com/gluster/centosci.git
> git init # timeout=10
Fetching upstream changes from https://github.com/gluster/centosci.git
> git --version # timeout=10
> git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/*
> git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10
Fetching upstream changes from https://github.com/gluster/centosci.git
> git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision c63d990415e215c34c4398c58a968f89db3692d5 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f c63d990415e215c34c4398c58a968f89db3692d5
Commit message: "Merge pull request #55 from kshlm/new-glusterd2-admin"
> git rev-list --no-walk c63d990415e215c34c4398c58a968f89db3692d5 # timeout=10
[gluster_anteater_gcs] $ /bin/sh -xe /tmp/jenkins3432493527905796979.sh
+ set +x
The requested operation failed as no inventory is available.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Thu Mar 7 08:07:28 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 7 Mar 2019 08:07:28 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #295
In-Reply-To: <1948219012.3097.1551909936590.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1948219012.3097.1551909936590.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1164435283.3347.1551946048838.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 37.71 KB...]
Total 61 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : 
golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1786 0 --:--:-- --:--:-- --:--:-- 1789 100 8513k 100 8513k 0 0 13.8M 0 --:--:-- --:--:-- --:--:-- 13.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2235 0 --:--:-- --:--:-- --:--:-- 2239 59 38.3M 59 22.7M 0 0 35.0M 0 0:00:01 --:--:-- 0:00:01 35.0M100 38.3M 100 38.3M 0 0 47.0M 0 --:--:-- --:--:-- --:--:-- 93.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 575 0 --:--:-- --:--:-- --:--:-- 577 0 0 0 620 0 0 1635 0 --:--:-- --:--:-- --:--:-- 1635 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 14.6M 0 --:--:-- --:--:-- --:--:-- 34.1M ~/nightlyrpml5MnLt/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpml5MnLt/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz Created dist archive /root/nightlyrpml5MnLt/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz ~ ~/nightlyrpml5MnLt ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpml5MnLt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpml5MnLt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 26bd131b3c1d49c39185b74ce9864e79 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.g56HVF:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6555076031383097472.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 79ce666d
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 278     | n23.gusty | 172.19.2.151 | gusty   | 3283       | Deployed      | 79ce666d | None   | None | 7              | x86_64       | 1         | 2220         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Fri Mar 8 00:11:54 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 8 Mar 2019 00:11:54 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #296
In-Reply-To: <1164435283.3347.1551946048838.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1164435283.3347.1551946048838.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2089174866.3644.1552003914264.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 35.17 KB...]
mpfr x86_64 3.1.1-4.el7 base 203 k neon x86_64 0.30.0-3.el7 base 165 k nettle x86_64 2.7.1-8.el7 base 327 k pakchois x86_64 0.4-10.el7 base 14 k patch x86_64 2.7.1-10.el7_5 base 110 k perl-Thread-Queue noarch 3.02-2.el7 base 17 k perl-srpm-macros noarch 1-8.el7 base 4.6 k pigz x86_64 2.3.4-1.el7 epel 81 k python-babel noarch 0.9.6-8.el7 base 1.4 M python-backports x86_64 1.0-8.el7 base 5.8 k python-backports-ssl_match_hostname noarch 3.5.0.1-1.el7 base 13 k python-ipaddress noarch 1.0.16-2.el7 base 34 k python-jinja2 noarch 2.7.2-2.el7 base 515 k python-markupsafe x86_64 0.11-10.el7 base 25 k python-requests noarch 2.6.0-1.el7_1 base 94 k python-six noarch 1.9.0-2.el7 base 29 k python-urllib3 noarch 1.10.2-5.el7 base 102 k python2-distro noarch 1.2.0-1.el7 epel 29 k python2-pyroute2 noarch 0.4.13-1.el7 epel 345 k redhat-rpm-config noarch 9.1.0-87.el7.centos base 81 k subversion x86_64 1.7.14-14.el7 base 1.0 M subversion-libs x86_64 1.7.14-14.el7 base 922 k trousers x86_64 0.3.14-2.el7 base 289 k unzip x86_64 6.0-19.el7 base 170 k usermode x86_64 1.111-5.el7 base 193 k zip x86_64 3.0-11.el7 base 260 k Transaction Summary ================================================================================ Install 3 Packages (+46 Dependent packages) Total download size: 143 M Installed size: 421 M Downloading packages: warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.29-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY Public key for distribution-gpg-keys-1.29-1.el7.noarch.rpm is not installed -------------------------------------------------------------------------------- Total 96 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 
Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 
0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2327 0 --:--:-- --:--:-- --:--:-- 2326 100 8513k 100 8513k 0 0 16.4M 0 --:--:-- --:--:-- --:--:-- 16.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3560 0 --:--:-- --:--:-- --:--:-- 3582 63 38.3M 63 24.4M 0 0 45.1M 0 --:--:-- --:--:-- --:--:-- 45.1M100 38.3M 100 38.3M 0 0 56.4M 0 --:--:-- --:--:-- --:--:-- 100M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 915 0 --:--:-- --:--:-- --:--:-- 916 0 0 0 620 0 0 2663 0 --:--:-- --:--:-- --:--:-- 2663 100 10.7M 100 10.7M 0 0 19.5M 0 --:--:-- --:--:-- --:--:-- 19.5M ~/nightlyrpmoG92nq/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages grouped write of manifest, lock and vendor: error while writing out vendor tree: failed to write dep tree: failed to export golang.org/x/crypto: (1) failed to list versions for https://go.googlesource.com/crypto: fatal: remote error: Internal Server Error : exit status 128 make: *** [vendor-install] Error 1 Build step 'Execute shell' marked build as failure Performing Post build task... 
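The gluster_gd2-nightly-rpms #296 failure above is different from the earlier mock failures: dep aborted while writing out the vendor tree because go.googlesource.com answered "Internal Server Error" while listing versions for golang.org/x/crypto, so make stopped at the vendor-install target. A hedged sketch of checking that the remote is healthy again before re-triggering the job (the retry loop is illustrative, not part of the job):

  # Confirm the upstream host answers again; this is the same ref listing dep needs.
  git ls-remote https://go.googlesource.com/crypto HEAD

  # Re-run the make target named in the error, with a couple of spaced-out retries.
  for attempt in 1 2 3; do
      make vendor-install && break
      sleep 60
  done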
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2875053994013688814.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 0d1ef828
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 243     | n52.dusty | 172.19.2.116 | dusty   | 3289       | Deployed      | 0d1ef828 | None   | None | 7              | x86_64       | 1         | 2510         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Fri Mar 8 02:03:38 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 8 Mar 2019 02:03:38 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #100
In-Reply-To: <1613144379.3339.1551945840238.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1613144379.3339.1551945840238.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <363048249.3695.1552010618454.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 465.65 KB...]
TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Friday 08 March 2019 00:59:41 +0000 (0:00:01.620) 0:20:10.434 **********
ok: [kube1]
TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Friday 08 March 2019 00:59:41 +0000 (0:00:00.318) 0:20:10.752 **********
ok: [kube1]
TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Friday 08 March 2019 00:59:43 +0000 (0:00:01.623) 0:20:12.376 **********
ok: [kube1]
TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Friday 08 March 2019 00:59:43 +0000 (0:00:00.327) 0:20:12.703 **********
ok: [kube1]
TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Friday 08 March 2019 00:59:45 +0000 (0:00:01.592) 0:20:14.296 **********
changed: [kube1]
TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Friday 08 March 2019 00:59:46 +0000 (0:00:01.322) 0:20:15.618 **********
ok: [kube1]
TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Friday 08 March 2019 00:59:47 +0000 (0:00:00.374) 0:20:15.993 **********
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Friday 08 March 2019 01:00:37 +0000 (0:00:50.752) 0:21:06.746 ********** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Friday 08 March 2019 01:00:38 +0000 (0:00:00.359) 0:21:07.105 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] ***************** Friday 08 March 2019 01:00:38 +0000 (0:00:00.462) 0:21:07.568 ********** FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left).
[... identical retry messages repeat, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.33.134:24007/v1/devices/23faeba8-9edf-420e-9b10-a8d663cc0f47"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[... identical retry messages repeat, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.33.134:24007/v1/devices/23faeba8-9edf-420e-9b10-a8d663cc0f47"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[... identical retry messages repeat, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.33.134:24007/v1/devices/23faeba8-9edf-420e-9b10-a8d663cc0f47"}
to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=426 changed=119 unreachable=0 failed=1
kube2 : ok=320 changed=93 unreachable=0 failed=0
kube3 : ok=283 changed=78 unreachable=0 failed=0

Friday 08 March 2019 02:03:37 +0000 (1:02:59.201) 1:24:06.769 **********
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 3779.20s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.14s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 50.75s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.30s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.05s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.21s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.05s
download : container_download | download images for kubeadm config images -- 32.94s
Install packages ------------------------------------------------------- 32.36s
Wait for host to be available ------------------------------------------ 20.64s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.51s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.72s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.41s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.61s
gather facts from all instances ---------------------------------------- 12.87s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.61s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 12.33s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.27s etcd : reload etcd ----------------------------------------------------- 11.91s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.47s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 9 00:16:10 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 9 Mar 2019 00:16:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #297 In-Reply-To: <2089174866.3644.1552003914264.JavaMail.jenkins@jenkins.ci.centos.org> References: <2089174866.3644.1552003914264.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <716231125.3984.1552090570904.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.71 KB...] Total 57 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : 
kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency 
Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1967 0 --:--:-- --:--:-- --:--:-- 1970 100 8513k 100 8513k 0 0 16.4M 0 --:--:-- --:--:-- --:--:-- 16.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2137 0 --:--:-- --:--:-- --:--:-- 2139 31 38.3M 31 11.9M 0 0 22.3M 0 0:00:01 --:--:-- 0:00:01 22.3M100 38.3M 100 38.3M 0 0 48.7M 0 --:--:-- --:--:-- --:--:-- 104M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 549 0 --:--:-- --:--:-- --:--:-- 550 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1613 0 --:--:-- --:--:-- --:--:-- 605k 100 10.7M 100 10.7M 0 0 16.5M 0 --:--:-- --:--:-- --:--:-- 16.5M ~/nightlyrpmPUTJvt/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmPUTJvt/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz Created dist archive /root/nightlyrpmPUTJvt/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz ~ ~/nightlyrpmPUTJvt ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmPUTJvt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmPUTJvt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 065f37dcc91f4a46b602c510609fc8db -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.U12XVK:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
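The failing step above is mock running rpmbuild for the nightly glusterd2 SRPM inside an epel-7-x86_64 chroot. A minimal sketch of reproducing the same build by hand, assuming shell access to a CentOS 7 host with mock installed and the SRPM path from the log still present (the --resultdir value below is only an example, not from the job):

# Rebuild the nightly SRPM in the same mock configuration the job used.
mock -r epel-7-x86_64 \
    --rebuild /root/nightlyrpmPUTJvt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm \
    --resultdir /tmp/glusterd2-rebuild    # logs and any built RPMs land here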
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1190258235238293137.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 7db57ac0 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 143 | n16.crusty | 172.19.2.16 | crusty | 3296 | Deployed | 7db57ac0 | None | None | 7 | x86_64 | 1 | 2150 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 9 02:24:00 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 9 Mar 2019 02:24:00 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #101 In-Reply-To: <363048249.3695.1552010618454.JavaMail.jenkins@jenkins.ci.centos.org> References: <363048249.3695.1552010618454.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <256537735.3996.1552098240248.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 465.44 KB...] ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 09 March 2019 00:59:28 +0000 (0:00:01.612) 0:20:27.848 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Saturday 09 March 2019 00:59:29 +0000 (0:00:00.336) 0:20:28.184 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 09 March 2019 00:59:31 +0000 (0:00:02.710) 0:20:30.895 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Saturday 09 March 2019 00:59:32 +0000 (0:00:00.340) 0:20:31.235 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Saturday 09 March 2019 00:59:34 +0000 (0:00:02.070) 0:20:33.305 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Saturday 09 March 2019 00:59:35 +0000 (0:00:01.316) 0:20:34.622 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Saturday 09 March 2019 00:59:35 +0000 (0:00:00.340) 0:20:34.962 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). 
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Saturday 09 March 2019 01:00:02 +0000 (0:00:26.941) 0:21:01.903 ********
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Saturday 09 March 2019 01:00:03 +0000 (0:00:00.241) 0:21:02.145 ********
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] *****************
Saturday 09 March 2019 01:00:03 +0000 (0:00:00.358) 0:21:02.504 ********
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[... identical retry messages repeat, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.2.178:24007/v1/devices/157c9b00-fef1-4da6-a2c9-21356bd1c35d"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[... identical retry messages repeat, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.2.178:24007/v1/devices/157c9b00-fef1-4da6-a2c9-21356bd1c35d"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[... identical retry messages repeat, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.2.178:24007/v1/devices/157c9b00-fef1-4da6-a2c9-21356bd1c35d"}
to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=426 changed=119 unreachable=0 failed=1
kube2 : ok=320 changed=93 unreachable=0 failed=0
kube3 : ok=283 changed=78 unreachable=0 failed=0

Saturday 09 March 2019 02:23:59 +0000 (1:23:56.317) 1:44:58.822 ********
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube2 -------------- 5036.32s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.51s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.92s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.84s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.58s
Install packages ------------------------------------------------------- 33.99s
download : container_download | download images for kubeadm config images -- 33.47s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.37s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 26.94s
Wait for host to be available ------------------------------------------ 20.85s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.85s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.63s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.69s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.58s
gather facts from all instances ---------------------------------------- 13.21s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local
Volume Provisioner | Apply manifests -- 12.81s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.64s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.20s etcd : reload etcd ----------------------------------------------------- 11.93s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.71s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 10 00:17:56 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 10 Mar 2019 00:17:56 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #298 In-Reply-To: <716231125.3984.1552090570904.JavaMail.jenkins@jenkins.ci.centos.org> References: <716231125.3984.1552090570904.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1469433044.4190.1552177076433.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.35 KB...] Total 51 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 
Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 
Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1956 0 --:--:-- --:--:-- --:--:-- 1964 100 8513k 100 8513k 0 0 14.5M 0 --:--:-- --:--:-- --:--:-- 14.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2170 0 --:--:-- --:--:-- --:--:-- 2169 100 38.3M 100 38.3M 0 0 46.6M 0 --:--:-- --:--:-- --:--:-- 46.6M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 535 0 --:--:-- --:--:-- --:--:-- 536 0 0 0 620 0 0 1580 0 --:--:-- --:--:-- --:--:-- 1580 100 10.7M 100 10.7M 0 0 14.7M 0 --:--:-- --:--:-- --:--:-- 14.7M ~/nightlyrpm5bcmac/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm5bcmac/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz Created dist archive /root/nightlyrpm5bcmac/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz ~ ~/nightlyrpm5bcmac ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm5bcmac/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm5bcmac/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 28668bd9498c4faa92ea831ef8081703 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.8Bw5UT:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
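As with the previous nightly, the rpmbuild error itself is not quoted in this mail; mock only reports that the build phase failed and points at the result directory. A short sketch for digging out the real failure on the builder, assuming the result directory from the log is still in place and that mock left its usual build.log/root.log/state.log files there:

# Inspect the mock logs in the result directory named above.
cd /srv/glusterd2/nightly/master/7/x86_64
tail -n 60 build.log                     # rpmbuild output, usually holds the actual error
grep -i -n 'error' root.log state.log    # chroot setup and state transitions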
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3957702625638121809.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done e12f060f +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 74 | n10.pufty | 172.19.3.74 | pufty | 3302 | Deployed | e12f060f | None | None | 7 | x86_64 | 1 | 2090 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 10 00:56:14 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 10 Mar 2019 00:56:14 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #102 In-Reply-To: <256537735.3996.1552098240248.JavaMail.jenkins@jenkins.ci.centos.org> References: <256537735.3996.1552098240248.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <269308554.4194.1552179374637.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Mon Mar 11 00:14:06 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 11 Mar 2019 00:14:06 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #299 In-Reply-To: <1469433044.4190.1552177076433.JavaMail.jenkins@jenkins.ci.centos.org> References: <1469433044.4190.1552177076433.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1900760424.4310.1552263246873.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] 
Total 93 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : 
golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 3024 0 --:--:-- --:--:-- --:--:-- 3040 0 8513k 0 33334 0 0 97k 0 0:01:27 --:--:-- 0:01:27 97k100 8513k 100 8513k 0 0 18.7M 0 --:--:-- --:--:-- --:--:-- 75.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2903 0 --:--:-- --:--:-- --:--:-- 2916 100 38.3M 100 38.3M 0 0 55.0M 0 --:--:-- --:--:-- --:--:-- 55.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 986 0 --:--:-- --:--:-- --:--:-- 993 0 0 0 620 0 0 2501 0 --:--:-- --:--:-- --:--:-- 2501 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 20.1M 0 --:--:-- --:--:-- --:--:-- 80.0M ~/nightlyrpm15q2lc/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm15q2lc/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz Created dist archive /root/nightlyrpm15q2lc/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz ~ ~/nightlyrpm15q2lc ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm15q2lc/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm15q2lc/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 518a5a2893d543558e81d5900acd6bae -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.JQBm_e:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
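
For reference, the chroot build failure above can usually be reproduced outside Jenkins with the mock CLI (a sketch, not the job's exact invocation; it assumes a builder host with mock installed and reuses the SRPM path shown in the log):

    # Rebuild the nightly SRPM in the same EPEL 7 chroot the job uses.
    mock -r epel-7-x86_64 --rebuild \
        /root/nightlyrpm15q2lc/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm
    # mock writes build.log and root.log into its result directory; the
    # actual rpmbuild error is near the end of build.log.
    less /var/lib/mock/epel-7-x86_64/result/build.log
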
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8661670485035795369.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done ab19a0da +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 223 | n32.dusty | 172.19.2.96 | dusty | 3308 | Deployed | ab19a0da | None | None | 7 | x86_64 | 1 | 2310 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon Mar 11 00:55:29 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 11 Mar 2019 00:55:29 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #103 Message-ID: <1985610517.4315.1552265729314.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.27 KB...] changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Monday 11 March 2019 00:45:11 +0000 (0:00:11.895) 0:10:30.443 ********** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Monday 11 March 2019 00:45:11 +0000 (0:00:00.089) 0:10:30.533 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Monday 11 March 2019 00:45:11 +0000 (0:00:00.149) 0:10:30.683 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Monday 11 March 2019 00:45:12 +0000 
(0:00:00.711) 0:10:31.395 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Monday 11 March 2019 00:45:12 +0000 (0:00:00.135) 0:10:31.530 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Monday 11 March 2019 00:45:13 +0000 (0:00:00.726) 0:10:32.256 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Monday 11 March 2019 00:45:13 +0000 (0:00:00.155) 0:10:32.412 ********** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Monday 11 March 2019 00:45:14 +0000 (0:00:00.715) 0:10:33.127 ********** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Monday 11 March 2019 00:45:14 +0000 (0:00:00.664) 0:10:33.792 ********** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Monday 11 March 2019 00:45:15 +0000 (0:00:00.690) 0:10:34.482 ********** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Monday 11 March 2019 00:45:26 +0000 (0:00:10.880) 0:10:45.363 ********** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Monday 11 March 2019 00:45:27 +0000 (0:00:00.711) 0:10:46.074 ********** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Monday 11 March 2019 00:45:27 +0000 (0:00:00.467) 0:10:46.542 ********** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Monday 11 March 2019 00:45:28 +0000 (0:00:00.465) 0:10:47.007 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Monday 11 March 2019 00:45:28 +0000 (0:00:00.708) 0:10:47.715 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Monday 11 March 2019 00:45:29 +0000 (0:00:00.867) 0:10:48.583 ********** FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Monday 11 March 2019 00:45:36 +0000 (0:00:07.354) 0:10:55.938 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Monday 11 March 2019 00:45:37 +0000 (0:00:00.147) 0:10:56.085 ********** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Monday 11 March 2019 00:46:32 +0000 (0:00:55.101) 0:11:51.186 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Monday 11 March 2019 00:46:32 +0000 (0:00:00.752) 0:11:51.938 ********** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 11 March 2019 00:46:33 +0000 (0:00:00.115) 0:11:52.054 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Monday 11 March 2019 00:46:33 +0000 (0:00:00.139) 0:11:52.194 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 11 March 2019 00:46:33 +0000 (0:00:00.686) 0:11:52.880 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Monday 11 March 2019 00:46:34 +0000 (0:00:00.195) 0:11:53.076 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 11 March 2019 00:46:35 +0000 (0:00:00.975) 0:11:54.052 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Monday 11 March 2019 00:46:35 +0000 (0:00:00.146) 0:11:54.198 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Monday 11 March 2019 00:46:35 +0000 (0:00:00.716) 0:11:54.914 ********** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Monday 11 March 2019 00:46:36 +0000 (0:00:00.509) 0:11:55.424 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Monday 11 March 2019 00:46:36 +0000 (0:00:00.167) 0:11:55.591 ********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.44.66:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Monday 11 March 2019 00:55:29 +0000 (0:08:52.491) 0:20:48.082 ********** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 532.49s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 55.10s download : container_download | download images for kubeadm config images -- 36.94s kubernetes/master : kubeadm | Initialize first master ------------------ 28.76s kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.11s Install packages ------------------------------------------------------- 24.70s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 20.32s Extend root VG --------------------------------------------------------- 17.74s Wait for host to be available ------------------------------------------ 16.47s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.90s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.75s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.90s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.10s etcd : reload etcd ----------------------------------------------------- 10.98s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.88s container-engine/docker : Docker | pause while Docker restarts --------- 10.24s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests --- 9.40s gather facts from all instances ----------------------------------------- 8.40s download : file_download | Download item -------------------------------- 8.32s etcd : wait for etcd up ------------------------------------------------- 8.12s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 12 00:16:17 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 12 Mar 2019 00:16:17 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #300 In-Reply-To: <1900760424.4310.1552263246873.JavaMail.jenkins@jenkins.ci.centos.org> References: <1900760424.4310.1552263246873.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1267540100.4458.1552349777622.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.71 KB...] Total 100 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 
46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 
python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2215 0 --:--:-- --:--:-- --:--:-- 2224 100 8513k 100 8513k 0 0 15.9M 0 --:--:-- --:--:-- --:--:-- 15.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2327 0 --:--:-- --:--:-- --:--:-- 2330 0 38.3M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 38.3M 100 38.3M 0 0 42.2M 0 --:--:-- --:--:-- --:--:-- 69.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 971 0 --:--:-- --:--:-- --:--:-- 974 0 0 0 620 0 0 2178 0 --:--:-- --:--:-- --:--:-- 2178 91 10.7M 91 9.8M 0 0 17.1M 0 --:--:-- --:--:-- --:--:-- 17.1M100 10.7M 100 10.7M 0 0 18.1M 0 --:--:-- --:--:-- --:--:-- 62.7M ~/nightlyrpmDNtbwL/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmDNtbwL/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz Created dist archive /root/nightlyrpmDNtbwL/glusterd2-v6.0-dev.146.git73f5bbd-vendor.tar.xz ~ ~/nightlyrpmDNtbwL ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmDNtbwL/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmDNtbwL/rpmbuild/SRPMS/glusterd2-5.0-0.dev.146.git73f5bbd.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 4fdb27bac13d403b8ade50b5a6c0cc91 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.AZTQak:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
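
The post-build task that follows releases the leased CI node with the cico client. A slightly more defensive sketch of that release loop (assumptions: /bin/sh, the cico CLI on PATH, and the same $WORKSPACE/cico-ssid convention the job already uses):

    #!/bin/sh
    # Release every node listed in the SSID file written when the node was
    # requested; do nothing if the file is missing or empty.
    SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
    if [ -s "${SSID_FILE}" ]; then
        while read -r ssid; do
            [ -n "$ssid" ] && cico -q node done "$ssid"
        done < "${SSID_FILE}"
    fi
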
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1713902151540722744.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 6790efe4 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 224 | n33.dusty | 172.19.2.97 | dusty | 3316 | Deployed | 6790efe4 | None | None | 7 | x86_64 | 1 | 2320 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 12 01:07:32 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 12 Mar 2019 01:07:32 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #104 In-Reply-To: <1985610517.4315.1552265729314.JavaMail.jenkins@jenkins.ci.centos.org> References: <1985610517.4315.1552265729314.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1207221594.4464.1552352852603.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.55 KB...] 
changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Tuesday 12 March 2019 00:55:48 +0000 (0:00:35.440) 0:18:10.026 ********* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Tuesday 12 March 2019 00:55:48 +0000 (0:00:00.226) 0:18:10.253 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Tuesday 12 March 2019 00:55:48 +0000 (0:00:00.359) 0:18:10.612 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Tuesday 12 March 2019 00:55:50 +0000 (0:00:02.131) 0:18:12.744 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Tuesday 12 March 2019 00:55:51 +0000 (0:00:00.421) 0:18:13.165 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Tuesday 12 March 2019 00:55:53 +0000 (0:00:01.979) 0:18:15.145 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Tuesday 12 March 2019 00:55:53 +0000 (0:00:00.439) 0:18:15.585 ********* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Tuesday 12 March 2019 00:55:55 +0000 (0:00:02.086) 0:18:17.672 ********* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Tuesday 12 March 2019 00:55:57 +0000 (0:00:01.510) 0:18:19.182 ********* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Tuesday 12 March 2019 00:55:58 +0000 (0:00:01.600) 0:18:20.783 ********* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Tuesday 12 March 2019 00:56:11 +0000 (0:00:12.275) 0:18:33.058 ********* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Tuesday 12 March 2019 00:56:12 +0000 (0:00:01.743) 0:18:34.802 ********* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Tuesday 12 March 2019 00:56:14 +0000 (0:00:01.241) 0:18:36.043 ********* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Tuesday 12 March 2019 00:56:15 +0000 (0:00:01.300) 0:18:37.344 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Tuesday 12 March 2019 00:56:17 +0000 (0:00:01.619) 0:18:38.964 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Tuesday 12 March 2019 00:56:18 +0000 (0:00:01.795) 0:18:40.760 ********* changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Tuesday 12 March 2019 00:56:20 +0000 (0:00:01.074) 0:18:41.835 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Tuesday 12 March 2019 00:56:20 +0000 (0:00:00.400) 0:18:42.235 ********* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Tuesday 12 March 2019 00:57:44 +0000 (0:01:24.433) 0:20:06.669 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Tuesday 12 March 2019 00:57:46 +0000 (0:00:01.662) 0:20:08.331 ********* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 12 March 2019 00:57:46 +0000 (0:00:00.211) 0:20:08.543 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Tuesday 12 March 2019 00:57:47 +0000 (0:00:00.329) 0:20:08.872 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 12 March 2019 00:57:48 +0000 (0:00:01.613) 0:20:10.485 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Tuesday 12 March 2019 00:57:49 +0000 (0:00:00.338) 0:20:10.824 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 12 March 2019 00:57:50 +0000 (0:00:01.578) 0:20:12.402 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Tuesday 12 March 2019 00:57:50 +0000 (0:00:00.368) 0:20:12.771 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Tuesday 12 March 2019 00:57:52 +0000 (0:00:01.510) 0:20:14.281 ********* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Tuesday 12 March 2019 00:57:54 +0000 (0:00:01.729) 0:20:16.011 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Tuesday 12 March 2019 00:57:54 +0000 (0:00:00.344) 0:20:16.355 ********* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.27.60:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Tuesday 12 March 2019 01:07:32 +0000 (0:09:37.580) 0:29:53.936 ********* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.58s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.43s kubernetes/master : kubeadm | Initialize first master ------------------ 39.59s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.22s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.44s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.69s download : container_download | download images for kubeadm config images -- 33.00s Install packages ------------------------------------------------------- 29.83s Wait for host to be available ------------------------------------------ 20.57s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.32s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 20.18s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.13s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.57s gather facts from all instances ---------------------------------------- 13.03s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.87s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.28s etcd : reload etcd ----------------------------------------------------- 11.90s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.32s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.87s container-engine/docker : Docker | pause while Docker restarts --------- 10.41s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 13 00:20:06 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 13 Mar 2019 00:20:06 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #301 In-Reply-To: <1267540100.4458.1552349777622.JavaMail.jenkins@jenkins.ci.centos.org> References: <1267540100.4458.1552349777622.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <6034301.4596.1552436406523.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.74 KB...] Total 90 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 
Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 
python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2759 0 --:--:-- --:--:-- --:--:-- 2775 100 8513k 100 8513k 0 0 16.5M 0 --:--:-- --:--:-- --:--:-- 16.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3454 0 --:--:-- --:--:-- --:--:-- 3464 28 38.3M 28 10.8M 0 0 24.3M 0 0:00:01 --:--:-- 0:00:01 24.3M100 38.3M 100 38.3M 0 0 49.1M 0 --:--:-- --:--:-- --:--:-- 82.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 919 0 --:--:-- --:--:-- --:--:-- 921 0 0 0 620 0 0 2265 0 --:--:-- --:--:-- --:--:-- 2265 100 10.7M 100 10.7M 0 0 18.8M 0 --:--:-- --:--:-- --:--:-- 18.8M ~/nightlyrpmGXTZ2I/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmGXTZ2I/glusterd2-v6.0-dev.148.git82e1c18-vendor.tar.xz Created dist archive /root/nightlyrpmGXTZ2I/glusterd2-v6.0-dev.148.git82e1c18-vendor.tar.xz ~ ~/nightlyrpmGXTZ2I ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
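
Before mock takes over, the nightly job's prep stage (logged just above) downloads pinned tool releases -- dep v0.5.0, gometalinter 2.0.5 and etcd v3.3.9 -- vendors the Go dependencies, and packs them into the vendor tarball that the SRPM build later consumes. The helper script itself is not part of this log, so the following is only a rough sketch of that vendoring/archive step: the paths come from the log above, while the dep invocation is an assumption.

    # Sketch only: approximates the "Installing vendored packages" /
    # "Creating dist archive" step seen above; the real job script may differ.
    cd /root/nightlyrpmGXTZ2I/go/src/github.com/gluster/glusterd2
    dep ensure -vendor-only        # assumed: populate vendor/ from the lock file
    tar -cJf /root/nightlyrpmGXTZ2I/glusterd2-v6.0-dev.148.git82e1c18-vendor.tar.xz vendor/
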
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmGXTZ2I/rpmbuild/SRPMS/glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmGXTZ2I/rpmbuild/SRPMS/glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 31 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 1042d33665eb4c2eb8fc40623d7e2860 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.rCAux2:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
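
The failure above happens inside mock's epel-7-x86_64 chroot: mock wraps rpmbuild in the systemd-nspawn command shown in the error and only reports that the build phase raised an exception. The actual compile or packaging error is in mock's build.log for this run (copied to /srv/glusterd2/nightly/master/7/x86_64 according to the log). With access to the same SRPM, the step can be replayed locally along these lines:

    # Rebuild the SRPM from this run in the same mock configuration
    # (path taken from the log above; adjust to wherever the SRPM is kept).
    mock -r epel-7-x86_64 --rebuild \
        /root/nightlyrpmGXTZ2I/rpmbuild/SRPMS/glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm

    # mock leaves root.log and build.log in its result directory; the build
    # log carries the real rpmbuild error that this Jenkins output omits.
    less /var/lib/mock/epel-7-x86_64/result/build.log
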
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5910808361793516310.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done c46dcc85 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 222 | n31.dusty | 172.19.2.95 | dusty | 3325 | Deployed | c46dcc85 | None | None | 7 | x86_64 | 1 | 2300 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 13 01:07:11 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 13 Mar 2019 01:07:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #105 In-Reply-To: <1207221594.4464.1552352852603.JavaMail.jenkins@jenkins.ci.centos.org> References: <1207221594.4464.1552352852603.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <15657915.4599.1552439231526.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.44 KB...] 
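
The post-build task for gluster_gd2-nightly-rpms #301 above hands the ephemeral builder back to the CentOS CI (Duffy) pool: it reads the session ID stored in $WORKSPACE/cico-ssid when the node was requested and releases every host tied to it, which is what the table in the trace confirms. Stripped of the loop, the call it makes for that run is simply:

    # The exact release call from the trace above; c46dcc85 is the Duffy
    # session ID (SSID) recorded when the node was checked out.
    cico -q node done c46dcc85
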
changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 13 March 2019 00:55:36 +0000 (0:00:34.899) 0:18:03.314 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 13 March 2019 00:55:36 +0000 (0:00:00.236) 0:18:03.551 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 13 March 2019 00:55:37 +0000 (0:00:00.515) 0:18:04.066 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 13 March 2019 00:55:39 +0000 (0:00:02.215) 0:18:06.282 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 13 March 2019 00:55:39 +0000 (0:00:00.547) 0:18:06.830 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 13 March 2019 00:55:42 +0000 (0:00:02.117) 0:18:08.947 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 13 March 2019 00:55:42 +0000 (0:00:00.531) 0:18:09.479 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 13 March 2019 00:55:44 +0000 (0:00:02.181) 0:18:11.660 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 13 March 2019 00:55:46 +0000 (0:00:01.671) 0:18:13.332 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 13 March 2019 00:55:48 +0000 (0:00:01.836) 0:18:15.169 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Wednesday 13 March 2019 00:56:00 +0000 (0:00:12.273) 0:18:27.442 ******* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Wednesday 13 March 2019 00:56:02 +0000 (0:00:01.806) 0:18:29.248 ******* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Wednesday 13 March 2019 00:56:04 +0000 (0:00:02.366) 0:18:31.615 ******* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Wednesday 13 March 2019 00:56:06 +0000 (0:00:01.437) 0:18:33.053 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Wednesday 13 March 2019 00:56:08 +0000 (0:00:01.800) 0:18:34.853 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Wednesday 13 March 2019 00:56:09 +0000 (0:00:01.880) 0:18:36.734 ******* changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Wednesday 13 March 2019 00:56:11 +0000 (0:00:01.230) 0:18:37.965 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Wednesday 13 March 2019 00:56:11 +0000 (0:00:00.432) 0:18:38.397 ******* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Wednesday 13 March 2019 00:57:23 +0000 (0:01:12.433) 0:19:50.830 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Wednesday 13 March 2019 00:57:25 +0000 (0:00:01.741) 0:19:52.572 ******* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 13 March 2019 00:57:25 +0000 (0:00:00.200) 0:19:52.773 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Wednesday 13 March 2019 00:57:26 +0000 (0:00:00.462) 0:19:53.235 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 13 March 2019 00:57:28 +0000 (0:00:01.747) 0:19:54.982 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 13 March 2019 00:57:28 +0000 (0:00:00.305) 0:19:55.288 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 13 March 2019 00:57:30 +0000 (0:00:01.692) 0:19:56.980 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 13 March 2019 00:57:30 +0000 (0:00:00.317) 0:19:57.297 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 13 March 2019 00:57:32 +0000 (0:00:01.542) 0:19:58.840 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 13 March 2019 00:57:33 +0000 (0:00:01.135) 0:19:59.975 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 13 March 2019 00:57:33 +0000 (0:00:00.304) 0:20:00.279 ******* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.8.36:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 13 March 2019 01:07:11 +0000 (0:09:37.593) 0:29:37.873 ******* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.59s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 72.43s kubernetes/master : kubeadm | Initialize first master ------------------ 40.84s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.25s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 34.90s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.70s download : container_download | download images for kubeadm config images -- 32.51s Install packages ------------------------------------------------------- 31.74s Wait for host to be available ------------------------------------------ 20.70s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.52s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.27s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.80s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.12s gather facts from all instances ---------------------------------------- 12.94s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.32s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.27s etcd : reload etcd ----------------------------------------------------- 12.14s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.16s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.15s container-engine/docker : Docker | pause while Docker restarts --------- 10.39s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
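
The fatal task above is a readiness poll: the deploy playbook keeps requesting glusterd2's /v1/peers endpoint (the URL in the failure JSON) and only proceeds once it gets an HTTP 200; after 50 failed attempts it gives up, which is what ultimately surfaces as the Vagrant error on the 'kube3' machine at the end of the run. A rough shell equivalent of that probe, with the endpoint taken from the message and the pause between attempts assumed at 10 seconds, looks like this:

    # Shell approximation of the Ansible wait task above: poll the glusterd2
    # REST endpoint until it answers, at most 50 times. The 10-second pause
    # is an assumption; the playbook's own delay is not shown in this log.
    i=0
    until curl -sf -o /dev/null http://10.233.8.36:24007/v1/peers
    do
        i=$((i + 1))
        [ "$i" -ge 50 ] && { echo "glusterd2 never became ready" >&2; exit 1; }
        sleep 10
    done
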
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 14 00:13:51 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 14 Mar 2019 00:13:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #302 In-Reply-To: <6034301.4596.1552436406523.JavaMail.jenkins@jenkins.ci.centos.org> References: <6034301.4596.1552436406523.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1428239904.4786.1552522431890.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.35 KB...] Total 99 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 
Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 
python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2631 0 --:--:-- --:--:-- --:--:-- 2641 0 8513k 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 17.1M 0 --:--:-- --:--:-- --:--:-- 58.1M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3770 0 --:--:-- --:--:-- --:--:-- 3777 82 38.3M 82 31.7M 0 0 37.5M 0 0:00:01 --:--:-- 0:00:01 37.5M100 38.3M 100 38.3M 0 0 41.1M 0 --:--:-- --:--:-- --:--:-- 74.9M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 1059 0 --:--:-- --:--:-- --:--:-- 1055 0 0 0 620 0 0 2452 0 --:--:-- --:--:-- --:--:-- 2452 100 10.7M 100 10.7M 0 0 11.3M 0 --:--:-- --:--:-- --:--:-- 11.3M ~/nightlyrpm0jQgEG/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm0jQgEG/glusterd2-v6.0-dev.148.git82e1c18-vendor.tar.xz Created dist archive /root/nightlyrpm0jQgEG/glusterd2-v6.0-dev.148.git82e1c18-vendor.tar.xz ~ ~/nightlyrpm0jQgEG ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm0jQgEG/rpmbuild/SRPMS/glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm0jQgEG/rpmbuild/SRPMS/glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 31 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M c8c389a410434aa0ba67927474356d0c -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ShYt8j:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8217799540243488823.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 6eb91752 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 235 | n44.dusty | 172.19.2.108 | dusty | 3328 | Deployed | 6eb91752 | None | None | 7 | x86_64 | 1 | 2430 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 14 01:07:39 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 14 Mar 2019 01:07:39 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #106 In-Reply-To: <15657915.4599.1552439231526.JavaMail.jenkins@jenkins.ci.centos.org> References: <15657915.4599.1552439231526.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <25006213.4793.1552525659310.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.58 KB...] 
changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Thursday 14 March 2019 00:55:37 +0000 (0:00:35.309) 0:18:16.817 ******** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Thursday 14 March 2019 00:55:37 +0000 (0:00:00.313) 0:18:17.130 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Thursday 14 March 2019 00:55:38 +0000 (0:00:00.448) 0:18:17.578 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Thursday 14 March 2019 00:55:40 +0000 (0:00:02.059) 0:18:19.638 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Thursday 14 March 2019 00:55:40 +0000 (0:00:00.439) 0:18:20.077 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Thursday 14 March 2019 00:55:42 +0000 (0:00:02.136) 0:18:22.214 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Thursday 14 March 2019 00:55:43 +0000 (0:00:00.420) 0:18:22.634 ******** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Thursday 14 March 2019 00:55:45 +0000 (0:00:01.980) 0:18:24.615 ******** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Thursday 14 March 2019 00:55:46 +0000 (0:00:01.436) 0:18:26.052 ******** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Thursday 14 March 2019 00:55:48 +0000 (0:00:01.617) 0:18:27.669 ******** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Thursday 14 March 2019 00:56:00 +0000 (0:00:12.100) 0:18:39.770 ******** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Thursday 14 March 2019 00:56:01 +0000 (0:00:01.613) 0:18:41.383 ******** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Thursday 14 March 2019 00:56:03 +0000 (0:00:01.286) 0:18:42.670 ******** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Thursday 14 March 2019 00:56:04 +0000 (0:00:01.200) 0:18:43.870 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Thursday 14 March 2019 00:56:06 +0000 (0:00:01.692) 0:18:45.563 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Thursday 14 March 2019 00:56:08 +0000 (0:00:01.905) 0:18:47.468 ******** FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Thursday 14 March 2019 00:56:15 +0000 (0:00:07.375) 0:18:54.844 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Thursday 14 March 2019 00:56:15 +0000 (0:00:00.351) 0:18:55.195 ******** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Thursday 14 March 2019 00:57:51 +0000 (0:01:35.554) 0:20:30.750 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Thursday 14 March 2019 00:57:53 +0000 (0:00:01.894) 0:20:32.645 ******** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 14 March 2019 00:57:53 +0000 (0:00:00.207) 0:20:32.853 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Thursday 14 March 2019 00:57:53 +0000 (0:00:00.397) 0:20:33.251 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 14 March 2019 00:57:55 +0000 (0:00:01.590) 0:20:34.842 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Thursday 14 March 2019 00:57:55 +0000 (0:00:00.316) 0:20:35.158 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 14 March 2019 00:57:57 +0000 (0:00:01.782) 0:20:36.941 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Thursday 14 March 2019 00:57:57 +0000 (0:00:00.370) 0:20:37.311 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Thursday 14 March 2019 00:57:59 +0000 (0:00:01.443) 0:20:38.755 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Thursday 14 March 2019 00:58:00 +0000 (0:00:01.413) 0:20:40.168 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Thursday 14 March 2019 00:58:01 +0000 (0:00:00.323) 0:20:40.492 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.16.154:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Thursday 14 March 2019 01:07:38 +0000 (0:09:37.667) 0:30:18.159 ******** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.67s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 95.55s download : container_download | download images for kubeadm config images -- 39.38s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.65s kubernetes/master : kubeadm | Initialize first master ------------------ 38.39s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.31s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.19s Install packages ------------------------------------------------------- 31.10s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.75s Wait for host to be available ------------------------------------------ 20.73s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 20.43s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.67s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.37s gather facts from all instances ---------------------------------------- 13.26s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.61s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.10s etcd : reload etcd ----------------------------------------------------- 11.90s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.42s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.93s download : file_download | Download item ------------------------------- 10.60s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 15 00:16:01 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 15 Mar 2019 00:16:01 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #303 In-Reply-To: <1428239904.4786.1552522431890.JavaMail.jenkins@jenkins.ci.centos.org> References: <1428239904.4786.1552522431890.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <475517564.4920.1552608961600.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.38 KB...] Total 61 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 
Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 
python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1680 0 --:--:-- --:--:-- --:--:-- 1685 100 8513k 100 8513k 0 0 13.5M 0 --:--:-- --:--:-- --:--:-- 13.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2307 0 --:--:-- --:--:-- --:--:-- 2305 100 38.3M 100 38.3M 0 0 38.4M 0 --:--:-- --:--:-- --:--:-- 38.4M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 467 0 --:--:-- --:--:-- --:--:-- 467 0 0 0 620 0 0 1358 0 --:--:-- --:--:-- --:--:-- 1358 42 10.7M 42 4698k 0 0 6872k 0 0:00:01 --:--:-- 0:00:01 6872k100 10.7M 100 10.7M 0 0 14.5M 0 --:--:-- --:--:-- --:--:-- 118M ~/nightlyrpmh2TeWC/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmh2TeWC/glusterd2-v6.0-dev.148.git82e1c18-vendor.tar.xz Created dist archive /root/nightlyrpmh2TeWC/glusterd2-v6.0-dev.148.git82e1c18-vendor.tar.xz ~ ~/nightlyrpmh2TeWC ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmh2TeWC/rpmbuild/SRPMS/glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmh2TeWC/rpmbuild/SRPMS/glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 4728b55f232d46349bb3b3165c0df614 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.RKtHse:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
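The mock run above gets through chroot init and "Start: rpmbuild" but dies inside the rpmbuild step: the quoted systemd-nspawn command is simply mock invoking /usr/bin/rpmbuild -bb --target x86_64 --nodeps on glusterd2.spec inside the epel-7-x86_64 chroot, and the compiler output that explains the failure lands in build.log under the reported results directory (/srv/glusterd2/nightly/master/7/x86_64). A minimal sketch of reproducing that step outside Jenkins, assuming mock and the epel-7-x86_64 config are installed and the SRPM path is replaced with a locally built one:

    # Rebuild the same SRPM in the same chroot config and keep the logs locally.
    SRPM=glusterd2-5.0-0.dev.148.git82e1c18.el7.src.rpm   # name from the log; substitute your own build
    RESULTDIR=/tmp/gd2-mock-results
    mock -r epel-7-x86_64 --resultdir "$RESULTDIR" --rebuild "$SRPM"
    # build.log (next to root.log and state.log) holds the actual rpmbuild error.
    tail -n 100 "$RESULTDIR/build.log"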
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4994790579617327301.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 9d26a0ff +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 141 | n14.crusty | 172.19.2.14 | crusty | 3344 | Deployed | 9d26a0ff | None | None | 7 | x86_64 | 1 | 2130 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 15 01:03:36 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 15 Mar 2019 01:03:36 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #107 In-Reply-To: <25006213.4793.1552525659310.JavaMail.jenkins@jenkins.ci.centos.org> References: <25006213.4793.1552525659310.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1130255239.4927.1552611816956.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Sat Mar 16 00:16:01 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 16 Mar 2019 00:16:01 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #304 In-Reply-To: <475517564.4920.1552608961600.JavaMail.jenkins@jenkins.ci.centos.org> References: <475517564.4920.1552608961600.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1200839719.5027.1552695362108.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.34 KB...] 
Total 64 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : 
golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2148 0 --:--:-- --:--:-- --:--:-- 2160 0 8513k 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 13.8M 0 --:--:-- --:--:-- --:--:-- 50.3M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2244 0 --:--:-- --:--:-- --:--:-- 2247 49 38.3M 49 18.8M 0 0 23.1M 0 0:00:01 --:--:-- 0:00:01 23.1M100 38.3M 100 38.3M 0 0 28.0M 0 0:00:01 0:00:01 --:--:-- 35.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 572 0 --:--:-- --:--:-- --:--:-- 570 0 0 0 620 0 0 1731 0 --:--:-- --:--:-- --:--:-- 1731 100 10.7M 100 10.7M 0 0 14.6M 0 --:--:-- --:--:-- --:--:-- 14.6M ~/nightlyrpmG4HsUw/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmG4HsUw/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz Created dist archive /root/nightlyrpmG4HsUw/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz ~ ~/nightlyrpmG4HsUw ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmG4HsUw/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmG4HsUw/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 3e5d36e271f9482887d48dbaf5654d9e -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.oumdij:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
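The "Performing Post build task" step that follows is node cleanup rather than part of the build: it reads the session IDs recorded in $WORKSPACE/cico-ssid and returns each provisioned CI node to the pool with the cico client. The same loop that is echoed flattened in the log below, reflowed for readability:

    #!/bin/sh -xe
    # Release the CI nodes listed in the SSID file (defaults to $WORKSPACE/cico-ssid).
    SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
    for ssid in $(cat ${SSID_FILE})
    do
        cico -q node done $ssid
    done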
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins9179917273733405865.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done dd3e2265 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 159 | n32.crusty | 172.19.2.32 | crusty | 3354 | Deployed | dd3e2265 | None | None | 7 | x86_64 | 1 | 2310 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 17 00:16:02 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 17 Mar 2019 00:16:02 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #305 In-Reply-To: <1200839719.5027.1552695362108.JavaMail.jenkins@jenkins.ci.centos.org> References: <1200839719.5027.1552695362108.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1951315339.5130.1552781763002.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] 
Total 63 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : 
golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1860 0 --:--:-- --:--:-- --:--:-- 1867 100 8513k 100 8513k 0 0 14.2M 0 --:--:-- --:--:-- --:--:-- 14.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2172 0 --:--:-- --:--:-- --:--:-- 2177 1 38.3M 1 661k 0 0 1459k 0 0:00:26 --:--:-- 0:00:26 1459k 71 38.3M 71 27.4M 0 0 18.8M 0 0:00:02 0:00:01 0:00:01 26.8M100 38.3M 100 38.3M 0 0 19.0M 0 0:00:02 0:00:02 --:--:-- 24.1M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 540 0 --:--:-- --:--:-- --:--:-- 542 0 0 0 620 0 0 1698 0 --:--:-- --:--:-- --:--:-- 1698 100 10.7M 100 10.7M 0 0 16.1M 0 --:--:-- --:--:-- --:--:-- 16.1M ~/nightlyrpmJVwwIi/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmJVwwIi/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz Created dist archive /root/nightlyrpmJVwwIi/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz ~ ~/nightlyrpmJVwwIi ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmJVwwIi/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmJVwwIi/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 0890325329e44b0497ecd317b25373c5 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.zjjpzl:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
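Before handing off to mock, the job vendors the Go dependencies and packs them into the -vendor.tar.xz dist archive named in the log ("Creating dist archive .../glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz"), presumably so the rpmbuild inside the chroot needs no network access. A hypothetical sketch of that step, assuming dep-based vendoring in the glusterd2 checkout; $WORKDIR stands in for the per-run nightlyrpm* directory and is not a real path from the log:

    cd "$GOPATH/src/github.com/gluster/glusterd2"
    dep ensure -v                              # populate vendor/ from the lock file
    VERSION=v6.0-dev.149.git568322a            # version string taken from the log
    tar -cJf "$WORKDIR/glusterd2-$VERSION-vendor.tar.xz" vendor/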
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8427666869382673382.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 2bd8fec2 +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 135 | n8.crusty | 172.19.2.8 | crusty | 3362 | Deployed | 2bd8fec2 | None | None | 7 | x86_64 | 1 | 2070 | None | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon Mar 18 00:16:27 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 18 Mar 2019 00:16:27 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #306 In-Reply-To: <1951315339.5130.1552781763002.JavaMail.jenkins@jenkins.ci.centos.org> References: <1951315339.5130.1552781763002.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <458838644.5217.1552868187265.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.38 KB...] 
Total 52 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : 
golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1722 0 --:--:-- --:--:-- --:--:-- 1728 1 8513k 1 101k 0 0 194k 0 0:00:43 --:--:-- 0:00:43 194k100 8513k 100 8513k 0 0 13.2M 0 --:--:-- --:--:-- --:--:-- 76.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2069 0 --:--:-- --:--:-- --:--:-- 2076 77 38.3M 77 29.9M 0 0 34.2M 0 0:00:01 --:--:-- 0:00:01 34.2M100 38.3M 100 38.3M 0 0 38.3M 0 --:--:-- --:--:-- --:--:-- 67.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 702 0 --:--:-- --:--:-- --:--:-- 705 0 0 0 620 0 0 2170 0 --:--:-- --:--:-- --:--:-- 2170 23 10.7M 23 2575k 0 0 5084k 0 0:00:02 --:--:-- 0:00:02 5084k100 10.7M 100 10.7M 0 0 18.3M 0 --:--:-- --:--:-- --:--:-- 105M ~/nightlyrpmDXUvpG/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmDXUvpG/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz Created dist archive /root/nightlyrpmDXUvpG/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz ~ ~/nightlyrpmDXUvpG ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmDXUvpG/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmDXUvpG/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 43e30681b062452892f747238673cea4 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.yzqVle:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
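The "Installing etcd. Version: v3.3.9" step fetches a prebuilt etcd for the test run; the two small responses before the 10.7M transfer look like a release-page redirect. The exact URL is not shown in the truncated log, so the following is only an assumed equivalent using the standard GitHub release tarball layout:

    ETCD_VER=v3.3.9
    curl -fsSL -o /tmp/etcd.tar.gz \
      "https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz"
    tar -xzf /tmp/etcd.tar.gz -C /tmp
    install -m 0755 /tmp/etcd-${ETCD_VER}-linux-amd64/etcd /tmp/etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/local/bin/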
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2245781322349623284.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 1232b3a6 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 115 | n51.pufty | 172.19.3.115 | pufty | 3370 | Deployed | 1232b3a6 | None | None | 7 | x86_64 | 1 | 2500 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 19 00:16:57 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 19 Mar 2019 00:16:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #307 In-Reply-To: <458838644.5217.1552868187265.JavaMail.jenkins@jenkins.ci.centos.org> References: <458838644.5217.1552868187265.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <878810898.5392.1552954617700.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.34 KB...] 
Total 61 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : golang-src-1.11.5-1.el7.noarch 35/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-8.el7.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : gnutls-3.3.29-8.el7.x86_64 9/49 Verifying : nettle-2.7.1-8.el7.x86_64 10/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 11/49 Verifying : golang-src-1.11.5-1.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : gcc-4.8.5-36.el7.x86_64 14/49 Verifying : 
golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 17/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : cpp-4.8.5-36.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : python-babel-0.9.6-8.el7.noarch 27/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1610 0 --:--:-- --:--:-- --:--:-- 1613 100 8513k 100 8513k 0 0 13.3M 0 --:--:-- --:--:-- --:--:-- 13.3M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2085 0 --:--:-- --:--:-- --:--:-- 2083 100 38.3M 100 38.3M 0 0 44.6M 0 --:--:-- --:--:-- --:--:-- 44.6M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 641 0 --:--:-- --:--:-- --:--:-- 642 0 0 0 620 0 0 1984 0 --:--:-- --:--:-- --:--:-- 1984 100 10.7M 100 10.7M 0 0 16.2M 0 --:--:-- --:--:-- --:--:-- 16.2M ~/nightlyrpm51Lv5Q/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm51Lv5Q/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz Created dist archive /root/nightlyrpm51Lv5Q/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz ~ ~/nightlyrpm51Lv5Q ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm51Lv5Q/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm51Lv5Q/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M e94be59995394458ae5e22ae29ba0593 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.Ufi6eX:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
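Since the same SRPM has now failed the same way for several nights (builds #304 through #307, each dying in rpmbuild after roughly 2 minutes 20 seconds), it can be quicker to debug inside the chroot interactively than to rerun the whole nightly script. A sketch, assuming mock 1.4.x and the failing SRPM in the current directory; the in-chroot commands are indicative only:

    SRPM=glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm   # NVR taken from the log
    mock -r epel-7-x86_64 --init
    mock -r epel-7-x86_64 --copyin "$SRPM" /builddir/
    mock -r epel-7-x86_64 --shell
    # inside the chroot: install the SRPM and run the same
    # "rpmbuild -bb --target x86_64 --nodeps .../glusterd2.spec" command
    # quoted in the ERROR line above to watch the failure directly.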
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins215089688480439049.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 75a7e6f0 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 79 | n15.pufty | 172.19.3.79 | pufty | 3286 | Deployed | 75a7e6f0 | None | None | 7 | x86_64 | 1 | 2140 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 19 01:07:39 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 19 Mar 2019 01:07:39 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #111 Message-ID: <536242360.5400.1552957659067.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.59 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Tuesday 19 March 2019 00:55:52 +0000 (0:00:36.047) 0:18:11.840 ********* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Tuesday 19 March 2019 00:55:52 +0000 (0:00:00.260) 0:18:12.101 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Tuesday 19 March 2019 00:55:52 +0000 (0:00:00.525) 0:18:12.627 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Tuesday 19 March 2019 00:55:55 +0000 (0:00:02.253) 0:18:14.881 ********* ok: [kube1] TASK 
[GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Tuesday 19 March 2019 00:55:55 +0000 (0:00:00.552) 0:18:15.433 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Tuesday 19 March 2019 00:55:57 +0000 (0:00:02.259) 0:18:17.693 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Tuesday 19 March 2019 00:55:58 +0000 (0:00:00.561) 0:18:18.255 ********* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Tuesday 19 March 2019 00:56:00 +0000 (0:00:02.254) 0:18:20.509 ********* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Tuesday 19 March 2019 00:56:02 +0000 (0:00:01.789) 0:18:22.299 ********* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Tuesday 19 March 2019 00:56:04 +0000 (0:00:01.776) 0:18:24.075 ********* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Tuesday 19 March 2019 00:56:16 +0000 (0:00:12.271) 0:18:36.347 ********* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Tuesday 19 March 2019 00:56:18 +0000 (0:00:01.925) 0:18:38.272 ********* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Tuesday 19 March 2019 00:56:19 +0000 (0:00:01.438) 0:18:39.711 ********* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Tuesday 19 March 2019 00:56:21 +0000 (0:00:01.420) 0:18:41.131 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Tuesday 19 March 2019 00:56:23 +0000 (0:00:01.935) 0:18:43.067 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Tuesday 19 March 2019 00:56:25 +0000 (0:00:01.968) 0:18:45.035 ********* changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Tuesday 19 March 2019 00:56:26 +0000 (0:00:01.383) 0:18:46.419 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Tuesday 19 March 2019 00:56:26 +0000 (0:00:00.328) 0:18:46.747 ********* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Tuesday 19 March 2019 00:57:50 +0000 (0:01:23.993) 0:20:10.741 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Tuesday 19 March 2019 00:57:52 +0000 (0:00:01.645) 0:20:12.387 ********* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 19 March 2019 00:57:52 +0000 (0:00:00.203) 0:20:12.590 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Tuesday 19 March 2019 00:57:53 +0000 (0:00:00.382) 0:20:12.973 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 19 March 2019 00:57:54 +0000 (0:00:01.640) 0:20:14.613 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Tuesday 19 March 2019 00:57:55 +0000 (0:00:00.327) 0:20:14.941 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 19 March 2019 00:57:56 +0000 (0:00:01.677) 0:20:16.619 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Tuesday 19 March 2019 00:57:57 +0000 (0:00:00.366) 0:20:16.985 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Tuesday 19 March 2019 00:57:58 +0000 (0:00:01.568) 0:20:18.554 ********* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Tuesday 19 March 2019 00:57:59 +0000 (0:00:01.126) 0:20:19.681 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Tuesday 19 March 2019 00:58:00 +0000 (0:00:00.309) 0:20:19.991 ********* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.14.176:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Tuesday 19 March 2019 01:07:38 +0000 (0:09:38.483) 0:29:58.475 ********* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.48s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.99s kubernetes/master : kubeadm | Initialize first master ------------------ 39.29s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.72s download : container_download | download images for kubeadm config images -- 38.58s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 36.05s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.25s Install packages ------------------------------------------------------- 31.64s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.04s Wait for host to be available ------------------------------------------ 20.69s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.89s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.63s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.30s gather facts from all instances ---------------------------------------- 12.90s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.79s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.27s etcd : reload etcd ----------------------------------------------------- 12.12s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.22s container-engine/docker : Docker | pause while Docker restarts --------- 10.35s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.82s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 20 00:17:03 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 20 Mar 2019 00:17:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #308 In-Reply-To: <878810898.5392.1552954617700.JavaMail.jenkins@jenkins.ci.centos.org> References: <878810898.5392.1552954617700.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1513955854.5517.1553041023232.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] Total 58 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1889 0 --:--:-- --:--:-- --:--:-- 1896 35 8513k 35 2991k 0 0 5397k 0 0:00:01 --:--:-- 0:00:01 5397k100 8513k 100 8513k 0 0 12.1M 0 --:--:-- --:--:-- --:--:-- 40.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2272 0 --:--:-- --:--:-- --:--:-- 2280 72 38.3M 72 27.8M 0 0 32.8M 0 0:00:01 --:--:-- 0:00:01 32.8M100 38.3M 100 38.3M 0 0 36.4M 0 0:00:01 0:00:01 --:--:-- 51.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 614 0 --:--:-- --:--:-- --:--:-- 616 0 0 0 620 0 0 1786 0 --:--:-- --:--:-- --:--:-- 1786 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 14.9M 0 --:--:-- --:--:-- --:--:-- 32.3M ~/nightlyrpmcbavSg/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmcbavSg/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz Created dist archive /root/nightlyrpmcbavSg/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz ~ ~/nightlyrpmcbavSg ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmcbavSg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmcbavSg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M ac67e0cbb14c44f08c7e620e01f1232d -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.anh0WF:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
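The nightly job above fails inside mock while running rpmbuild in the epel-7-x86_64 chroot, and the console only reports that the command exited non-zero. When that happens, one way to investigate is to rebuild the same SRPM in the same chroot and read build.log from the result directory. A sketch of such a local reproduction follows; the SRPM path is the one printed in the log above, while the result directory is an arbitrary local choice.

    #!/bin/bash
    # Rebuild the failing SRPM in the same mock chroot used by the nightly job
    # (epel-7-x86_64) and keep the logs for inspection.
    # The SRPM path is taken from the log above; --resultdir is a local choice.
    SRPM=/root/nightlyrpmcbavSg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm

    mock -r epel-7-x86_64 --resultdir "$HOME/gd2-mock-results" --rebuild "$SRPM"

    # build.log normally contains the actual rpmbuild error; root.log and
    # state.log describe the chroot setup if the failure happened earlier.
    less "$HOME/gd2-mock-results/build.log"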
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5740681661687116125.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done fb61df55 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 127 | n63.pufty | 172.19.3.127 | pufty | 2771 | Deployed | fb61df55 | None | None | 7 | x86_64 | 1 | 2620 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 20 01:08:58 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 20 Mar 2019 01:08:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #112 In-Reply-To: <536242360.5400.1552957659067.JavaMail.jenkins@jenkins.ci.centos.org> References: <536242360.5400.1552957659067.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1685747138.5522.1553044138782.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.44 KB...] 
changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 20 March 2019 00:57:12 +0000 (0:00:34.868) 0:18:12.103 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 20 March 2019 00:57:12 +0000 (0:00:00.416) 0:18:12.519 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 20 March 2019 00:57:12 +0000 (0:00:00.399) 0:18:12.918 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 20 March 2019 00:57:14 +0000 (0:00:02.032) 0:18:14.951 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 20 March 2019 00:57:15 +0000 (0:00:00.420) 0:18:15.371 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 20 March 2019 00:57:17 +0000 (0:00:02.195) 0:18:17.567 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 20 March 2019 00:57:18 +0000 (0:00:00.420) 0:18:17.988 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 20 March 2019 00:57:19 +0000 (0:00:01.944) 0:18:19.933 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 20 March 2019 00:57:21 +0000 (0:00:01.534) 0:18:21.467 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 20 March 2019 00:57:23 +0000 (0:00:01.720) 0:18:23.188 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Wednesday 20 March 2019 00:57:35 +0000 (0:00:12.099) 0:18:35.287 ******* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Wednesday 20 March 2019 00:57:36 +0000 (0:00:01.604) 0:18:36.892 ******* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Wednesday 20 March 2019 00:57:38 +0000 (0:00:01.312) 0:18:38.205 ******* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Wednesday 20 March 2019 00:57:40 +0000 (0:00:02.286) 0:18:40.491 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Wednesday 20 March 2019 00:57:42 +0000 (0:00:01.596) 0:18:42.087 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Wednesday 20 March 2019 00:57:44 +0000 (0:00:01.926) 0:18:44.014 ******* changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Wednesday 20 March 2019 00:57:45 +0000 (0:00:01.159) 0:18:45.174 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Wednesday 20 March 2019 00:57:45 +0000 (0:00:00.437) 0:18:45.612 ******* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Wednesday 20 March 2019 00:59:10 +0000 (0:01:24.700) 0:20:10.312 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Wednesday 20 March 2019 00:59:11 +0000 (0:00:01.576) 0:20:11.889 ******* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 20 March 2019 00:59:12 +0000 (0:00:00.216) 0:20:12.105 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Wednesday 20 March 2019 00:59:12 +0000 (0:00:00.332) 0:20:12.438 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 20 March 2019 00:59:14 +0000 (0:00:01.710) 0:20:14.148 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 20 March 2019 00:59:14 +0000 (0:00:00.352) 0:20:14.501 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 20 March 2019 00:59:16 +0000 (0:00:01.644) 0:20:16.145 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 20 March 2019 00:59:16 +0000 (0:00:00.311) 0:20:16.457 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 20 March 2019 00:59:18 +0000 (0:00:01.653) 0:20:18.110 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 20 March 2019 00:59:19 +0000 (0:00:01.275) 0:20:19.385 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 20 March 2019 00:59:19 +0000 (0:00:00.305) 0:20:19.691 ******* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.50.247:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 20 March 2019 01:08:58 +0000 (0:09:38.475) 0:29:58.166 ******* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.48s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.70s kubernetes/master : kubeadm | Initialize first master ------------------ 39.66s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.77s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 34.87s download : container_download | download images for kubeadm config images -- 34.39s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.90s Install packages ------------------------------------------------------- 32.14s Wait for host to be available ------------------------------------------ 32.11s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.33s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.80s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.27s etcd : wait for etcd up ------------------------------------------------ 13.41s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.29s gather facts from all instances ---------------------------------------- 12.71s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.57s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.10s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.36s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.18s container-engine/docker : Docker | pause while Docker restarts --------- 10.43s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 21 00:16:18 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 21 Mar 2019 00:16:18 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #309 In-Reply-To: <1513955854.5517.1553041023232.JavaMail.jenkins@jenkins.ci.centos.org> References: <1513955854.5517.1553041023232.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <356085022.5661.1553127378349.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] Total 56 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1802 0 --:--:-- --:--:-- --:--:-- 1800 100 8513k 100 8513k 0 0 14.2M 0 --:--:-- --:--:-- --:--:-- 14.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2254 0 --:--:-- --:--:-- --:--:-- 2263 7 38.3M 7 2982k 0 0 5933k 0 0:00:06 --:--:-- 0:00:06 5933k100 38.3M 100 38.3M 0 0 39.0M 0 --:--:-- --:--:-- --:--:-- 74.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 571 0 --:--:-- --:--:-- --:--:-- 570 0 0 0 620 0 0 1672 0 --:--:-- --:--:-- --:--:-- 1672 100 10.7M 100 10.7M 0 0 13.5M 0 --:--:-- --:--:-- --:--:-- 13.5M ~/nightlyrpmm1OaZE/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmm1OaZE/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz Created dist archive /root/nightlyrpmm1OaZE/glusterd2-v6.0-dev.149.git568322a-vendor.tar.xz ~ ~/nightlyrpmm1OaZE ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmm1OaZE/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmm1OaZE/rpmbuild/SRPMS/glusterd2-5.0-0.dev.149.git568322a.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 75ec6ff0a5804f2395e8a3485190ef05 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.coPyUC:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins806007716488306865.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 12617151 +---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 65 | n1.pufty | 172.19.3.65 | pufty | 3324 | Deployed | 12617151 | None | None | 7 | x86_64 | 1 | 2000 | None | +---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 21 00:56:58 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 21 Mar 2019 00:56:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #113 In-Reply-To: <1685747138.5522.1553044138782.JavaMail.jenkins@jenkins.ci.centos.org> References: <1685747138.5522.1553044138782.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <609711632.5664.1553129818625.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.23 KB...] 
changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Thursday 21 March 2019 00:46:41 +0000 (0:00:11.925) 0:10:19.406 ******** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Thursday 21 March 2019 00:46:41 +0000 (0:00:00.091) 0:10:19.497 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Thursday 21 March 2019 00:46:41 +0000 (0:00:00.135) 0:10:19.632 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Thursday 21 March 2019 00:46:42 +0000 (0:00:00.716) 0:10:20.348 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Thursday 21 March 2019 00:46:42 +0000 (0:00:00.155) 0:10:20.504 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Thursday 21 March 2019 00:46:43 +0000 (0:00:00.726) 0:10:21.230 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Thursday 21 March 2019 00:46:43 +0000 (0:00:00.133) 0:10:21.364 ******** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Thursday 21 March 2019 00:46:44 +0000 (0:00:00.701) 0:10:22.065 ******** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Thursday 21 March 2019 00:46:44 +0000 (0:00:00.637) 0:10:22.703 ******** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Thursday 21 March 2019 00:46:45 +0000 (0:00:00.704) 0:10:23.407 ******** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
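The wait for etcd-operator above follows the same pattern as the later waits: poll until the operator's deployment reports itself available. A hand-run equivalent using kubectl is sketched below; the namespace and deployment names are assumptions taken from the task names in this log rather than from the manifests themselves.

    #!/bin/bash
    # Hand-run equivalent of "Wait for etcd-operator to be available".
    # Assumptions: namespace "gcs" and deployment "etcd-operator"; neither name
    # is printed in this log, so adjust them to match the applied manifests.
    NAMESPACE=gcs
    DEPLOYMENT=etcd-operator

    # Block until the deployment reports its replicas available, or time out.
    kubectl -n "$NAMESPACE" rollout status "deployment/$DEPLOYMENT" --timeout=120s

    # On timeout, the deployment description usually explains what is missing.
    kubectl -n "$NAMESPACE" describe deployment "$DEPLOYMENT"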
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Thursday 21 March 2019 00:46:56 +0000 (0:00:10.879) 0:10:34.286 ******** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Thursday 21 March 2019 00:46:56 +0000 (0:00:00.658) 0:10:34.945 ******** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Thursday 21 March 2019 00:46:57 +0000 (0:00:00.453) 0:10:35.398 ******** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Thursday 21 March 2019 00:46:57 +0000 (0:00:00.464) 0:10:35.862 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Thursday 21 March 2019 00:46:58 +0000 (0:00:00.700) 0:10:36.563 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Thursday 21 March 2019 00:46:59 +0000 (0:00:00.869) 0:10:37.433 ******** FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Thursday 21 March 2019 00:47:05 +0000 (0:00:06.234) 0:10:43.667 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Thursday 21 March 2019 00:47:05 +0000 (0:00:00.136) 0:10:43.804 ******** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Thursday 21 March 2019 00:48:00 +0000 (0:00:54.422) 0:11:38.226 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Thursday 21 March 2019 00:48:01 +0000 (0:00:00.857) 0:11:39.084 ******** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 21 March 2019 00:48:01 +0000 (0:00:00.114) 0:11:39.198 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Thursday 21 March 2019 00:48:01 +0000 (0:00:00.138) 0:11:39.337 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 21 March 2019 00:48:01 +0000 (0:00:00.687) 0:11:40.024 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Thursday 21 March 2019 00:48:02 +0000 (0:00:00.151) 0:11:40.175 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 21 March 2019 00:48:02 +0000 (0:00:00.783) 0:11:40.958 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Thursday 21 March 2019 00:48:03 +0000 (0:00:00.245) 0:11:41.203 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Thursday 21 March 2019 00:48:03 +0000 (0:00:00.805) 0:11:42.009 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Thursday 21 March 2019 00:48:04 +0000 (0:00:00.536) 0:11:42.545 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Thursday 21 March 2019 00:48:04 +0000 (0:00:00.143) 0:11:42.689 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.11.216:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=284 changed=78 unreachable=0 failed=0 Thursday 21 March 2019 00:56:58 +0000 (0:08:53.682) 0:20:36.371 ******** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 533.68s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 54.42s download : container_download | download images for kubeadm config images -- 35.55s kubernetes/master : kubeadm | Initialize first master ------------------ 27.68s Install packages ------------------------------------------------------- 24.59s kubernetes/master : kubeadm | Init other uninitialized masters --------- 24.58s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.50s Wait for host to be available ------------------------------------------ 16.45s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.52s Extend root VG --------------------------------------------------------- 13.03s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.71s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.93s etcd : reload etcd ----------------------------------------------------- 11.01s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.88s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 10.83s kubernetes/node : install | Copy hyperkube binary from download dir ---- 10.47s container-engine/docker : Docker | pause while Docker restarts --------- 10.25s download : file_download | Download item -------------------------------- 8.41s gather facts from all instances ----------------------------------------- 8.24s etcd : wait for etcd up ------------------------------------------------- 7.39s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 21 01:35:24 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 21 Mar 2019 01:35:24 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #138 Message-ID: <1028973417.5673.1553132124270.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.43 KB...] 
changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
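The [701] and [703] findings above fire because roles/firewall_config/meta/main.yml still carries Ansible Galaxy's placeholder metadata and lists no platforms. A minimal sketch of the kind of change that would satisfy ansible-lint is shown below as a shell heredoc; the author, description, company, license and platform values are illustrative placeholders, not the project's actual metadata:

    # Sketch only: replace the default role metadata with real values.
    # Every field value below is an example; substitute the project's own details.
    cat > roles/firewall_config/meta/main.yml <<'EOF'
    galaxy_info:
      author: Gluster maintainers            # [703] replace default author
      description: Firewall configuration for GlusterFS hosts   # [703] replace default description
      company: example                       # [703] replace default company (or drop the key)
      license: GPLv2                         # [703] pick an actual license
      min_ansible_version: 1.2
      platforms:                             # [701] role info should contain platforms
        - name: EL
          versions:
            - 7
      galaxy_tags: []
    dependencies: []
    EOF

With non-default values in place, rules 701 and 703 stop firing and the lint action no longer aborts the test sequence.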
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
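Separately, the "./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory" message earlier in this log shows the CI driver script tried to enter a role directory that is not present in the checkout, and the shell simply carried on from the previous directory. A defensive variant of that step is sketched below; the variable name is illustrative and not taken from the script itself:

    # Abort loudly if the expected role directory is missing,
    # instead of letting later steps run from the wrong directory.
    role_dir="gluster-ansible-infra/roles/backend_setup"   # path taken from the error above
    cd "$role_dir" || { echo "ERROR: $role_dir not found in workspace" >&2; exit 1; }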
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 22 00:16:53 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 22 Mar 2019 00:16:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #310 In-Reply-To: <356085022.5661.1553127378349.JavaMail.jenkins@jenkins.ci.centos.org> References: <356085022.5661.1553127378349.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <289024808.5806.1553213813534.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] Total 53 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1902 0 --:--:-- --:--:-- --:--:-- 1902 100 8513k 100 8513k 0 0 10.7M 0 --:--:-- --:--:-- --:--:-- 10.7M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2061 0 --:--:-- --:--:-- --:--:-- 2069 0 38.3M 0 321k 0 0 644k 0 0:01:01 --:--:-- 0:01:01 644k100 38.3M 100 38.3M 0 0 36.9M 0 0:00:01 0:00:01 --:--:-- 70.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 543 0 --:--:-- --:--:-- --:--:-- 542 0 0 0 620 0 0 1725 0 --:--:-- --:--:-- --:--:-- 1725 100 10.7M 100 10.7M 0 0 16.7M 0 --:--:-- --:--:-- --:--:-- 16.7M ~/nightlyrpmeMFJJv/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmeMFJJv/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmeMFJJv/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmeMFJJv ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmeMFJJv/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmeMFJJv/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M b19ab9673c294151a42b7625520aeb68 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.TQkR7I:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
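The build above dies inside mock while rpmbuild runs in the epel-7-x86_64 chroot; the log only records that the command failed and points at /srv/glusterd2/nightly/master/7/x86_64 for the detailed logs. A way to reproduce the failure interactively on a builder would be roughly the following sketch (the SRPM path is the one printed above and changes from run to run):

    # Rebuild the nightly SRPM in the same chroot this job used.
    mock -r epel-7-x86_64 --rebuild \
        /root/nightlyrpmeMFJJv/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm

    # mock leaves build.log and root.log in its result directory; the failing
    # rpmbuild step is usually explained near the end of build.log.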
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1688917106754393783.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 5e51e3d2 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 79 | n15.pufty | 172.19.3.79 | pufty | 3340 | Deployed | 5e51e3d2 | None | None | 7 | x86_64 | 1 | 2140 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 22 01:07:26 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 22 Mar 2019 01:07:26 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #114 In-Reply-To: <609711632.5664.1553129818625.JavaMail.jenkins@jenkins.ci.centos.org> References: <609711632.5664.1553129818625.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1001771133.5818.1553216846233.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.62 KB...] 
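For reference, the cico-node-done-from-ansible.sh post-build snippet that appears in each of these messages is collapsed onto a single line by the archive; re-indented, with comments on what it appears to do, it reads roughly as follows (the text after "written by" is cut short in the archived log and is left as-is):

    # cico-node-done-from-ansible.sh
    # A script that releases nodes from a SSID file written by
    # (remainder of the original comment is truncated in the archive)

    # Default to the SSID file the job wrote into its workspace.
    SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

    # Hand each session ID back so the provisioned CI nodes are released.
    for ssid in $(cat ${SSID_FILE})
    do
        cico -q node done $ssid
    done

The successful post-build task above shows the same loop actually running and returning node n15.pufty to the pool.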
changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Friday 22 March 2019 00:55:40 +0000 (0:00:34.558) 0:18:04.361 ********** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Friday 22 March 2019 00:55:40 +0000 (0:00:00.301) 0:18:04.663 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Friday 22 March 2019 00:55:41 +0000 (0:00:00.427) 0:18:05.090 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Friday 22 March 2019 00:55:43 +0000 (0:00:01.998) 0:18:07.089 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Friday 22 March 2019 00:55:43 +0000 (0:00:00.414) 0:18:07.503 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Friday 22 March 2019 00:55:45 +0000 (0:00:02.187) 0:18:09.691 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Friday 22 March 2019 00:55:46 +0000 (0:00:00.413) 0:18:10.104 ********** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Friday 22 March 2019 00:55:48 +0000 (0:00:02.186) 0:18:12.291 ********** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Friday 22 March 2019 00:55:49 +0000 (0:00:01.749) 0:18:14.041 ********** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Friday 22 March 2019 00:55:51 +0000 (0:00:01.685) 0:18:15.726 ********** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Friday 22 March 2019 00:56:03 +0000 (0:00:12.125) 0:18:27.852 ********** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Friday 22 March 2019 00:56:05 +0000 (0:00:01.582) 0:18:29.435 ********** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Friday 22 March 2019 00:56:06 +0000 (0:00:01.488) 0:18:30.923 ********** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Friday 22 March 2019 00:56:08 +0000 (0:00:01.422) 0:18:32.345 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Friday 22 March 2019 00:56:09 +0000 (0:00:01.684) 0:18:34.030 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Friday 22 March 2019 00:56:11 +0000 (0:00:01.918) 0:18:35.948 ********** changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Friday 22 March 2019 00:56:13 +0000 (0:00:01.283) 0:18:37.232 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Friday 22 March 2019 00:56:13 +0000 (0:00:00.499) 0:18:37.731 ********** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Friday 22 March 2019 00:57:37 +0000 (0:01:24.235) 0:20:01.967 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Friday 22 March 2019 00:57:39 +0000 (0:00:01.693) 0:20:03.660 ********** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 22 March 2019 00:57:39 +0000 (0:00:00.190) 0:20:03.850 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Friday 22 March 2019 00:57:40 +0000 (0:00:00.493) 0:20:04.344 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 22 March 2019 00:57:41 +0000 (0:00:01.643) 0:20:05.987 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Friday 22 March 2019 00:57:42 +0000 (0:00:00.498) 0:20:06.486 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 22 March 2019 00:57:44 +0000 (0:00:01.783) 0:20:08.270 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Friday 22 March 2019 00:57:44 +0000 (0:00:00.444) 0:20:08.714 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Friday 22 March 2019 00:57:46 +0000 (0:00:01.593) 0:20:10.307 ********** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Friday 22 March 2019 00:57:47 +0000 (0:00:01.745) 0:20:12.053 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Friday 22 March 2019 00:57:48 +0000 (0:00:00.439) 0:20:12.493 ********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.49.165:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Friday 22 March 2019 01:07:25 +0000 (0:09:37.289) 0:29:49.783 ********** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.29s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.24s kubernetes/master : kubeadm | Initialize first master ------------------ 39.32s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.91s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 34.56s download : container_download | download images for kubeadm config images -- 34.38s Install packages ------------------------------------------------------- 33.28s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.19s Wait for host to be available ------------------------------------------ 20.86s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.35s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.50s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.44s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 14.55s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.32s gather facts from all instances ---------------------------------------- 12.99s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.13s etcd : reload etcd ----------------------------------------------------- 11.87s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.52s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.25s container-engine/docker : Docker | pause while Docker restarts --------- 10.36s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
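The fatal task above is a readiness wait that polls the glusterd2 REST endpoint until it answers with HTTP 200; after 50 attempts the request still failed outright (status -1), so glusterd2 never came up behind http://10.233.49.165:24007. A manual spot-check of the same endpoint from the deploy node could look like the sketch below; the retry count and sleep interval are assumptions for illustration, not the playbook's actual values:

    # Poll the glusterd2 peers endpoint the playbook was waiting on.
    url=http://10.233.49.165:24007/v1/peers    # taken from the error message above
    for attempt in $(seq 1 50); do
        if curl -fsS "$url" >/dev/null; then
            echo "glusterd2 is answering on $url"
            break
        fi
        echo "attempt $attempt: no answer yet"
        sleep 10
    done

Once the endpoint responds, the run can be resumed for the failed host using the retry file the log itself suggests (--limit @/root/gcs/deploy/vagrant-playbook.retry).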
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 22 01:22:45 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 22 Mar 2019 01:22:45 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #139 In-Reply-To: <1028973417.5673.1553132124270.JavaMail.jenkins@jenkins.ci.centos.org> References: <1028973417.5673.1553132124270.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1299207060.5821.1553217765954.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.43 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
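To chase lint failures like the ones above without burning a CI node, the same scenario can be exercised locally with the molecule command line; a sketch, assuming a molecule 2.x install as used by this job and a checkout under /root/gluster-ansible-infra as in the paths above:

    cd /root/gluster-ansible-infra/roles/firewall_config
    # Run only the lint step of the 'default' scenario (yamllint, flake8, ansible-lint).
    molecule lint -s default
    # Or run the full default sequence, matching the test matrix in the log.
    molecule test -s default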
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 23 00:16:14 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 23 Mar 2019 00:16:14 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #311 In-Reply-To: <289024808.5806.1553213813534.JavaMail.jenkins@jenkins.ci.centos.org> References: <289024808.5806.1553213813534.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <431481762.5957.1553300174263.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 60 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1984 0 --:--:-- --:--:-- --:--:-- 1990 0 8513k 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 16.2M 0 --:--:-- --:--:-- --:--:-- 74.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2326 0 --:--:-- --:--:-- --:--:-- 2330 100 38.3M 100 38.3M 0 0 47.1M 0 --:--:-- --:--:-- --:--:-- 47.1M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 525 0 --:--:-- --:--:-- --:--:-- 527 0 0 0 620 0 0 1626 0 --:--:-- --:--:-- --:--:-- 1626 65 10.7M 65 7242k 0 0 11.4M 0 --:--:-- --:--:-- --:--:-- 11.4M100 10.7M 100 10.7M 0 0 15.4M 0 --:--:-- --:--:-- --:--:-- 46.9M ~/nightlyrpmwY167V/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmwY167V/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmwY167V/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmwY167V ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
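
The three curl transfers above fetch pinned versions of dep (v0.5.0), gometalinter (2.0.5) and etcd (v3.3.9) before the RPM build starts; the percent columns are only curl's progress meter. The pattern is roughly the one sketched below; the exact URLs and install paths used by the nightly script are assumptions, not taken from this log:

    # Pinned tool downloads, roughly as the nightly script appears to do them
    DEP_VERSION="v0.5.0"
    curl -fsSL -o /usr/local/bin/dep \
        "https://github.com/golang/dep/releases/download/${DEP_VERSION}/dep-linux-amd64"
    chmod +x /usr/local/bin/dep

    ETCD_VERSION="v3.3.9"
    curl -fsSL "https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz" \
        | tar -xzf - -C /usr/local/bin --strip-components=1 \
          "etcd-${ETCD_VERSION}-linux-amd64/etcd" "etcd-${ETCD_VERSION}-linux-amd64/etcdctl"
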
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmwY167V/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmwY167V/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d07a155a87b64d609ff92b650ab953ce -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.msCx69:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
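
Mock reports only "ERROR: Exception(...) Config(epel-7-x86_64)" here; the actual rpmbuild failure is in build.log inside the results directory the log names. A sketch of reproducing and inspecting it on the builder, assuming the SRPM from this run is still on disk:

    # Re-run the failing build in the same chroot configuration:
    mock -r epel-7-x86_64 --rebuild \
        /root/nightlyrpmwY167V/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm

    # The compiler/rpmbuild output is kept alongside root.log and state.log:
    less /srv/glusterd2/nightly/master/7/x86_64/build.log
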
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8510926573791430814.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 9bd88105 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 282 | n27.gusty | 172.19.2.155 | gusty | 3325 | Deployed | 9bd88105 | None | None | 7 | x86_64 | 1 | 2260 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 23 01:19:12 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 23 Mar 2019 01:19:12 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #140 In-Reply-To: <1299207060.5821.1553217765954.JavaMail.jenkins@jenkins.ci.centos.org> References: <1299207060.5821.1553217765954.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <137577669.5966.1553303952912.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.43 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. 
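
The lint stage molecule runs for this role is yamllint over the role tree, then flake8 over the testinfra tests, then ansible-lint on the scenario playbook. When iterating locally the same checks can be run directly, for example (paths relative to the role checkout; molecule of this vintage also exposes the whole chain as a single command):

    yamllint roles/firewall_config/
    flake8 roles/firewall_config/molecule/default/tests/
    # or let molecule drive all three linters for the scenario:
    molecule lint -s default
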
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
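
Because the sequence aborts at 'lint', none of the later actions (converge, idempotence, verify) ever run against the Docker instance that was just created. When debugging the role itself rather than its metadata, those steps can be driven individually with molecule, for example:

    molecule converge -s default     # create/prepare the instance and apply the role
    molecule verify -s default       # run the testinfra tests
    molecule destroy -s default      # tear the Docker instance back down
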
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 23 02:02:14 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 23 Mar 2019 02:02:14 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #115 In-Reply-To: <1001771133.5818.1553216846233.JavaMail.jenkins@jenkins.ci.centos.org> References: <1001771133.5818.1553216846233.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <26749043.5968.1553306534637.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 464.57 KB...] Saturday 23 March 2019 00:46:50 +0000 (0:01:05.197) 0:12:05.736 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Saturday 23 March 2019 00:46:51 +0000 (0:00:00.789) 0:12:06.525 ******** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 23 March 2019 00:46:51 +0000 (0:00:00.127) 0:12:06.653 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Saturday 23 March 2019 00:46:51 +0000 (0:00:00.131) 0:12:06.784 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 23 March 2019 00:46:52 +0000 (0:00:00.683) 0:12:07.468 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Saturday 23 March 2019 00:46:52 +0000 (0:00:00.184) 0:12:07.652 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 23 March 2019 00:46:53 +0000 (0:00:00.731) 0:12:08.383 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Saturday 23 March 2019 00:46:53 +0000 (0:00:00.159) 0:12:08.542 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Saturday 23 March 2019 00:46:54 +0000 (0:00:00.688) 0:12:09.231 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Saturday 23 March 2019 00:46:54 +0000 (0:00:00.521) 0:12:09.752 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Saturday 23 March 2019 00:46:54 +0000 (0:00:00.163) 0:12:09.915 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Saturday 23 March 2019 00:51:06 +0000 (0:04:11.146) 0:16:21.062 ******** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Saturday 23 March 2019 00:51:06 +0000 (0:00:00.103) 0:16:21.165 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] ***************** Saturday 23 March 2019 00:51:06 +0000 (0:00:00.217) 0:16:21.383 ******** FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). ok: [kube1] => (item=/dev/vdc) FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.42.255:24007/v1/devices/23995f72-ce66-4460-a3f7-2bd5e49f2de5"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). 
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.42.255:24007/v1/devices/23995f72-ce66-4460-a3f7-2bd5e49f2de5"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=426 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Saturday 23 March 2019 02:02:14 +0000 (1:11:07.988) 1:27:29.371 ******** =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 4267.99s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 251.15s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 65.20s download : container_download | download images for kubeadm config images -- 51.85s kubernetes/master : kubeadm | Initialize first master ------------------ 29.30s kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.34s Install packages ------------------------------------------------------- 24.01s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.80s Wait for host to be available ------------------------------------------ 16.49s Extend root VG --------------------------------------------------------- 12.96s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.64s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.00s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.93s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.31s etcd : reload etcd ----------------------------------------------------- 11.18s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.82s container-engine/docker : Docker | pause while Docker restarts --------- 10.25s kubernetes/node : install | Copy hyperkube binary from download dir ----- 9.89s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 8.23s gather facts from all instances ----------------------------------------- 8.12s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
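
The failing task is an HTTP call against the glusterd2 client service on port 24007, registering /dev/vdd and /dev/vde for kube3; every attempt times out, which is why the recap shows the "Add devices" step alone consuming over 70 minutes. A rough debugging sketch from kube1 once the play has failed; the 'gcs' namespace is inferred from the manifest names in this log and the pod name is a placeholder to be read off 'get pods':

    # Is glusterd2 up and is the client service there?
    kubectl -n gcs get pods -o wide
    kubectl -n gcs get svc glusterd2-client

    # Query GD2 directly; URL taken from the failure message above
    curl -s http://10.233.42.255:24007/v1/peers
    curl -s http://10.233.42.255:24007/v1/devices/23995f72-ce66-4460-a3f7-2bd5e49f2de5

    # Logs from the GD2 pod serving kube3 (read the pod name off 'get pods')
    kubectl -n gcs logs <gd2-pod-for-kube3> --tail=100
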
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 24 00:16:51 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 24 Mar 2019 00:16:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #312 In-Reply-To: <431481762.5957.1553300174263.JavaMail.jenkins@jenkins.ci.centos.org> References: <431481762.5957.1553300174263.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <220403711.6052.1553386611918.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.38 KB...] Total 63 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2003 0 --:--:-- --:--:-- --:--:-- 2016 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 14.4M 0 --:--:-- --:--:-- --:--:-- 34.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1945 0 --:--:-- --:--:-- --:--:-- 1953 60 38.3M 60 23.2M 0 0 32.3M 0 0:00:01 --:--:-- 0:00:01 32.3M100 38.3M 100 38.3M 0 0 44.9M 0 --:--:-- --:--:-- --:--:-- 112M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 548 0 --:--:-- --:--:-- --:--:-- 550 0 0 0 620 0 0 1538 0 --:--:-- --:--:-- --:--:-- 1538 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 15.4M 0 --:--:-- --:--:-- --:--:-- 43.6M ~/nightlyrpmSHGVUo/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmSHGVUo/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmSHGVUo/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmSHGVUo ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
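
The "Installing vendored packages / Creating dist archive" step vendors the Go dependencies into the source tarball so the mock chroot build that follows needs no network access. In outline it amounts to something like the sketch below; the actual make targets glusterd2 uses are not shown in this log, and VERSION and OUTDIR stand in for values the nightly script derives itself:

    cd "${GOPATH}/src/github.com/gluster/glusterd2"
    dep ensure -vendor-only                     # populate vendor/ from Gopkg.lock
    tar -cJf "${OUTDIR}/glusterd2-${VERSION}-vendor.tar.xz" vendor/
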
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmSHGVUo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmSHGVUo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 5816d9cae94448508623aa2946f728cb -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ito0et:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
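
The post-build hook that follows is the standard CentOS CI node-return step: it reads the Duffy session IDs recorded at provisioning time and hands each node back with the cico client. The archive flattens the script onto one line; reformatted, it is essentially:

    #!/bin/sh
    # cico-node-done-from-ansible.sh (as run below, reformatted)
    # Release every ci.centos.org node whose session ID was recorded at
    # provisioning time.
    SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

    for ssid in $(cat ${SSID_FILE})
    do
        cico -q node done ${ssid}
    done

In the builds above where the matcher did not fire ("Could not match :Build started : False"), this script is skipped entirely.
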
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5427432430015512507.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done ab8cc238 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 120 | n56.pufty | 172.19.3.120 | pufty | 3363 | Deployed | ab8cc238 | None | None | 7 | x86_64 | 1 | 2550 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 24 01:05:52 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 24 Mar 2019 01:05:52 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #116 In-Reply-To: <26749043.5968.1553306534637.JavaMail.jenkins@jenkins.ci.centos.org> References: <26749043.5968.1553306534637.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1298068816.6059.1553389552635.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 454.87 KB...] 
Sunday 24 March 2019 00:55:53 +0000 (0:00:00.234) 0:17:30.025 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_region_id] *** Sunday 24 March 2019 00:55:53 +0000 (0:00:00.187) 0:17:30.213 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_tenancy_id] *** Sunday 24 March 2019 00:55:53 +0000 (0:00:00.199) 0:17:30.413 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_id] *** Sunday 24 March 2019 00:55:54 +0000 (0:00:00.244) 0:17:30.658 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_fingerprint] *** Sunday 24 March 2019 00:55:54 +0000 (0:00:00.211) 0:17:30.870 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_compartment_id] *** Sunday 24 March 2019 00:55:54 +0000 (0:00:00.203) 0:17:31.073 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_vnc_id] *** Sunday 24 March 2019 00:55:54 +0000 (0:00:00.222) 0:17:31.295 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet1_id] *** Sunday 24 March 2019 00:55:55 +0000 (0:00:00.222) 0:17:31.518 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet2_id] *** Sunday 24 March 2019 00:55:55 +0000 (0:00:00.208) 0:17:31.727 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_security_list_management] *** Sunday 24 March 2019 00:55:55 +0000 (0:00:00.197) 0:17:31.924 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Generate Configuration] *** Sunday 24 March 2019 00:55:55 +0000 (0:00:00.180) 0:17:32.104 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Encode Configuration] *** Sunday 24 March 2019 00:55:55 +0000 (0:00:00.196) 0:17:32.301 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration To Secret] *** Sunday 24 March 2019 00:55:56 +0000 (0:00:00.184) 0:17:32.485 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration] *** Sunday 24 March 2019 00:55:56 +0000 (0:00:00.172) 0:17:32.657 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Download Controller Manifest] *** Sunday 24 March 2019 00:55:56 +0000 (0:00:00.186) 0:17:32.844 ********** TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Controller Manifest] *** Sunday 24 March 2019 00:55:56 +0000 (0:00:00.183) 0:17:33.027 ********** PLAY [Fetch config] ************************************************************ TASK [Retrieve kubectl config] ************************************************* Sunday 24 March 2019 00:55:56 +0000 (0:00:00.262) 0:17:33.289 ********** changed: [kube1] PLAY [Copy kube config for vagrant user] *************************************** TASK [Create a directory] ****************************************************** Sunday 24 March 2019 00:55:57 +0000 (0:00:00.871) 0:17:34.161 ********** changed: [kube1] changed: [kube2] TASK [Copy kube config for vagrant user] *************************************** Sunday 24 March 2019 00:55:59 +0000 (0:00:01.565) 0:17:35.727 ********** changed: [kube1] changed: [kube2] PLAY [Deploy GCS] ************************************************************** TASK [GCS Pre | 
Cluster ID | Generate a UUID] ********************************** Sunday 24 March 2019 00:56:00 +0000 (0:00:01.094) 0:17:36.822 ********** changed: [kube1] TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] *********************** Sunday 24 March 2019 00:56:01 +0000 (0:00:00.932) 0:17:37.754 ********** ok: [kube1] TASK [GCS Pre | Manifests directory | Create a temporary directory] ************ Sunday 24 March 2019 00:56:01 +0000 (0:00:00.382) 0:17:38.136 ********** changed: [kube1] TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ****************** Sunday 24 March 2019 00:56:03 +0000 (0:00:01.421) 0:17:39.558 ********** ok: [kube1] TASK [GCS Pre | Manifests | Sync GCS manifests] ******************************** Sunday 24 March 2019 00:56:03 +0000 (0:00:00.340) 0:17:39.899 ********** changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Sunday 24 March 2019 00:56:35 +0000 (0:00:32.155) 0:18:12.054 ********** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Sunday 24 March 2019 00:56:35 +0000 (0:00:00.223) 0:18:12.277 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Sunday 24 March 2019 00:56:36 +0000 (0:00:00.376) 0:18:12.654 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Sunday 24 March 2019 00:56:38 +0000 (0:00:01.864) 0:18:14.518 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Sunday 24 March 2019 00:56:38 +0000 (0:00:00.360) 0:18:14.879 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Sunday 24 March 2019 00:56:40 +0000 (0:00:01.847) 0:18:16.726 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Sunday 24 March 2019 00:56:40 +0000 (0:00:00.305) 0:18:17.032 ********** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Sunday 24 March 2019 00:56:42 +0000 (0:00:01.928) 0:18:18.961 ********** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Sunday 24 March 2019 00:56:43 +0000 
(0:00:01.244) 0:18:20.206 ********** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Sunday 24 March 2019 00:56:45 +0000 (0:00:01.572) 0:18:21.778 ********** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (49 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (48 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (47 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (46 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (45 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (44 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (43 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (42 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (41 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (40 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (39 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (38 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (37 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (36 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (35 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (34 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (33 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (32 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (31 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (30 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (29 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (28 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (27 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (26 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (25 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (24 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (23 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (22 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (21 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (20 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (19 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (18 retries left). 
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (17 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (16 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (15 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (14 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (13 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (12 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (11 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (10 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (9 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (8 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (7 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (6 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (5 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (4 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (3 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (2 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (1 retries left). fatal: [kube1]: FAILED! => {"attempts": 50, "changed": true, "cmd": ["/usr/local/bin/kubectl", "-ngcs", "-ojsonpath={.status.availableReplicas}", "get", "deployment", "etcd-operator"], "delta": "0:00:00.342448", "end": "2019-03-24 01:05:52.227066", "rc": 0, "start": "2019-03-24 01:05:51.884618", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=399 changed=116 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Sunday 24 March 2019 01:05:52 +0000 (0:09:06.915) 0:27:28.694 ********** =============================================================================== GCS | ETCD Operator | Wait for etcd-operator to be available ---------- 546.91s kubernetes/master : kubeadm | Initialize first master ------------------ 39.50s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.35s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.89s Install packages ------------------------------------------------------- 32.19s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 32.16s download : container_download | download images for kubeadm config images -- 32.01s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 23.00s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.78s Wait for host to be available ------------------------------------------ 20.74s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 17.68s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.33s etcd : Gen_certs | Gather etcd 
master certs ---------------------------- 12.66s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 12.39s etcd : reload etcd ----------------------------------------------------- 12.13s gather facts from all instances ---------------------------------------- 12.06s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.53s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.48s container-engine/docker : Docker | pause while Docker restarts --------- 10.39s download : file_download | Download item ------------------------------- 10.28s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 24 01:23:07 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 24 Mar 2019 01:23:07 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #141 In-Reply-To: <137577669.5966.1553303952912.JavaMail.jenkins@jenkins.ci.centos.org> References: <137577669.5966.1553303952912.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <997290299.6062.1553390587907.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.43 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? 
destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
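The lint stage above is what marks the run as failed: ansible-lint flags the firewall_config role's meta/main.yml for still carrying the galaxy_info boilerplate (rule 701, missing platforms, and rule 703, default author/description/company/license). A sketch of filling in that metadata, written as a shell heredoc so it can be applied in the role checkout; the concrete author, company, license and platform values below are placeholders, not taken from the project:

# Overwrite the role metadata with non-default values that satisfy
# ansible-lint rules 701 (platforms) and 703 (default metadata).
cat > roles/firewall_config/meta/main.yml <<'EOF'
galaxy_info:
  author: Gluster maintainers                    # placeholder
  description: Configure firewalld for Gluster   # placeholder
  company: Example Org                           # placeholder
  license: GPLv3                                 # placeholder
  min_ansible_version: 1.2
  platforms:
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
EOF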
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
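Besides the lint findings, the earlier part of this log shows ./gluster-ansible-infra/tests/run-centos-ci.sh failing with "cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory". The script itself is not shown here, but assuming it loops over role directories and runs molecule in each, a guarded variant of that loop might look like the following hypothetical sketch:

# Hypothetical hardening of the per-role loop in run-centos-ci.sh:
# skip roles whose directory is missing instead of letting `cd` fail.
for role in firewall_config backend_setup; do
    roledir="gluster-ansible-infra/roles/${role}"
    if [ -d "${roledir}" ]; then
        ( cd "${roledir}" && molecule test )
    else
        echo "Skipping ${role}: ${roledir} not found" >&2
    fi
done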
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Mar 25 00:16:47 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 25 Mar 2019 00:16:47 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #313 In-Reply-To: <220403711.6052.1553386611918.JavaMail.jenkins@jenkins.ci.centos.org> References: <220403711.6052.1553386611918.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <832396469.6152.1553473007251.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.40 KB...] Total 60 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2053 0 --:--:-- --:--:-- --:--:-- 2057 16 8513k 16 1439k 0 0 1308k 0 0:00:06 0:00:01 0:00:05 1308k100 8513k 100 8513k 0 0 5090k 0 0:00:01 0:00:01 --:--:-- 12.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2079 0 --:--:-- --:--:-- --:--:-- 2076 49 38.3M 49 19.1M 0 0 13.3M 0 0:00:02 0:00:01 0:00:01 13.3M 99 38.3M 99 38.3M 0 0 15.8M 0 0:00:02 0:00:02 --:--:-- 19.5M100 38.3M 100 38.3M 0 0 15.8M 0 0:00:02 0:00:02 --:--:-- 19.5M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 552 0 --:--:-- --:--:-- --:--:-- 554 0 0 0 620 0 0 1748 0 --:--:-- --:--:-- --:--:-- 1748 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 16.8M 0 --:--:-- --:--:-- --:--:-- 78.3M ~/nightlyrpmhz8vau/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmhz8vau/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmhz8vau/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmhz8vau ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmhz8vau/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmhz8vau/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 53572264b1254ebe9def4459bd4fd73f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.P6VVKk:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
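The post-build step that follows runs cico-node-done-from-ansible.sh to hand the ephemeral Duffy node (n17.gusty in this run) back to the pool; as the shell trace shows, it simply cats the SSID file and calls "cico -q node done" for each ID. A small sketch of a more defensive variant, keeping the SSID_FILE convention from the script but skipping the release when the file is missing or empty (illustrative only, not the project's actual script):

#!/bin/sh
# Release every Duffy node recorded in the SSID file, if any.
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
if [ -s "${SSID_FILE}" ]; then
    while read -r ssid; do
        [ -n "${ssid}" ] && cico -q node done "${ssid}"
    done < "${SSID_FILE}"
else
    echo "No node SSIDs found in ${SSID_FILE}; nothing to release" >&2
fi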
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4717527777684442929.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 420dee33 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 272 | n17.gusty | 172.19.2.145 | gusty | 3358 | Deployed | 420dee33 | None | None | 7 | x86_64 | 1 | 2160 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon Mar 25 00:52:08 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 25 Mar 2019 00:52:08 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #117 In-Reply-To: <1298068816.6059.1553389552635.JavaMail.jenkins@jenkins.ci.centos.org> References: <1298068816.6059.1553389552635.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <123361350.6155.1553475128549.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Mon Mar 25 01:17:33 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 25 Mar 2019 01:17:33 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #142 In-Reply-To: <997290299.6062.1553390587907.JavaMail.jenkins@jenkins.ci.centos.org> References: <997290299.6062.1553390587907.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2065990136.6161.1553476653971.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.43 KB...] 
changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 26 00:16:48 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 26 Mar 2019 00:16:48 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #314 In-Reply-To: <832396469.6152.1553473007251.JavaMail.jenkins@jenkins.ci.centos.org> References: <832396469.6152.1553473007251.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <914228139.6259.1553559408496.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.40 KB...] Total 65 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1998 0 --:--:-- --:--:-- --:--:-- 2009 100 8513k 100 8513k 0 0 12.4M 0 --:--:-- --:--:-- --:--:-- 12.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2188 0 --:--:-- --:--:-- --:--:-- 2184 58 38.3M 58 22.3M 0 0 19.8M 0 0:00:01 0:00:01 --:--:-- 19.8M100 38.3M 100 38.3M 0 0 20.6M 0 0:00:01 0:00:01 --:--:-- 22.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 519 0 --:--:-- --:--:-- --:--:-- 522 0 0 0 620 0 0 1609 0 --:--:-- --:--:-- --:--:-- 1609 100 10.7M 100 10.7M 0 0 16.6M 0 --:--:-- --:--:-- --:--:-- 16.6M ~/nightlyrpmPZcnC5/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmPZcnC5/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmPZcnC5/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmPZcnC5 ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmPZcnC5/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmPZcnC5/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 3332d593772e4c89a2b8a8e20933125f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.e84xB1:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2617523920020999603.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 38a6dfe5 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 117 | n53.pufty | 172.19.3.117 | pufty | 3302 | Deployed | 38a6dfe5 | None | None | 7 | x86_64 | 1 | 2520 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 26 01:07:40 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 26 Mar 2019 01:07:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #118 Message-ID: <71329717.6266.1553562460232.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.25 KB...] changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Tuesday 26 March 2019 00:56:05 +0000 (0:00:34.844) 0:18:19.061 ********* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Tuesday 26 March 2019 00:56:05 +0000 (0:00:00.235) 0:18:19.297 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Tuesday 26 March 2019 00:56:06 +0000 (0:00:00.496) 0:18:19.794 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Tuesday 26 March 2019 00:56:08 +0000 
(0:00:02.263) 0:18:22.058 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Tuesday 26 March 2019 00:56:08 +0000 (0:00:00.495) 0:18:22.554 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Tuesday 26 March 2019 00:56:11 +0000 (0:00:02.266) 0:18:24.820 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Tuesday 26 March 2019 00:56:11 +0000 (0:00:00.540) 0:18:25.361 ********* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Tuesday 26 March 2019 00:56:13 +0000 (0:00:02.160) 0:18:27.521 ********* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Tuesday 26 March 2019 00:56:15 +0000 (0:00:01.627) 0:18:29.149 ********* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Tuesday 26 March 2019 00:56:17 +0000 (0:00:01.783) 0:18:30.933 ********* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Tuesday 26 March 2019 00:56:29 +0000 (0:00:12.134) 0:18:43.067 ********* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Tuesday 26 March 2019 00:56:31 +0000 (0:00:01.550) 0:18:44.617 ********* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Tuesday 26 March 2019 00:56:32 +0000 (0:00:01.234) 0:18:45.852 ********* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Tuesday 26 March 2019 00:56:33 +0000 (0:00:01.220) 0:18:47.072 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Tuesday 26 March 2019 00:56:35 +0000 (0:00:01.662) 0:18:48.735 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Tuesday 26 March 2019 00:56:37 +0000 (0:00:01.859) 0:18:50.594 ********* changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Tuesday 26 March 2019 00:56:38 +0000 (0:00:01.296) 0:18:51.890 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Tuesday 26 March 2019 00:56:38 +0000 (0:00:00.321) 0:18:52.212 ********* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Tuesday 26 March 2019 00:57:52 +0000 (0:01:13.429) 0:20:05.641 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Tuesday 26 March 2019 00:57:53 +0000 (0:00:01.676) 0:20:07.318 ********* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 26 March 2019 00:57:53 +0000 (0:00:00.208) 0:20:07.527 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Tuesday 26 March 2019 00:57:54 +0000 (0:00:00.321) 0:20:07.849 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 26 March 2019 00:57:55 +0000 (0:00:01.427) 0:20:09.276 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Tuesday 26 March 2019 00:57:56 +0000 (0:00:00.369) 0:20:09.645 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 26 March 2019 00:57:57 +0000 (0:00:01.680) 0:20:11.326 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Tuesday 26 March 2019 00:57:58 +0000 (0:00:00.322) 0:20:11.649 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Tuesday 26 March 2019 00:57:59 +0000 (0:00:01.558) 0:20:13.208 ********* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Tuesday 26 March 2019 00:58:00 +0000 (0:00:01.317) 0:20:14.526 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Tuesday 26 March 2019 00:58:01 +0000 (0:00:00.374) 0:20:14.900 ********* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.32.7:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Tuesday 26 March 2019 01:07:39 +0000 (0:09:38.474) 0:29:53.374 ********* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.47s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 73.43s download : container_download | download images for kubeadm config images -- 44.32s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.16s kubernetes/master : kubeadm | Initialize first master ------------------ 39.09s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 34.84s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.79s Install packages ------------------------------------------------------- 31.23s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.22s Wait for host to be available ------------------------------------------ 21.05s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.27s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.50s etcd : wait for etcd up ------------------------------------------------ 13.46s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.13s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.00s gather facts from all instances ---------------------------------------- 12.66s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.13s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.59s download : file_download | Download item ------------------------------- 11.00s container-engine/docker : Docker | pause while Docker restarts --------- 10.41s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Mar 26 01:15:40 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 26 Mar 2019 01:15:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #143 In-Reply-To: <2065990136.6161.1553476653971.JavaMail.jenkins@jenkins.ci.centos.org> References: <2065990136.6161.1553476653971.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1981856781.6267.1553562940974.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.19 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
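These are the same two rules ([701] and [703]) that failed the earlier gluster-ansible-infra runs above, with the same effect: the molecule 'lint' action aborts the default scenario before converge ever runs. Besides fixing meta/main.yml as sketched earlier, the rules can be silenced temporarily with an ansible-lint config file; a sketch, assuming the ansible-lint version used in this job reads a .ansible-lint file from the directory it is invoked in:

    # .ansible-lint -- the exact location relative to the molecule run is an assumption
    skip_list:
      - '701'   # Role info should contain platforms
      - '703'   # Should change default metadata

Skipping only hides the warnings, though; filling in galaxy_info is the cleaner fix.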
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 27 00:16:17 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 27 Mar 2019 00:16:17 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #315 In-Reply-To: <914228139.6259.1553559408496.JavaMail.jenkins@jenkins.ci.centos.org> References: <914228139.6259.1553559408496.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1081247985.6374.1553645777567.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.40 KB...] Total 57 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1923 0 --:--:-- --:--:-- --:--:-- 1932 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 14 8513k 14 1239k 0 0 861k 0 0:00:09 0:00:01 0:00:08 10.8M100 8513k 100 8513k 0 0 5613k 0 0:00:01 0:00:01 --:--:-- 43.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2247 0 --:--:-- --:--:-- --:--:-- 2255 74 38.3M 74 28.6M 0 0 31.7M 0 0:00:01 --:--:-- 0:00:01 31.7M100 38.3M 100 38.3M 0 0 33.7M 0 0:00:01 0:00:01 --:--:-- 41.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 555 0 --:--:-- --:--:-- --:--:-- 558 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1574 0 --:--:-- --:--:-- --:--:-- 605k 100 10.7M 100 10.7M 0 0 10.6M 0 0:00:01 0:00:01 --:--:-- 10.6M ~/nightlyrpmCTFJ1h/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmCTFJ1h/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmCTFJ1h/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmCTFJ1h ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmCTFJ1h/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmCTFJ1h/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 71731e6516874134ab65a2c4c7bbbb48 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.oZfzis:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins512899053045255537.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done dfed9f4d +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 155 | n28.crusty | 172.19.2.28 | crusty | 3388 | Deployed | dfed9f4d | None | None | 7 | x86_64 | 1 | 2270 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 27 00:55:51 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 27 Mar 2019 00:55:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #119 In-Reply-To: <71329717.6266.1553562460232.JavaMail.jenkins@jenkins.ci.centos.org> References: <71329717.6266.1553562460232.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <824652007.6376.1553648151063.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.34 KB...] 
changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 27 March 2019 00:45:23 +0000 (0:00:11.929) 0:10:33.389 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 27 March 2019 00:45:23 +0000 (0:00:00.090) 0:10:33.480 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 27 March 2019 00:45:23 +0000 (0:00:00.216) 0:10:33.697 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 27 March 2019 00:45:24 +0000 (0:00:00.808) 0:10:34.505 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 27 March 2019 00:45:24 +0000 (0:00:00.212) 0:10:34.717 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 27 March 2019 00:45:25 +0000 (0:00:00.806) 0:10:35.524 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 27 March 2019 00:45:25 +0000 (0:00:00.206) 0:10:35.731 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 27 March 2019 00:45:26 +0000 (0:00:00.796) 0:10:36.527 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 27 March 2019 00:45:26 +0000 (0:00:00.736) 0:10:37.264 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 27 March 2019 00:45:27 +0000 (0:00:00.764) 0:10:38.028 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Wednesday 27 March 2019 00:45:38 +0000 (0:00:10.961) 0:10:48.990 ******* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Wednesday 27 March 2019 00:45:39 +0000 (0:00:00.693) 0:10:49.683 ******* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Wednesday 27 March 2019 00:45:39 +0000 (0:00:00.533) 0:10:50.217 ******* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Wednesday 27 March 2019 00:45:40 +0000 (0:00:00.552) 0:10:50.769 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Wednesday 27 March 2019 00:45:41 +0000 (0:00:00.747) 0:10:51.517 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Wednesday 27 March 2019 00:45:42 +0000 (0:00:00.969) 0:10:52.487 ******* FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Wednesday 27 March 2019 00:45:47 +0000 (0:00:05.834) 0:10:58.321 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Wednesday 27 March 2019 00:45:48 +0000 (0:00:00.145) 0:10:58.467 ******* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Wednesday 27 March 2019 00:46:53 +0000 (0:01:05.256) 0:12:03.724 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Wednesday 27 March 2019 00:46:54 +0000 (0:00:00.835) 0:12:04.559 ******* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 27 March 2019 00:46:54 +0000 (0:00:00.102) 0:12:04.662 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Wednesday 27 March 2019 00:46:54 +0000 (0:00:00.143) 0:12:04.805 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 27 March 2019 00:46:55 +0000 (0:00:00.725) 0:12:05.531 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 27 March 2019 00:46:55 +0000 (0:00:00.145) 0:12:05.677 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 27 March 2019 00:46:56 +0000 (0:00:00.803) 0:12:06.480 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 27 March 2019 00:46:56 +0000 (0:00:00.150) 0:12:06.631 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 27 March 2019 00:46:57 +0000 (0:00:00.697) 0:12:07.329 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 27 March 2019 00:46:57 +0000 (0:00:00.498) 0:12:07.827 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 27 March 2019 00:46:57 +0000 (0:00:00.170) 0:12:07.998 ******* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.53.85:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 27 March 2019 00:55:50 +0000 (0:08:53.152) 0:21:01.151 ******* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 533.15s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 65.26s download : container_download | download images for kubeadm config images -- 35.48s kubernetes/master : kubeadm | Initialize first master ------------------ 33.25s kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.17s Install packages ------------------------------------------------------- 23.55s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 21.05s Wait for host to be available ------------------------------------------ 16.48s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.52s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.74s Extend root VG --------------------------------------------------------- 12.64s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.17s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.93s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.96s etcd : reload etcd ----------------------------------------------------- 10.92s container-engine/docker : Docker | pause while Docker restarts --------- 10.25s gather facts from all instances ----------------------------------------- 8.45s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests --- 7.95s kubernetes/node : install | Copy hyperkube binary from download dir ----- 7.92s download : file_download | Download item -------------------------------- 7.68s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Mar 27 01:09:04 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 27 Mar 2019 01:09:04 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #144 In-Reply-To: <1981856781.6267.1553562940974.JavaMail.jenkins@jenkins.ci.centos.org> References: <1981856781.6267.1553562940974.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1589054727.6400.1553648944218.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.25 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
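[Note] Both [701] and [703] come from ansible-lint flagging the untouched ansible-galaxy init defaults in roles/firewall_config/meta/main.yml: galaxy_info still carries the placeholder author/description/company/license values and has no platforms list. A sketch of a meta/main.yml that would satisfy both rules (the concrete values below are illustrative, not the project's actual metadata):

  galaxy_info:
    author: gluster-ansible maintainers      # replace the "your name" placeholder
    description: Firewall configuration role for GlusterFS nodes
    company: gluster.org                     # optional; illustrative value
    license: GPLv3                           # pick the project's real license
    min_ansible_version: 2.5                 # or whatever the role actually requires
    platforms:                               # rule 701 wants at least one platform
      - name: EL
        versions:
          - 7
    galaxy_tags: []
  dependencies: []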
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 28 00:16:49 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 28 Mar 2019 00:16:49 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #316 In-Reply-To: <1081247985.6374.1553645777567.JavaMail.jenkins@jenkins.ci.centos.org> References: <1081247985.6374.1553645777567.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1945468890.6567.1553732210092.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.40 KB...] Total 46 MB/s | 143 MB 00:03 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1584 0 --:--:-- --:--:-- --:--:-- 1583 100 8513k 100 8513k 0 0 14.1M 0 --:--:-- --:--:-- --:--:-- 14.1M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2365 0 --:--:-- --:--:-- --:--:-- 2375 83 38.3M 83 32.1M 0 0 46.2M 0 --:--:-- --:--:-- --:--:-- 46.2M100 38.3M 100 38.3M 0 0 50.2M 0 --:--:-- --:--:-- --:--:-- 91.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 584 0 --:--:-- --:--:-- --:--:-- 586 0 0 0 620 0 0 1731 0 --:--:-- --:--:-- --:--:-- 1731 1 10.7M 1 127k 0 0 253k 0 0:00:43 --:--:-- 0:00:43 253k100 10.7M 100 10.7M 0 0 17.5M 0 --:--:-- --:--:-- --:--:-- 98.2M ~/nightlyrpmeouTwd/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmeouTwd/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmeouTwd/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmeouTwd ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmeouTwd/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmeouTwd/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 94117e151505408b857e85ea0a74e599 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.qma9HW:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins135818531663423764.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 3bf22729 +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 135 | n8.crusty | 172.19.2.8 | crusty | 3394 | Deployed | 3bf22729 | None | None | 7 | x86_64 | 1 | 2070 | None | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 28 01:04:43 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 28 Mar 2019 01:04:43 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #120 In-Reply-To: <824652007.6376.1553648151063.JavaMail.jenkins@jenkins.ci.centos.org> References: <824652007.6376.1553648151063.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <976628004.6574.1553735083655.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 398.78 KB...] 
TASK [network_plugin/contiv : Contiv | Copy the generated certificate on nodes] *** Thursday 28 March 2019 00:54:03 +0000 (0:00:00.223) 0:14:22.920 ******** TASK [network_plugin/contiv : Contiv | Set cni directory permissions] ********** Thursday 28 March 2019 00:54:03 +0000 (0:00:00.353) 0:14:23.273 ******** TASK [network_plugin/contiv : Contiv | Copy cni plugins] *********************** Thursday 28 March 2019 00:54:04 +0000 (0:00:00.339) 0:14:23.612 ******** TASK [network_plugin/contiv : Contiv | Copy netctl binary from docker container] *** Thursday 28 March 2019 00:54:04 +0000 (0:00:00.322) 0:14:23.935 ******** TASK [network_plugin/kube-router : kube-router | Add annotations on kube-master] *** Thursday 28 March 2019 00:54:04 +0000 (0:00:00.346) 0:14:24.281 ******** TASK [network_plugin/kube-router : kube-router | Add annotations on kube-node] *** Thursday 28 March 2019 00:54:05 +0000 (0:00:00.330) 0:14:24.611 ******** TASK [network_plugin/kube-router : kube-router | Add common annotations on all servers] *** Thursday 28 March 2019 00:54:05 +0000 (0:00:00.275) 0:14:24.887 ******** TASK [network_plugin/kube-router : kube-roter | Set cni directory permissions] *** Thursday 28 March 2019 00:54:05 +0000 (0:00:00.260) 0:14:25.148 ******** TASK [network_plugin/kube-router : kube-router | Copy cni plugins] ************* Thursday 28 March 2019 00:54:05 +0000 (0:00:00.273) 0:14:25.422 ******** TASK [network_plugin/kube-router : kube-router | Create manifest] ************** Thursday 28 March 2019 00:54:06 +0000 (0:00:00.324) 0:14:25.747 ******** TASK [network_plugin/cloud : Cloud | Set cni directory permissions] ************ Thursday 28 March 2019 00:54:06 +0000 (0:00:00.285) 0:14:26.032 ******** TASK [network_plugin/cloud : Canal | Copy cni plugins] ************************* Thursday 28 March 2019 00:54:06 +0000 (0:00:00.248) 0:14:26.282 ******** TASK [network_plugin/multus : Multus | Copy manifest files] ******************** Thursday 28 March 2019 00:54:06 +0000 (0:00:00.261) 0:14:26.543 ******** TASK [network_plugin/multus : Multus | Copy manifest templates] **************** Thursday 28 March 2019 00:54:07 +0000 (0:00:00.442) 0:14:26.985 ******** RUNNING HANDLER [kubernetes/kubeadm : restart kubelet] ************************* Thursday 28 March 2019 00:54:07 +0000 (0:00:00.213) 0:14:27.199 ******** changed: [kube3] PLAY [kube-master[0]] ********************************************************** TASK [download : include_tasks] ************************************************ Thursday 28 March 2019 00:54:08 +0000 (0:00:01.318) 0:14:28.517 ******** TASK [download : Download items] *********************************************** Thursday 28 March 2019 00:54:09 +0000 (0:00:00.164) 0:14:28.682 ******** TASK [download : Sync container] *********************************************** Thursday 28 March 2019 00:54:10 +0000 (0:00:01.637) 0:14:30.320 ******** TASK [download : include_tasks] ************************************************ Thursday 28 March 2019 00:54:12 +0000 (0:00:01.600) 0:14:31.920 ******** TASK [kubespray-defaults : Configure defaults] ********************************* Thursday 28 March 2019 00:54:12 +0000 (0:00:00.161) 0:14:32.082 ******** ok: [kube1] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token name] *** Thursday 28 March 2019 00:54:12 +0000 (0:00:00.484) 0:14:32.567 ******** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token data] *** Thursday 28 March 2019 
00:54:14 +0000 (0:00:01.573) 0:14:34.140 ******** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Test if default certificate is expired] *** Thursday 28 March 2019 00:54:15 +0000 (0:00:01.288) 0:14:35.429 ******** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Determine if certificate is expired] *** Thursday 28 March 2019 00:54:17 +0000 (0:00:01.910) 0:14:37.340 ******** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get all serviceaccount tokens to expire] *** Thursday 28 March 2019 00:54:18 +0000 (0:00:00.515) 0:14:37.855 ******** TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete expired tokens] *** Thursday 28 March 2019 00:54:18 +0000 (0:00:00.155) 0:14:38.011 ******** TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete pods in system namespace] *** Thursday 28 March 2019 00:54:18 +0000 (0:00:00.136) 0:14:38.147 ******** TASK [win_nodes/kubernetes_patch : Ensure that user manifests directory exists] *** Thursday 28 March 2019 00:54:18 +0000 (0:00:00.166) 0:14:38.314 ******** changed: [kube1] TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset hostnameOverride patch] *** Thursday 28 March 2019 00:54:19 +0000 (0:00:01.009) 0:14:39.324 ******** changed: [kube1] TASK [win_nodes/kubernetes_patch : Check current command for kube-proxy daemonset] *** Thursday 28 March 2019 00:54:21 +0000 (0:00:02.191) 0:14:41.515 ******** changed: [kube1] TASK [win_nodes/kubernetes_patch : Apply hostnameOverride patch for kube-proxy daemonset] *** Thursday 28 March 2019 00:54:23 +0000 (0:00:01.424) 0:14:42.940 ******** changed: [kube1] TASK [win_nodes/kubernetes_patch : debug] ************************************** Thursday 28 March 2019 00:54:24 +0000 (0:00:01.506) 0:14:44.446 ******** ok: [kube1] => { "msg": [ "daemonset.extensions/kube-proxy patched" ] } TASK [win_nodes/kubernetes_patch : debug] ************************************** Thursday 28 March 2019 00:54:25 +0000 (0:00:00.469) 0:14:44.916 ******** ok: [kube1] => { "msg": [] } TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset nodeselector patch] *** Thursday 28 March 2019 00:54:25 +0000 (0:00:00.539) 0:14:45.455 ******** changed: [kube1] TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] *** Thursday 28 March 2019 00:54:28 +0000 (0:00:02.330) 0:14:47.786 ******** changed: [kube1] TASK [win_nodes/kubernetes_patch : Apply nodeselector patch for kube-proxy daemonset] *** Thursday 28 March 2019 00:54:29 +0000 (0:00:01.352) 0:14:49.138 ******** changed: [kube1] TASK [win_nodes/kubernetes_patch : debug] ************************************** Thursday 28 March 2019 00:54:31 +0000 (0:00:01.484) 0:14:50.623 ******** ok: [kube1] => { "msg": [ "daemonset.extensions/kube-proxy patched" ] } TASK [win_nodes/kubernetes_patch : debug] ************************************** Thursday 28 March 2019 00:54:31 +0000 (0:00:00.507) 0:14:51.131 ******** ok: [kube1] => { "msg": [] } PLAY [kube-master] ************************************************************* TASK [download : include_tasks] ************************************************ Thursday 28 March 2019 00:54:32 +0000 (0:00:00.668) 0:14:51.799 ******** TASK [download : Download items] *********************************************** Thursday 28 March 2019 00:54:32 +0000 (0:00:00.198) 0:14:51.997 ******** TASK [download : Sync container] *********************************************** Thursday 28 March 2019 00:54:34 +0000 (0:00:01.779) 0:14:53.777 ******** TASK [download : 
include_tasks] ************************************************ Thursday 28 March 2019 00:54:36 +0000 (0:00:01.843) 0:14:55.620 ******** TASK [kubespray-defaults : Configure defaults] ********************************* Thursday 28 March 2019 00:54:36 +0000 (0:00:00.226) 0:14:55.847 ******** ok: [kube1] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } ok: [kube2] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [kubernetes-apps/network_plugin/cilium : Cilium | Start Resources] ******** Thursday 28 March 2019 00:54:36 +0000 (0:00:00.455) 0:14:56.303 ******** TASK [kubernetes-apps/network_plugin/cilium : Cilium | Wait for pods to run] *** Thursday 28 March 2019 00:54:37 +0000 (0:00:00.381) 0:14:56.684 ******** TASK [kubernetes-apps/network_plugin/calico : Start Calico resources] ********** Thursday 28 March 2019 00:54:37 +0000 (0:00:00.210) 0:14:56.894 ******** TASK [kubernetes-apps/network_plugin/calico : calico upgrade complete] ********* Thursday 28 March 2019 00:54:37 +0000 (0:00:00.257) 0:14:57.151 ******** TASK [kubernetes-apps/network_plugin/canal : Canal | Start Resources] ********** Thursday 28 March 2019 00:54:37 +0000 (0:00:00.283) 0:14:57.434 ******** TASK [kubernetes-apps/network_plugin/flannel : Flannel | Start Resources] ****** Thursday 28 March 2019 00:54:38 +0000 (0:00:00.403) 0:14:57.838 ******** ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'973704ff91b4c9341dccaf1da6003177', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 836, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734431.65-78430218277608/source', u'group': u'root', '_ansible_item_label': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, 'item': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', u'selevel': None, u'_original_basename': u'cni-flannel-rbac.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734431.65-78430218277608/source', u'group': None, u'unsafe_writes': None, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None}) ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'51829ca2a2d540389c94291f63118112', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 3198, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734433.4-157087191596506/source', u'group': u'root', '_ansible_item_label': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, 'item': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': 
True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel.yml', u'selevel': None, u'_original_basename': u'cni-flannel.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734433.4-157087191596506/source', u'group': None, u'unsafe_writes': None, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None}) TASK [kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence] *** Thursday 28 March 2019 00:54:41 +0000 (0:00:03.164) 0:15:01.003 ******** ok: [kube1] fatal: [kube2]: FAILED! => {"changed": false, "elapsed": 600, "msg": "Timeout when waiting for file /run/flannel/subnet.env"} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=364 changed=103 unreachable=0 failed=0 kube2 : ok=315 changed=91 unreachable=0 failed=1 kube3 : ok=282 changed=78 unreachable=0 failed=0 Thursday 28 March 2019 01:04:43 +0000 (0:10:01.825) 0:25:02.829 ******** =============================================================================== kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence - 601.83s kubernetes/master : kubeadm | Initialize first master ------------------ 39.63s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.78s download : container_download | download images for kubeadm config images -- 35.03s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.62s Install packages ------------------------------------------------------- 31.93s Wait for host to be available ------------------------------------------ 20.97s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.25s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.71s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.16s gather facts from all instances ---------------------------------------- 13.11s etcd : reload etcd ----------------------------------------------------- 11.90s container-engine/docker : Docker | pause while Docker restarts --------- 10.39s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.96s download : file_download | Download item -------------------------------- 9.40s kubernetes/master : slurp kubeadm certs --------------------------------- 8.18s etcd : wait for etcd up ------------------------------------------------- 8.12s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 8.01s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 6.80s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 6.01s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. 
Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Mar 28 01:20:00 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 28 Mar 2019 01:20:00 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #145 In-Reply-To: <1589054727.6400.1553648944218.JavaMail.jenkins@jenkins.ci.centos.org> References: <1589054727.6400.1553648944218.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <60566049.6577.1553736000765.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.25 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
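[Note] The same two lint rules abort this run as well. If fixing the role metadata (as sketched earlier) is not wanted on the CI side, these particular checks could instead be skipped through an ansible-lint configuration file, assuming the ansible-lint invoked by molecule picks up a repository-level config (or one passed with -c). A sketch:

  # .ansible-lint -- hypothetical CI-side workaround; skips only the galaxy
  # metadata rules that fail above, everything else is still linted
  skip_list:
    - '701'   # Role info should contain platforms
    - '703'   # Should change default metadata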
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 29 00:15:58 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 29 Mar 2019 00:15:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #317 In-Reply-To: <1945468890.6567.1553732210092.JavaMail.jenkins@jenkins.ci.centos.org> References: <1945468890.6567.1553732210092.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1907262849.6727.1553818558106.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.40 KB...] Total 75 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2296 0 --:--:-- --:--:-- --:--:-- 2300 100 8513k 100 8513k 0 0 18.2M 0 --:--:-- --:--:-- --:--:-- 18.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1175 0 --:--:-- --:--:-- --:--:-- 1174 100 38.3M 100 38.3M 0 0 36.4M 0 0:00:01 0:00:01 --:--:-- 36.4M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 553 0 --:--:-- --:--:-- --:--:-- 554 0 0 0 620 0 0 1323 0 --:--:-- --:--:-- --:--:-- 1323 100 10.7M 100 10.7M 0 0 14.8M 0 --:--:-- --:--:-- --:--:-- 14.8M ~/nightlyrpmfhfhJJ/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmfhfhJJ/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz Created dist archive /root/nightlyrpmfhfhJJ/glusterd2-v6.0-dev.151.git9054f74-vendor.tar.xz ~ ~/nightlyrpmfhfhJJ ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmfhfhJJ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmfhfhJJ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.151.git9054f74.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 21761ec4f5854abeb049a2fe9929d59c -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.OAz5GK:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5090490271648417793.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done ed27f488 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 156 | n29.crusty | 172.19.2.29 | crusty | 3402 | Deployed | ed27f488 | None | None | 7 | x86_64 | 1 | 2280 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 29 00:57:25 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 29 Mar 2019 00:57:25 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #121 In-Reply-To: <976628004.6574.1553735083655.JavaMail.jenkins@jenkins.ci.centos.org> References: <976628004.6574.1553735083655.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <643007453.6733.1553821045336.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.58 KB...] 
changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Friday 29 March 2019 00:47:06 +0000 (0:00:12.097) 0:10:41.079 ********** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Friday 29 March 2019 00:47:07 +0000 (0:00:00.090) 0:10:41.170 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Friday 29 March 2019 00:47:07 +0000 (0:00:00.209) 0:10:41.379 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Friday 29 March 2019 00:47:08 +0000 (0:00:00.799) 0:10:42.179 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Friday 29 March 2019 00:47:08 +0000 (0:00:00.206) 0:10:42.385 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Friday 29 March 2019 00:47:09 +0000 (0:00:00.804) 0:10:43.190 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Friday 29 March 2019 00:47:09 +0000 (0:00:00.214) 0:10:43.404 ********** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Friday 29 March 2019 00:47:10 +0000 (0:00:00.798) 0:10:44.202 ********** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Friday 29 March 2019 00:47:10 +0000 (0:00:00.700) 0:10:44.903 ********** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Friday 29 March 2019 00:47:11 +0000 (0:00:00.760) 0:10:45.663 ********** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Friday 29 March 2019 00:47:22 +0000 (0:00:10.904) 0:10:56.568 ********** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Friday 29 March 2019 00:47:23 +0000 (0:00:00.717) 0:10:57.285 ********** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Friday 29 March 2019 00:47:23 +0000 (0:00:00.559) 0:10:57.845 ********** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Friday 29 March 2019 00:47:24 +0000 (0:00:00.534) 0:10:58.380 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Friday 29 March 2019 00:47:24 +0000 (0:00:00.743) 0:10:59.123 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Friday 29 March 2019 00:47:25 +0000 (0:00:00.967) 0:11:00.091 ********** FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Friday 29 March 2019 00:47:31 +0000 (0:00:05.956) 0:11:06.047 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Friday 29 March 2019 00:47:32 +0000 (0:00:00.231) 0:11:06.279 ********** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Friday 29 March 2019 00:48:26 +0000 (0:00:54.712) 0:12:00.992 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Friday 29 March 2019 00:48:27 +0000 (0:00:00.776) 0:12:01.768 ********** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 29 March 2019 00:48:27 +0000 (0:00:00.101) 0:12:01.869 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Friday 29 March 2019 00:48:27 +0000 (0:00:00.144) 0:12:02.014 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 29 March 2019 00:48:28 +0000 (0:00:00.666) 0:12:02.681 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Friday 29 March 2019 00:48:28 +0000 (0:00:00.161) 0:12:02.842 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 29 March 2019 00:48:29 +0000 (0:00:00.944) 0:12:03.787 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Friday 29 March 2019 00:48:29 +0000 (0:00:00.143) 0:12:03.931 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Friday 29 March 2019 00:48:30 +0000 (0:00:00.761) 0:12:04.692 ********** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Friday 29 March 2019 00:48:31 +0000 (0:00:00.544) 0:12:05.236 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Friday 29 March 2019 00:48:31 +0000 (0:00:00.158) 0:12:05.395 ********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.3.206:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=284 changed=78 unreachable=0 failed=0 Friday 29 March 2019 00:57:25 +0000 (0:08:53.823) 0:20:59.219 ********** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 533.82s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 54.71s download : container_download | download images for kubeadm config images -- 39.32s kubernetes/master : kubeadm | Initialize first master ------------------ 26.50s kubernetes/master : kubeadm | Init other uninitialized masters --------- 24.86s Install packages ------------------------------------------------------- 24.12s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.92s Wait for host to be available ------------------------------------------ 16.39s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.51s Extend root VG --------------------------------------------------------- 13.48s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.14s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.94s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 12.10s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.90s etcd : reload etcd ----------------------------------------------------- 10.87s container-engine/docker : Docker | pause while Docker restarts --------- 10.17s kubernetes/node : Enable bridge-nf-call tables ------------------------- 10.15s etcd : Configure | Check if etcd cluster is healthy --------------------- 8.08s gather facts from all instances ----------------------------------------- 7.96s etcd : wait for etcd up ------------------------------------------------- 7.83s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
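The task that exhausts its 50 attempts above ("Wait for glusterd2-cluster to become ready") reports "Status code was -1 and not [200]", which is the failure shape of Ansible's uri module used in a retry loop against the endpoint set by the preceding "Set gd2_client_endpoint" task. A rough sketch of that pattern follows; the registered variable name and the delay value are assumptions, not copied from the gcs deploy playbook:

    - name: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready
      uri:
        url: "{{ gd2_client_endpoint }}/v1/peers"   # resolved to http://10.233.3.206:24007/v1/peers in this run
        status_code: 200
      register: peers
      until: peers.status == 200
      retries: 50
      delay: 10        # assumed; the log only shows the attempt count

In this build the peers endpoint never answered, so after 50 attempts the play fails on kube1 and the recap marks the run as failed.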
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Mar 29 01:10:26 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 29 Mar 2019 01:10:26 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #146 In-Reply-To: <60566049.6577.1553736000765.JavaMail.jenkins@jenkins.ci.centos.org> References: <60566049.6577.1553736000765.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <272767995.6738.1553821826469.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.24 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
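The scenario runs above all follow the same shape: molecule validates molecule.yml, drives a Docker instance through create and prepare, and then aborts in the lint action. A rough sketch of the kind of molecule.yml (molecule 2.x layout) that produces this driver and test sequence is given below; the image name and verifier details are assumptions, since the log only shows the instance name and the lint tools:

    # molecule/default/molecule.yml (assumed shape)
    driver:
      name: docker
    lint:
      name: yamllint
    platforms:
      - name: instance
        image: centos:7          # assumed; the log only names the instance
    provisioner:
      name: ansible
      lint:
        name: ansible-lint
    scenario:
      name: default
      test_sequence:
        - lint
        - cleanup
        - destroy
        - dependency
        - syntax
        - create
        - prepare
        - converge
        - idempotence
        - side_effect
        - verify
        - cleanup
        - destroy
    verifier:
      name: testinfra            # assumed from the Flake8 step on molecule/default/tests/
      lint:
        name: flake8

Because the sequence begins with lint, the metadata findings stop the run before converge or verify ever execute.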
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 30 00:18:13 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 30 Mar 2019 00:18:13 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #318 In-Reply-To: <1907262849.6727.1553818558106.JavaMail.jenkins@jenkins.ci.centos.org> References: <1907262849.6727.1553818558106.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1590185073.6859.1553905093267.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] [GLUSTO-JOB]Remove functional test test_vvt.py from glusto ------------------------------------------ [...truncated 37.39 KB...] Total 66 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.1.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : perl-srpm-macros-1-8.el7.noarch 10/49 Installing : unzip-6.0-19.el7.x86_64 11/49 Installing : dwz-0.11-3.el7.x86_64 12/49 Installing : bzip2-1.0.6-13.el7.x86_64 13/49 Installing : usermode-1.111-5.el7.x86_64 14/49 Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 
Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 
0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1183 0 --:--:-- --:--:-- --:--:-- 1186 79 8513k 79 6748k 0 0 7445k 0 0:00:01 --:--:-- 0:00:01 7445k100 8513k 100 8513k 0 0 8992k 0 --:--:-- --:--:-- --:--:-- 43.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1177 0 --:--:-- --:--:-- --:--:-- 1178 36 38.3M 36 13.9M 0 0 14.8M 0 0:00:02 --:--:-- 0:00:02 14.8M100 38.3M 100 38.3M 0 0 28.3M 0 0:00:01 0:00:01 --:--:-- 59.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 302 0 --:--:-- --:--:-- --:--:-- 301 0 0 0 620 0 0 914 0 --:--:-- --:--:-- --:--:-- 914 100 10.7M 100 10.7M 0 0 11.2M 0 --:--:-- --:--:-- --:--:-- 11.2M ~/nightlyrpmOWdlzA/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmOWdlzA/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmOWdlzA/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmOWdlzA ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmOWdlzA/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmOWdlzA/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M cd64518d1aec4061848e2048fe6a8d30 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.lo399I:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins9209219582049345526.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 5054537f +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 135 | n8.crusty | 172.19.2.8 | crusty | 3409 | Deployed | 5054537f | None | None | 7 | x86_64 | 1 | 2070 | None | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat Mar 30 01:04:30 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 30 Mar 2019 01:04:30 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #122 In-Reply-To: <643007453.6733.1553821045336.JavaMail.jenkins@jenkins.ci.centos.org> References: <643007453.6733.1553821045336.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <706212547.6868.1553907870106.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Sat Mar 30 01:17:24 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 30 Mar 2019 01:17:24 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #147 In-Reply-To: <272767995.6738.1553821826469.JavaMail.jenkins@jenkins.ci.centos.org> References: <272767995.6738.1553821826469.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1972954851.6870.1553908644443.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] [GLUSTO-JOB]Remove functional test test_vvt.py from glusto ------------------------------------------ [...truncated 55.20 KB...] 
changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 31 00:14:45 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:14:45 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #319 In-Reply-To: <1590185073.6859.1553905093267.JavaMail.jenkins@jenkins.ci.centos.org> References: <1590185073.6859.1553905093267.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1421968597.6920.1553991286150.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.95 KB...] Installing : python2-distro-1.2.0-1.el7.noarch 15/49 Installing : patch-2.7.1-10.el7_5.x86_64 16/49 Installing : python-backports-1.0-8.el7.x86_64 17/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/49 Installing : python-urllib3-1.10.2-5.el7.noarch 19/49 Installing : python-requests-2.6.0-1.el7_1.noarch 20/49 Installing : distribution-gpg-keys-1.29-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : libmodman-2.0.1-8.el7.x86_64 24/49 Installing : libproxy-0.4.11-11.el7.x86_64 25/49 Installing : python-markupsafe-0.11-10.el7.x86_64 26/49 Installing : python-jinja2-2.7.2-2.el7.noarch 27/49 Installing : gdb-7.6.1-114.el7.x86_64 28/49 Installing : kernel-headers-3.10.0-957.10.1.el7.x86_64 29/49 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 30/49 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 31/49 Installing : gcc-4.8.5-36.el7_6.1.x86_64 32/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 33/49 Installing : python2-pyroute2-0.4.13-1.el7.noarch 34/49 Installing : pigz-2.3.4-1.el7.x86_64 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 5/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 6/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/49 Verifying : zip-3.0-11.el7.x86_64 8/49 Verifying : nettle-2.7.1-8.el7.x86_64 9/49 Verifying : cpp-4.8.5-36.el7_6.1.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 16/49 Verifying : kernel-headers-3.10.0-957.10.1.el7.x86_64 17/49 Verifying : 
golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 23/49 Verifying : libmodman-2.0.1-8.el7.x86_64 24/49 Verifying : mpfr-3.1.1-4.el7.x86_64 25/49 Verifying : python-babel-0.9.6-8.el7.noarch 26/49 Verifying : distribution-gpg-keys-1.29-1.el7.noarch 27/49 Verifying : apr-util-1.5.2-6.el7.x86_64 28/49 Verifying : python-backports-1.0-8.el7.x86_64 29/49 Verifying : patch-2.7.1-10.el7_5.x86_64 30/49 Verifying : libmpc-1.0.1-3.el7.x86_64 31/49 Verifying : python2-distro-1.2.0-1.el7.noarch 32/49 Verifying : usermode-1.111-5.el7.x86_64 33/49 Verifying : python-six-1.9.0-2.el7.noarch 34/49 Verifying : libproxy-0.4.11-11.el7.x86_64 35/49 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 36/49 Verifying : gcc-4.8.5-36.el7_6.1.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : perl-srpm-macros-1-8.el7.noarch 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.1 distribution-gpg-keys.noarch 0:1.29-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.1 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.10.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1019 0 --:--:-- --:--:-- --:--:-- 1021 0 8513k 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 9548k 0 --:--:-- --:--:-- --:--:-- 66.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1137 0 --:--:-- --:--:-- --:--:-- 1140 31 38.3M 31 12.2M 0 0 14.5M 0 0:00:02 --:--:-- 0:00:02 14.5M100 38.3M 100 38.3M 0 0 35.5M 0 0:00:01 0:00:01 --:--:-- 112M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 295 0 --:--:-- --:--:-- --:--:-- 295 0 0 0 620 0 0 966 0 --:--:-- --:--:-- --:--:-- 966 100 10.7M 100 10.7M 0 0 12.1M 0 --:--:-- --:--:-- --:--:-- 12.1M ~/nightlyrpmE3l7tc/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmE3l7tc/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmE3l7tc/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmE3l7tc ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmE3l7tc/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install ERROR: Exception(/root/nightlyrpmE3l7tc/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 0 minutes 1 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/yum --installroot /var/lib/mock/epel-7-x86_64/root/ --releasever 7 install @buildsys-build Failed to set locale, defaulting to C One of the configured repositories failed (Unknown), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo= ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable or subscription-manager repos --disable= 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). 
If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=.skip_if_unavailable=true Cannot find a valid baseurl for repo: epel Could not retrieve mirrorlist http://mirrors.fedoraproject.org/mirrorlist?repo=epel-7&arch=x86_64 error was 14: HTTP Error 503 - Service Unavailable Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5916712769625035751.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 96b3ce90 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 159 | n32.crusty | 172.19.2.32 | crusty | 3416 | Deployed | 96b3ce90 | None | None | 7 | x86_64 | 1 | 2310 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 31 00:53:20 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:53:20 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9722 - Failure! (master on CentOS-7/x86_64) Message-ID: <857595020.6926.1553993600828.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9722 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/9722/ to view the results. From ci at centos.org Sun Mar 31 00:53:28 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:53:28 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9723 - Still Failing! (master on CentOS-6/x86_64) In-Reply-To: <857595020.6926.1553993600828.JavaMail.jenkins@jenkins.ci.centos.org> References: <857595020.6926.1553993600828.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <514642831.6928.1553993608758.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9723 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/9723/ to view the results. From ci at centos.org Sun Mar 31 00:53:41 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:53:41 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9724 - Still Failing! (release-4.1 on CentOS-6/x86_64) In-Reply-To: <514642831.6928.1553993608758.JavaMail.jenkins@jenkins.ci.centos.org> References: <514642831.6928.1553993608758.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <447407700.6930.1553993621959.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9724 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/9724/ to view the results. 
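For reference, the gluster_gd2-nightly-rpms #319 failure above is an infrastructure error rather than a packaging one: inside the mock chroot (Config epel-7-x86_64), yum could not reach the EPEL mirrorlist (HTTP 503 from mirrors.fedoraproject.org), so the 'install @buildsys-build' step aborted after one second. A small sketch of the checks and the workaround described by the yum hint quoted in that log, with 'epel' filled in as the repo id:

# Sketch only; 'epel' is the repository reported as failing in the log above.
curl -fsS 'http://mirrors.fedoraproject.org/mirrorlist?repo=epel-7&arch=x86_64' | head -n 3   # is the mirrorlist answering again?
yum --disablerepo='*' --enablerepo=epel clean metadata                                        # drop stale cached EPEL metadata
yum-config-manager --save --setopt=epel.skip_if_unavailable=true                              # option 5 from the yum hint above
# For the mock build itself the epel repo is defined in the chroot config
# (/etc/mock/epel-7-x86_64.cfg), so the usual fix is simply to retry once the mirror outage clears.

The same outage is the likely reason the gluster_ansible-infra #148 log further down ends with 'virtualenv: command not found' and 'molecule: command not found': once the EPEL metadata fetch fails, that job's yum installs abort before pip and molecule are ever installed.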
From ci at centos.org Sun Mar 31 00:54:48 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:54:48 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9725 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <447407700.6930.1553993621959.JavaMail.jenkins@jenkins.ci.centos.org> References: <447407700.6930.1553993621959.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1002955171.6934.1553993688425.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9725 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/9725/ to view the results. From ci at centos.org Sun Mar 31 00:54:52 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:54:52 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9726 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <1002955171.6934.1553993688425.JavaMail.jenkins@jenkins.ci.centos.org> References: <1002955171.6934.1553993688425.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <309518176.6936.1553993692464.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9726 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/9726/ to view the results. From ci at centos.org Sun Mar 31 00:55:30 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:55:30 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9727 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <309518176.6936.1553993692464.JavaMail.jenkins@jenkins.ci.centos.org> References: <309518176.6936.1553993692464.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1944869662.6938.1553993731137.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9727 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/9727/ to view the results. From ci at centos.org Sun Mar 31 00:55:48 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:55:48 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #148 In-Reply-To: <1972954851.6870.1553908644443.JavaMail.jenkins@jenkins.ci.centos.org> References: <1972954851.6870.1553908644443.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <669435294.6939.1553993748546.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 12.41 KB...] 
-------------------------------------------------------------------------------- Total 22 MB/s | 16 MB 00:00 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 Importing GPG key 0xF4A80EB5: Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) " Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5 Package : centos-release-7-6.1810.2.el7.centos.x86_64 (@base) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : 1:perl-parent-0.225-244.el7.noarch 1/32 Installing : perl-HTTP-Tiny-0.033-3.el7.noarch 2/32 Installing : perl-podlators-2.5.1-3.el7.noarch 3/32 Installing : perl-Pod-Perldoc-3.20-4.el7.noarch 4/32 Installing : 1:perl-Pod-Escapes-1.04-294.el7_6.noarch 5/32 Installing : perl-Encode-2.51-7.el7.x86_64 6/32 Installing : perl-Text-ParseWords-3.29-4.el7.noarch 7/32 Installing : perl-Pod-Usage-1.63-3.el7.noarch 8/32 Installing : 4:perl-libs-5.16.3-294.el7_6.x86_64 9/32 Installing : 4:perl-macros-5.16.3-294.el7_6.x86_64 10/32 Installing : 4:perl-Time-HiRes-1.9725-3.el7.x86_64 11/32 Installing : perl-Exporter-5.68-3.el7.noarch 12/32 Installing : perl-constant-1.27-2.el7.noarch 13/32 Installing : perl-Time-Local-1.2300-2.el7.noarch 14/32 Installing : perl-Socket-2.010-4.el7.x86_64 15/32 Installing : perl-Carp-1.26-244.el7.noarch 16/32 Installing : perl-Storable-2.45-3.el7.x86_64 17/32 Installing : perl-PathTools-3.40-5.el7.x86_64 18/32 Installing : perl-Scalar-List-Utils-1.27-248.el7.x86_64 19/32 Installing : 1:perl-Pod-Simple-3.28-4.el7.noarch 20/32 Installing : perl-File-Temp-0.23.01-3.el7.noarch 21/32 Installing : perl-File-Path-2.09-2.el7.noarch 22/32 Installing : perl-threads-shared-1.43-6.el7.x86_64 23/32 Installing : perl-threads-1.87-4.el7.x86_64 24/32 Installing : perl-Filter-1.49-3.el7.x86_64 25/32 Installing : perl-Getopt-Long-2.40-3.el7.noarch 26/32 Installing : 4:perl-5.16.3-294.el7_6.x86_64 27/32 Installing : 1:perl-Error-0.17020-2.el7.noarch 28/32 Installing : perl-TermReadKey-2.30-20.el7.x86_64 29/32 Installing : rsync-3.1.2-4.el7.x86_64 30/32 Installing : perl-Git-1.8.3.1-20.el7.noarch 31/32 Installing : git-1.8.3.1-20.el7.x86_64 32/32 Verifying : perl-HTTP-Tiny-0.033-3.el7.noarch 1/32 Verifying : perl-threads-shared-1.43-6.el7.x86_64 2/32 Verifying : 4:perl-Time-HiRes-1.9725-3.el7.x86_64 3/32 Verifying : 1:perl-Pod-Escapes-1.04-294.el7_6.noarch 4/32 Verifying : perl-Exporter-5.68-3.el7.noarch 5/32 Verifying : perl-constant-1.27-2.el7.noarch 6/32 Verifying : perl-PathTools-3.40-5.el7.x86_64 7/32 Verifying : 1:perl-parent-0.225-244.el7.noarch 8/32 Verifying : perl-TermReadKey-2.30-20.el7.x86_64 9/32 Verifying : 4:perl-libs-5.16.3-294.el7_6.x86_64 10/32 Verifying : perl-File-Temp-0.23.01-3.el7.noarch 11/32 Verifying : 1:perl-Pod-Simple-3.28-4.el7.noarch 12/32 Verifying : perl-Time-Local-1.2300-2.el7.noarch 13/32 Verifying : 4:perl-macros-5.16.3-294.el7_6.x86_64 14/32 Verifying : perl-Socket-2.010-4.el7.x86_64 15/32 Verifying : perl-Carp-1.26-244.el7.noarch 16/32 Verifying : 1:perl-Error-0.17020-2.el7.noarch 17/32 Verifying : git-1.8.3.1-20.el7.x86_64 18/32 Verifying : perl-Storable-2.45-3.el7.x86_64 19/32 Verifying : perl-Scalar-List-Utils-1.27-248.el7.x86_64 20/32 Verifying : perl-Git-1.8.3.1-20.el7.noarch 21/32 Verifying : rsync-3.1.2-4.el7.x86_64 22/32 Verifying : perl-Pod-Usage-1.63-3.el7.noarch 23/32 Verifying : perl-Encode-2.51-7.el7.x86_64 24/32 Verifying : perl-Pod-Perldoc-3.20-4.el7.noarch 25/32 Verifying : perl-podlators-2.5.1-3.el7.noarch 
26/32 Verifying : perl-File-Path-2.09-2.el7.noarch 27/32 Verifying : perl-threads-1.87-4.el7.x86_64 28/32 Verifying : perl-Filter-1.49-3.el7.x86_64 29/32 Verifying : perl-Getopt-Long-2.40-3.el7.noarch 30/32 Verifying : perl-Text-ParseWords-3.29-4.el7.noarch 31/32 Verifying : 4:perl-5.16.3-294.el7_6.x86_64 32/32 Installed: git.x86_64 0:1.8.3.1-20.el7 Dependency Installed: perl.x86_64 4:5.16.3-294.el7_6 perl-Carp.noarch 0:1.26-244.el7 perl-Encode.x86_64 0:2.51-7.el7 perl-Error.noarch 1:0.17020-2.el7 perl-Exporter.noarch 0:5.68-3.el7 perl-File-Path.noarch 0:2.09-2.el7 perl-File-Temp.noarch 0:0.23.01-3.el7 perl-Filter.x86_64 0:1.49-3.el7 perl-Getopt-Long.noarch 0:2.40-3.el7 perl-Git.noarch 0:1.8.3.1-20.el7 perl-HTTP-Tiny.noarch 0:0.033-3.el7 perl-PathTools.x86_64 0:3.40-5.el7 perl-Pod-Escapes.noarch 1:1.04-294.el7_6 perl-Pod-Perldoc.noarch 0:3.20-4.el7 perl-Pod-Simple.noarch 1:3.28-4.el7 perl-Pod-Usage.noarch 0:1.63-3.el7 perl-Scalar-List-Utils.x86_64 0:1.27-248.el7 perl-Socket.x86_64 0:2.010-4.el7 perl-Storable.x86_64 0:2.45-3.el7 perl-TermReadKey.x86_64 0:2.30-20.el7 perl-Text-ParseWords.noarch 0:3.29-4.el7 perl-Time-HiRes.x86_64 4:1.9725-3.el7 perl-Time-Local.noarch 0:1.2300-2.el7 perl-constant.noarch 0:1.27-2.el7 perl-libs.x86_64 4:5.16.3-294.el7_6 perl-macros.x86_64 4:5.16.3-294.el7_6 perl-parent.noarch 1:0.225-244.el7 perl-podlators.noarch 0:2.5.1-3.el7 perl-threads.x86_64 0:1.87-4.el7 perl-threads-shared.x86_64 0:1.43-6.el7 rsync.x86_64 0:3.1.2-4.el7 Complete! Cloning into 'gluster-ansible-infra'... Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.centos.org * extras: mirror.centos.org * updates: mirror.centos.org Resolving Dependencies --> Running transaction check ---> Package epel-release.noarch 0:7-11 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: epel-release noarch 7-11 extras 15 k Transaction Summary ================================================================================ Install 1 Package Total download size: 15 k Installed size: 24 k Downloading packages: Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : epel-release-7-11.noarch 1/1 Verifying : epel-release-7-11.noarch 1/1 Installed: epel-release.noarch 0:7-11 Complete! Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile One of the configured repositories failed (Unknown), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo= ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable or subscription-manager repos --disable= 5. Configure the failing repository to be skipped, if it is unavailable. 
Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=.skip_if_unavailable=true Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again ./gluster-ansible-infra/tests/run-centos-ci.sh: line 8: virtualenv: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 9: env/bin/activate: No such file or directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 12: pip: command not found Loaded plugins: fastestmirror adding repo from: https://download.docker.com/linux/centos/docker-ce.repo grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo repo saved to /etc/yum.repos.d/docker-ce.repo Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile One of the configured repositories failed (Unknown), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo= ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable or subscription-manager repos --disable= 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=.skip_if_unavailable=true Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again Failed to start docker.service: Unit not found. Failed to execute operation: No such file or directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 26: molecule: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 27: molecule: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 30: molecule: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 31: molecule: command not found Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Mar 31 00:56:12 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:56:12 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9728 - Still Failing! 
(release-6 on CentOS-7/x86_64) In-Reply-To: <1944869662.6938.1553993731137.JavaMail.jenkins@jenkins.ci.centos.org> References: <1944869662.6938.1553993731137.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1198966959.6941.1553993772291.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9728 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/9728/ to view the results. From ci at centos.org Sun Mar 31 00:56:14 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 00:56:14 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9729 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <1198966959.6941.1553993772291.JavaMail.jenkins@jenkins.ci.centos.org> References: <1198966959.6941.1553993772291.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1654595014.6943.1553993774949.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 9729 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/9729/ to view the results. From ci at centos.org Sun Mar 31 02:10:23 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 31 Mar 2019 02:10:23 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #123 Message-ID: <24825837.6948.1553998223595.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 466.41 KB...] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Sunday 31 March 2019 00:47:18 +0000 (0:00:00.147) 0:12:13.267 ********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Sunday 31 March 2019 00:50:26 +0000 (0:03:07.337) 0:15:20.605 ********** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Sunday 31 March 2019 00:50:26 +0000 (0:00:00.104) 0:15:20.710 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] ***************** Sunday 31 March 2019 00:50:26 +0000 (0:00:00.148) 0:15:20.858 ********** ok: [kube1] => (item=/dev/vdc) FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left). ok: [kube1] => (item=/dev/vdd) ok: [kube1] => (item=/dev/vde) TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Sunday 31 March 2019 00:50:55 +0000 (0:00:28.848) 0:15:49.706 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] ***************** Sunday 31 March 2019 00:50:55 +0000 (0:00:00.139) 0:15:49.846 ********** FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.31.251:24007/v1/devices/34bcde62-7b2f-4fef-9610-375402d28d16"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.31.251:24007/v1/devices/34bcde62-7b2f-4fef-9610-375402d28d16"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). 
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.31.251:24007/v1/devices/34bcde62-7b2f-4fef-9610-375402d28d16"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=427 changed=119 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Sunday 31 March 2019 03:10:23 +0100 (1:19:27.842) 1:35:17.688 ********** =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 4767.84s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 187.34s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 54.68s download : container_download | download images for kubeadm config images -- 53.41s GCS | GD2 Cluster | Add devices | Add devices for kube1 ---------------- 28.85s kubernetes/master : kubeadm | Initialize first master ------------------ 26.34s kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.71s Install packages ------------------------------------------------------- 23.81s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.94s Extend root VG --------------------------------------------------------- 17.42s Wait for host to be available ------------------------------------------ 16.33s etcd : Gen_certs | Write etcd master certs ----------------------------- 13.00s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.72s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 12.02s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 11.01s etcd : reload etcd ----------------------------------------------------- 10.94s container-engine/docker : Docker | pause while Docker restarts --------- 10.25s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 10.01s gather facts from all instances ----------------------------------------- 8.44s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 8.29s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0
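For reference, the gluster_anteater_gcs failure above is the device-add step timing out: every POST to http://10.233.31.251:24007/v1/devices/34bcde62-7b2f-4fef-9610-375402d28d16 ends with "Connection failure: timed out" and never receives an HTTP status, and the 4767s recorded for "Add devices for kube3" in the recap is essentially three devices times fifty timed-out retries. A small diagnostic sketch one might run from kube1 to separate "glusterd2 stopped answering" from "device add genuinely slow"; the 'gcs' namespace and the pod label are assumptions about the GCS deployment, not values taken from this log:

# Sketch only. The service address comes from the failed task above; the namespace
# and label selector below are assumptions / placeholders.
curl -sS --max-time 10 -o /dev/null -w 'glusterd2 peers endpoint -> HTTP %{http_code}\n' \
    http://10.233.31.251:24007/v1/peers        # the failed task never got any HTTP status back
kubectl -n gcs get pods -o wide                # are the glusterd2 and etcd pods still Running, and on which node?
kubectl -n gcs logs -l app=glusterd2 --tail=100   # label is a guess; adjust to the actual glusterd2 pod labels

If the curl also times out, the problem is the glusterd2 service (or the network path to it) rather than the devices /dev/vdc, /dev/vdd and /dev/vde themselves.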