From ci at centos.org Fri Feb 1 02:46:46 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 1 Feb 2019 02:46:46 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #66
Message-ID: <818182693.7377.1548989206411.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 465.46 KB...]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Friday 01 February 2019  00:58:15 +0000 (0:00:00.487)       0:20:26.039 *******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Friday 01 February 2019  00:59:06 +0000 (0:00:50.924)       0:21:16.964 *******
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Friday 01 February 2019  00:59:06 +0000 (0:00:00.270)       0:21:17.234 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] *****************
Friday 01 February 2019  00:59:07 +0000 (0:00:00.529)       0:21:17.764 *******
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...3 identical retry lines (49-47 retries left) elided...]
[...8 identical retry lines (46-39 retries left) elided...]
ok: [kube1] => (item=/dev/vdc)
ok: [kube1] => (item=/dev/vdd)
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
ok: [kube1] => (item=/dev/vde)

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Friday 01 February 2019  01:02:46 +0000 (0:03:39.569)       0:24:57.334 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] *****************
Friday 01 February 2019  01:02:47 +0000 (0:00:00.424)       0:24:57.758 *******
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[...6 identical retry lines (49-44 retries left) elided...]
[...42 identical retry lines (43-2 retries left) elided...]
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (1 retries left).
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.59.23:24007/v1/devices/f7a81c97-0e1b-473c-875a-c6d126f91068"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[...16 identical retry lines (49-34 retries left) elided...]
[...21 identical retry lines (33-13 retries left) elided...]
[...12 identical retry lines (12-1 retries left) elided...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.59.23:24007/v1/devices/f7a81c97-0e1b-473c-875a-c6d126f91068"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
[...5 identical retry lines (49-45 retries left) elided...]
[...42 identical retry lines (44-3 retries left) elided...]
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (1 retries left).
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.59.23:24007/v1/devices/f7a81c97-0e1b-473c-875a-c6d126f91068"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=423  changed=115  unreachable=0  failed=1
kube2                      : ok=319  changed=91   unreachable=0  failed=0
kube3                      : ok=281  changed=77   unreachable=0  failed=0

Friday 01 February 2019  02:46:45 +0000 (1:43:58.714)       2:08:56.473 *******
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube2 -------------- 6238.71s
GCS | GD2 Cluster | Add devices | Add devices for kube3 --------------- 219.57s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 107.29s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 50.92s
kubernetes/master : kubeadm | Initialize first master ------------------ 38.76s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.23s
download : container_download | download images for kubeadm config images -- 34.66s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.80s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 32.18s
Install packages ------------------------------------------------------- 31.08s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.85s
Wait for host to be available ------------------------------------------ 20.76s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.98s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.76s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.45s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 12.94s
gather facts from all instances ---------------------------------------- 12.92s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.80s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.14s
etcd : reload etcd ----------------------------------------------------- 11.72s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Sat Feb 2 01:45:44 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 2 Feb 2019 01:45:44 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #67
In-Reply-To: <818182693.7377.1548989206411.JavaMail.jenkins@jenkins.ci.centos.org>
References: <818182693.7377.1548989206411.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1031385548.7505.1549071944213.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 465.82 KB...]
TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Saturday 02 February 2019  00:46:33 +0000 (0:00:00.638)       0:11:44.405 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Saturday 02 February 2019  00:46:34 +0000 (0:00:00.145)       0:11:44.551 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Saturday 02 February 2019  00:46:34 +0000 (0:00:00.758)       0:11:45.309 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Saturday 02 February 2019  00:46:35 +0000 (0:00:00.143)       0:11:45.453 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Saturday 02 February 2019  00:46:35 +0000 (0:00:00.694)       0:11:46.148 *****
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Saturday 02 February 2019  00:46:36 +0000 (0:00:00.504)       0:11:46.652 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Saturday 02 February 2019  00:46:36 +0000 (0:00:00.142)       0:11:46.795 *****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Saturday 02 February 2019  00:47:17 +0000 (0:00:41.553)       0:12:28.349 *****
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Saturday 02 February 2019  00:47:18 +0000 (0:00:00.101)       0:12:28.451 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] *****************
Saturday 02 February 2019  00:47:18 +0000 (0:00:00.223)       0:12:28.674 *****
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...12 identical retry lines (49-38 retries left) elided...]
[...21 identical retry lines (37-17 retries left) elided...]
[...16 identical retry lines (16-1 retries left) elided...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.45.213:24007/v1/devices/a5115da2-ab6e-438d-a54a-4fd56671c7d4"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left).
[...42 identical retry lines (48-7 retries left) elided...]
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.45.213:24007/v1/devices/a5115da2-ab6e-438d-a54a-4fd56671c7d4"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). 
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.45.213:24007/v1/devices/a5115da2-ab6e-438d-a54a-4fd56671c7d4"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=420  changed=115  unreachable=0  failed=1
kube2                      : ok=319  changed=91   unreachable=0  failed=0
kube3                      : ok=282  changed=77   unreachable=0  failed=0

Saturday 02 February 2019  01:45:43 +0000 (0:58:25.631)       1:10:54.305 *****
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 3505.63s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 66.51s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 41.55s
download : container_download | download images for kubeadm config images -- 37.62s
kubernetes/master : kubeadm | Initialize first master ------------------ 26.29s
Install packages ------------------------------------------------------- 24.70s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 24.45s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.92s
Wait for host to be available ------------------------------------------ 16.45s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.46s
etcd : Gen_certs | Write etcd master certs ----------------------------- 14.30s
Extend root VG --------------------------------------------------------- 12.15s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.01s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 10.89s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.85s
etcd : reload etcd ----------------------------------------------------- 10.76s
container-engine/docker : Docker | pause while Docker restarts --------- 10.18s
download : file_download | Download item -------------------------------- 8.17s
gather facts from all instances ----------------------------------------- 8.04s
etcd : wait for etcd up ------------------------------------------------- 7.62s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Sun Feb 3 00:55:34 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 3 Feb 2019 00:55:34 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #68
In-Reply-To: <1031385548.7505.1549071944213.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1031385548.7505.1549071944213.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <480277898.7629.1549155334507.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 458.40 KB...]
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Sunday 03 February 2019  00:44:45 +0000 (0:00:11.864)       0:10:06.301 *******
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Sunday 03 February 2019  00:44:46 +0000 (0:00:00.089)       0:10:06.391 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Sunday 03 February 2019  00:44:46 +0000 (0:00:00.151)       0:10:06.543 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Sunday 03 February 2019  00:44:46 +0000 (0:00:00.720)       0:10:07.263 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Sunday 03 February 2019  00:44:47 +0000 (0:00:00.152)       0:10:07.415 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Sunday 03 February 2019  00:44:47 +0000 (0:00:00.741)       0:10:08.157 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Sunday 03 February 2019  00:44:47 +0000 (0:00:00.149)       0:10:08.307 *******
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Sunday 03 February 2019  00:44:48 +0000 (0:00:00.735)       0:10:09.043 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Sunday 03 February 2019  00:44:49 +0000 (0:00:00.725)       0:10:09.768 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Sunday 03 February 2019  00:44:50 +0000 (0:00:00.653)       0:10:10.422 *******
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Sunday 03 February 2019  00:45:00 +0000 (0:00:10.831)       0:10:21.253 *******
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Sunday 03 February 2019  00:45:01 +0000 (0:00:00.676)       0:10:21.930 *******
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Sunday 03 February 2019  00:45:02 +0000 (0:00:00.470)       0:10:22.401 *******
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Sunday 03 February 2019  00:45:02 +0000 (0:00:00.474)       0:10:22.875 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Sunday 03 February 2019  00:45:03 +0000 (0:00:00.669)       0:10:23.545 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Sunday 03 February 2019  00:45:04 +0000 (0:00:00.870)       0:10:24.415 *******
FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left).
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Sunday 03 February 2019  00:45:10 +0000 (0:00:05.978)       0:10:30.394 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Sunday 03 February 2019  00:45:10 +0000 (0:00:00.145)       0:10:30.540 *******
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
[... the same retry message repeats as the count falls to 43 retries left ...]
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Sunday 03 February 2019  00:46:36 +0000 (0:01:26.294)       0:11:56.834 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Sunday 03 February 2019  00:46:37 +0000 (0:00:00.744)       0:11:57.579 *******
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Sunday 03 February 2019  00:46:37 +0000 (0:00:00.112)       0:11:57.691 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Sunday 03 February 2019  00:46:37 +0000 (0:00:00.140)       0:11:57.831 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Sunday 03 February 2019  00:46:38 +0000 (0:00:00.666)       0:11:58.498 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Sunday 03 February 2019  00:46:38 +0000 (0:00:00.149)       0:11:58.647 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Sunday 03 February 2019  00:46:39 +0000 (0:00:00.783)       0:11:59.431 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Sunday 03 February 2019  00:46:39 +0000 (0:00:00.163)       0:11:59.594 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Sunday 03 February 2019  00:46:39 +0000 (0:00:00.717)       0:12:00.311 *******
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Sunday 03 February 2019  00:46:40 +0000 (0:00:00.507)       0:12:00.819 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Sunday 03 February 2019  00:46:40 +0000 (0:00:00.169)       0:12:00.989 *******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
[... the same retry message repeats as the count falls to 1 retry left ...]
fatal: [kube1]: FAILED! => {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.15.186:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=416  changed=115  unreachable=0  failed=1
kube2                      : ok=319  changed=91   unreachable=0  failed=0
kube3                      : ok=281  changed=77   unreachable=0  failed=0

Sunday 03 February 2019  00:55:34 +0000 (0:08:53.670)       0:20:54.660 *******
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 533.67s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 86.29s
download : container_download | download images for kubeadm config images -- 37.78s
kubernetes/master : kubeadm | Initialize first master ------------------ 26.95s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 24.83s
Install packages ------------------------------------------------------- 23.56s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.79s
Wait for host to be available ------------------------------------------ 16.49s
etcd : Gen_certs | Write etcd master certs ----------------------------- 13.12s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.34s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.95s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.86s
Extend root VG --------------------------------------------------------- 10.90s
etcd : reload etcd ----------------------------------------------------- 10.83s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.83s
container-engine/docker : Docker | pause while Docker restarts --------- 10.15s
download : file_download | Download item -------------------------------- 8.44s
etcd : wait for etcd up ------------------------------------------------- 7.67s
gather facts from all instances ----------------------------------------- 7.45s
kubernetes/master : kubeadm | write out kubeadm certs ------------------- 7.45s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Mon Feb 4 01:07:56 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 4 Feb 2019 01:07:56 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #69
In-Reply-To: <480277898.7629.1549155334507.JavaMail.jenkins@jenkins.ci.centos.org>
References: <480277898.7629.1549155334507.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1068256014.7792.1549242476776.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 458.47 KB...]
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Monday 04 February 2019  00:55:46 +0000 (0:00:32.216)       0:17:53.837 *******
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Monday 04 February 2019  00:55:47 +0000 (0:00:00.248)       0:17:54.086 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Monday 04 February 2019  00:55:47 +0000 (0:00:00.512)       0:17:54.598 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Monday 04 February 2019  00:55:49 +0000 (0:00:02.190)       0:17:56.789 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Monday 04 February 2019  00:55:50 +0000 (0:00:00.542)       0:17:57.331 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Monday 04 February 2019  00:55:52 +0000 (0:00:02.167)       0:17:59.499 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Monday 04 February 2019  00:55:53 +0000 (0:00:00.567)       0:18:00.066 *******
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Monday 04 February 2019  00:55:55 +0000 (0:00:02.048)       0:18:02.114 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Monday 04 February 2019  00:55:56 +0000 (0:00:01.795)       0:18:03.910 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Monday 04 February 2019  00:55:58 +0000 (0:00:01.740)       0:18:05.651 *******
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Monday 04 February 2019  00:56:11 +0000 (0:00:12.397)       0:18:18.048 *******
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Monday 04 February 2019  00:56:12 +0000 (0:00:01.714)       0:18:19.763 *******
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Monday 04 February 2019  00:56:14 +0000 (0:00:01.425)       0:18:21.188 *******
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Monday 04 February 2019  00:56:15 +0000 (0:00:01.448)       0:18:22.637 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Monday 04 February 2019  00:56:17 +0000 (0:00:01.779)       0:18:24.417 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Monday 04 February 2019  00:56:19 +0000 (0:00:01.825)       0:18:26.242 *******
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Monday 04 February 2019  00:56:20 +0000 (0:00:01.323)       0:18:27.566 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Monday 04 February 2019  00:56:21 +0000 (0:00:00.459)       0:18:28.025 *******
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
[... the same retry message repeats as the count falls to 42 retries left ...]
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Monday 04 February 2019  00:58:08 +0000 (0:01:47.033)       0:20:15.059 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Monday 04 February 2019  00:58:09 +0000 (0:00:01.684)       0:20:16.744 *******
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Monday 04 February 2019  00:58:10 +0000 (0:00:00.190)       0:20:16.935 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Monday 04 February 2019  00:58:10 +0000 (0:00:00.331)       0:20:17.266 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Monday 04 February 2019  00:58:11 +0000 (0:00:01.511)       0:20:18.778 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Monday 04 February 2019  00:58:12 +0000 (0:00:00.349)       0:20:19.127 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Monday 04 February 2019  00:58:13 +0000 (0:00:01.527)       0:20:20.655 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Monday 04 February 2019  00:58:14 +0000 (0:00:00.364)       0:20:21.020 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Monday 04 February 2019  00:58:15 +0000 (0:00:01.648)       0:20:22.668 *******
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Monday 04 February 2019  00:58:17 +0000 (0:00:01.718)       0:20:24.387 *******
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Monday 04 February 2019  00:58:17 +0000 (0:00:00.309)       0:20:24.697 *******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
[... the same retry message repeats as the count falls to 13 retries left ...]
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.51.253:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=416 changed=115 unreachable=0 failed=1 kube2 : ok=319 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Monday 04 February 2019 01:07:56 +0000 (0:09:38.502) 0:30:03.200 ******* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.50s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 107.03s kubernetes/master : kubeadm | Initialize first master ------------------ 40.60s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.47s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.58s download : container_download | download images for kubeadm config images -- 32.40s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 32.22s Install packages ------------------------------------------------------- 30.49s Wait for host to be available ------------------------------------------ 21.05s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.88s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.63s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.92s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 15.44s gather facts from all instances ---------------------------------------- 13.44s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.30s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.89s 
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.40s etcd : reload etcd ----------------------------------------------------- 11.76s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.25s container-engine/docker : Docker | pause while Docker restarts --------- 10.40s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Feb 5 02:40:37 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 5 Feb 2019 02:40:37 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #70 In-Reply-To: <1068256014.7792.1549242476776.JavaMail.jenkins@jenkins.ci.centos.org> References: <1068256014.7792.1549242476776.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1247034518.7999.1549334437629.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 466.23 KB...] 
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Tuesday 05 February 2019  00:59:08 +0000 (0:00:01.625)       0:21:01.215 ******
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Tuesday 05 February 2019  00:59:10 +0000 (0:00:01.542)       0:21:02.758 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Tuesday 05 February 2019  00:59:10 +0000 (0:00:00.324)       0:21:03.083 ******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Tuesday 05 February 2019  01:02:40 +0000 (0:03:30.510)       0:24:33.593 ******
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Tuesday 05 February 2019  01:02:41 +0000 (0:00:00.299)       0:24:33.892 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] *****************
Tuesday 05 February 2019  01:02:41 +0000 (0:00:00.439)       0:24:34.332 ******
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left).
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.15.255:24007/v1/devices/782d44ae-824f-42c9-a3a0-55ff79140350"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left).
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.15.255:24007/v1/devices/782d44ae-824f-42c9-a3a0-55ff79140350"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left).
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.15.255:24007/v1/devices/782d44ae-824f-42c9-a3a0-55ff79140350"}
        to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=426  changed=119  unreachable=0    failed=1
kube2                      : ok=320  changed=93   unreachable=0    failed=0
kube3                      : ok=283  changed=78   unreachable=0    failed=0

Tuesday 05 February 2019  02:40:37 +0000 (1:37:55.579)       2:02:29.912 ******
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube1 -------------- 5875.58s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 210.51s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 118.54s
download : container_download | download images for kubeadm config images -- 41.37s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.57s
kubernetes/master : kubeadm | Initialize first master ------------------ 38.91s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.12s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 31.74s
Install packages ------------------------------------------------------- 31.43s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.84s
Wait for host to be available ------------------------------------------ 20.50s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.86s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.84s
gather facts from all instances ---------------------------------------- 13.36s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 12.91s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.81s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.17s
etcd : reload etcd ----------------------------------------------------- 11.88s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.32s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.25s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Wed Feb  6 01:07:41 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 6 Feb 2019 01:07:41 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #71
In-Reply-To: <1247034518.7999.1549334437629.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1247034518.7999.1549334437629.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1231590501.8094.1549415261764.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[ndevos] Add ansiwen to admin-list in heketi-functional job

------------------------------------------
[...truncated 459.52 KB...]
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-fs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-virtblock-csi.yml)
changed: [kube1] => (item=gcs-storage-virtblock.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Wednesday 06 February 2019 00:55:56 +0000 (0:00:35.595) 0:18:06.269 ****
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Wednesday 06 February 2019 00:55:57 +0000 (0:00:00.295) 0:18:06.565 ****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Wednesday 06 February 2019 00:55:57 +0000 (0:00:00.344) 0:18:06.909 ****
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Wednesday 06 February 2019 00:55:59 +0000 (0:00:02.120) 0:18:09.030 ****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Wednesday 06 February 2019 00:55:59 +0000 (0:00:00.430) 0:18:09.460 ****
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Wednesday 06 February 2019 00:56:02 +0000 (0:00:02.095) 0:18:11.555 ****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Wednesday 06 February 2019 00:56:02 +0000 (0:00:00.384) 0:18:11.940 ****
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Wednesday 06 February 2019 00:56:04 +0000 (0:00:02.166) 0:18:14.106 ****
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Wednesday 06 February 2019 00:56:06 +0000 (0:00:01.480) 0:18:15.587 ****
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Wednesday 06 February 2019 00:56:07 +0000 (0:00:01.643) 0:18:17.231 ****
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Wednesday 06 February 2019 00:56:19 +0000 (0:00:12.032) 0:18:29.264 ****
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Wednesday 06 February 2019 00:56:21 +0000 (0:00:01.635) 0:18:30.900 ****
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Wednesday 06 February 2019 00:56:22 +0000 (0:00:01.229) 0:18:32.130 ****
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Wednesday 06 February 2019 00:56:23 +0000 (0:00:01.337) 0:18:33.467 ****
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Wednesday 06 February 2019 00:56:25 +0000 (0:00:01.655) 0:18:35.123 ****
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Wednesday 06 February 2019 00:56:27 +0000 (0:00:01.896) 0:18:37.019 ****
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Wednesday 06 February 2019 00:56:28 +0000 (0:00:01.256) 0:18:38.276 ****
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Wednesday 06 February 2019 00:56:29 +0000 (0:00:00.361) 0:18:38.638 ****
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Wednesday 06 February 2019 00:57:52 +0000 (0:01:23.709) 0:20:02.347 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Wednesday 06 February 2019 00:57:54 +0000 (0:00:01.649) 0:20:03.997 ****
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Wednesday 06 February 2019 00:57:54 +0000 (0:00:00.215) 0:20:04.212 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Wednesday 06 February 2019 00:57:55 +0000 (0:00:00.348) 0:20:04.561 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Wednesday 06 February 2019 00:57:56 +0000 (0:00:01.716) 0:20:06.277 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Wednesday 06 February 2019 00:57:57 +0000 (0:00:00.328) 0:20:06.606 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Wednesday 06 February 2019 00:57:58 +0000 (0:00:01.493) 0:20:08.099 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Wednesday 06 February 2019 00:57:58 +0000 (0:00:00.398) 0:20:08.498 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Wednesday 06 February 2019 00:58:00 +0000 (0:00:01.503) 0:20:10.001 ****
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Wednesday 06 February 2019 00:58:01 +0000 (0:00:01.508) 0:20:11.510 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become
ready] **********
Wednesday 06 February 2019 00:58:02 +0000 (0:00:00.340) 0:20:11.851 ****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
[...49 more FAILED - RETRYING lines, counting down to 1 retry left...]
fatal: [kube1]: FAILED!
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.36.135:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=420  changed=119  unreachable=0    failed=1
kube2                      : ok=321  changed=93   unreachable=0    failed=0
kube3                      : ok=283  changed=78   unreachable=0    failed=0

Wednesday 06 February 2019 01:07:40 +0000 (0:09:38.549) 0:29:50.400 ****
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.55s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.71s
kubernetes/master : kubeadm | Initialize first master ------------------ 38.84s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.72s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.60s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.22s
download : container_download | download images for kubeadm config images -- 32.67s
Install packages ------------------------------------------------------- 30.49s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.27s
Wait for host to be available ------------------------------------------ 20.70s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.83s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.05s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.70s
gather facts from all instances ---------------------------------------- 13.34s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.76s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.03s
etcd : reload etcd
----------------------------------------------------- 11.67s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.11s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.99s
container-engine/docker : Docker | pause while Docker restarts --------- 10.45s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Thu Feb  7 02:03:32 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 7 Feb 2019 02:03:32 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #72
In-Reply-To: <1231590501.8094.1549415261764.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1231590501.8094.1549415261764.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <908938873.8176.1549505012456.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 465.50 KB...]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 07 February 2019 00:59:24 +0000 (0:00:01.534) 0:20:09.369 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Thursday 07 February 2019 00:59:25 +0000 (0:00:00.335) 0:20:09.705 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 07 February 2019 00:59:26 +0000 (0:00:01.644) 0:20:11.350 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Thursday 07 February 2019 00:59:27 +0000 (0:00:00.314) 0:20:11.664 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Thursday 07 February 2019 00:59:28 +0000 (0:00:01.645) 0:20:13.310 *****
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Thursday 07 February 2019 00:59:30 +0000 (0:00:01.286) 0:20:14.596 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Thursday 07 February 2019 00:59:30 +0000 (0:00:00.378) 0:20:14.975 *****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
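[Editorial note: the "Add devices" attempts that follow register each block device with glusterd2 over its REST API and treat anything other than HTTP 201 as a failure, which is why the error output reports `Status code was -1 and not [201]`. A minimal hand-run version of that call, where the host, peer ID and JSON payload shape are assumptions inferred from the logged URLs rather than taken from the playbook:]

```shell
# Register a device with a glusterd2 peer and require HTTP 201 (Created),
# the status the Ansible task expects. Endpoint, peer ID and payload are
# illustrative assumptions based on the URLs in the failure messages.
add_device() {
    endpoint=$1; peer=$2; device=$3
    code=$(curl -s -o /dev/null -w '%{http_code}' \
        -X POST -H 'Content-Type: application/json' \
        -d "{\"device\": \"$device\"}" \
        "$endpoint/v1/devices/$peer")
    [ "$code" = "201" ]    # succeed only on 201 Created
}

# e.g. (hypothetical values from the logs):
#   add_device http://10.233.40.19:24007 3f4bda1c-0ff0-4af2-951f-f68f7c50b0c8 /dev/vdc
```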
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Thursday 07 February 2019 01:00:09 +0000 (0:00:38.710) 0:20:53.685 *****
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Thursday 07 February 2019 01:00:09 +0000 (0:00:00.287) 0:20:53.973 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] *****************
Thursday 07 February 2019 01:00:09 +0000 (0:00:00.339) 0:20:54.312 *****
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...49 more FAILED - RETRYING lines, counting down to 1 retry left...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.40.19:24007/v1/devices/3f4bda1c-0ff0-4af2-951f-f68f7c50b0c8"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...49 more FAILED - RETRYING lines, counting down to 1 retry left...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.40.19:24007/v1/devices/3f4bda1c-0ff0-4af2-951f-f68f7c50b0c8"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...49 more FAILED - RETRYING lines, counting down to 1 retry left...]
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.40.19:24007/v1/devices/3f4bda1c-0ff0-4af2-951f-f68f7c50b0c8"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=426  changed=119  unreachable=0    failed=1
kube2                      : ok=320  changed=93   unreachable=0    failed=0
kube3                      : ok=283  changed=78   unreachable=0    failed=0

Thursday 07 February 2019 02:03:32 +0000 (1:03:22.314) 1:24:16.627 *****
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 3802.31s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.56s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.23s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 38.71s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.15s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.38s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.35s
download : container_download | download images for kubeadm config images -- 31.83s
Install packages ------------------------------------------------------- 30.41s
Wait for host to be available ------------------------------------------ 20.83s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.67s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.23s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.57s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.50s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.25s
gather facts from all
instances ---------------------------------------- 12.37s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.28s etcd : reload etcd ----------------------------------------------------- 12.09s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.62s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.41s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Feb 8 01:04:39 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 8 Feb 2019 01:04:39 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #73 In-Reply-To: <908938873.8176.1549505012456.JavaMail.jenkins@jenkins.ci.centos.org> References: <908938873.8176.1549505012456.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1899261084.8334.1549587879267.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Sat Feb 9 01:03:11 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 9 Feb 2019 01:03:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #74 Message-ID: 
<226468009.8630.1549674191239.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [ndevos] Add GlusterFS release-5 branch to tests [ndevos] Add GlusterFS release-6 branch to tests ------------------------------------------ [...truncated 398.58 KB...] TASK [network_plugin/contiv : Contiv | Copy the generated certificate on nodes] *** Saturday 09 February 2019 00:52:30 +0000 (0:00:00.187) 0:14:29.291 ***** TASK [network_plugin/contiv : Contiv | Set cni directory permissions] ********** Saturday 09 February 2019 00:52:31 +0000 (0:00:00.367) 0:14:29.658 ***** TASK [network_plugin/contiv : Contiv | Copy cni plugins] *********************** Saturday 09 February 2019 00:52:31 +0000 (0:00:00.306) 0:14:29.965 ***** TASK [network_plugin/contiv : Contiv | Copy netctl binary from docker container] *** Saturday 09 February 2019 00:52:31 +0000 (0:00:00.333) 0:14:30.299 ***** TASK [network_plugin/kube-router : kube-router | Add annotations on kube-master] *** Saturday 09 February 2019 00:52:32 +0000 (0:00:00.362) 0:14:30.661 ***** TASK [network_plugin/kube-router : kube-router | Add annotations on kube-node] *** Saturday 09 February 2019 00:52:32 +0000 (0:00:00.284) 0:14:30.946 ***** TASK [network_plugin/kube-router : kube-router | Add common annotations on all servers] *** Saturday 09 February 2019 00:52:32 +0000 (0:00:00.339) 0:14:31.285 ***** TASK [network_plugin/kube-router : kube-roter | Set cni directory permissions] *** Saturday 09 February 2019 00:52:32 +0000 (0:00:00.297) 0:14:31.583 ***** TASK [network_plugin/kube-router : kube-router | Copy cni plugins] ************* Saturday 09 February 2019 00:52:33 +0000 (0:00:00.302) 0:14:31.885 ***** TASK [network_plugin/kube-router : kube-router | Create manifest] ************** Saturday 09 February 2019 00:52:33 +0000 (0:00:00.281) 0:14:32.167 ***** TASK [network_plugin/cloud : Cloud | Set cni directory permissions] ************ Saturday 09 February 2019 00:52:33 +0000 (0:00:00.274) 0:14:32.441 ***** TASK [network_plugin/cloud 
: Canal | Copy cni plugins] ************************* Saturday 09 February 2019 00:52:34 +0000 (0:00:00.253) 0:14:32.695 ***** TASK [network_plugin/multus : Multus | Copy manifest files] ******************** Saturday 09 February 2019 00:52:34 +0000 (0:00:00.254) 0:14:32.950 ***** TASK [network_plugin/multus : Multus | Copy manifest templates] **************** Saturday 09 February 2019 00:52:34 +0000 (0:00:00.408) 0:14:33.359 ***** RUNNING HANDLER [kubernetes/kubeadm : restart kubelet] ************************* Saturday 09 February 2019 00:52:34 +0000 (0:00:00.232) 0:14:33.592 ***** changed: [kube3] PLAY [kube-master[0]] ********************************************************** TASK [download : include_tasks] ************************************************ Saturday 09 February 2019 00:52:36 +0000 (0:00:01.411) 0:14:35.003 ***** TASK [download : Download items] *********************************************** Saturday 09 February 2019 00:52:36 +0000 (0:00:00.172) 0:14:35.175 ***** TASK [download : Sync container] *********************************************** Saturday 09 February 2019 00:52:38 +0000 (0:00:01.761) 0:14:36.936 ***** TASK [download : include_tasks] ************************************************ Saturday 09 February 2019 00:52:40 +0000 (0:00:01.803) 0:14:38.740 ***** TASK [kubespray-defaults : Configure defaults] ********************************* Saturday 09 February 2019 00:52:40 +0000 (0:00:00.164) 0:14:38.905 ***** ok: [kube1] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token name] *** Saturday 09 February 2019 00:52:40 +0000 (0:00:00.489) 0:14:39.394 ***** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token data] *** Saturday 09 February 2019 00:52:42 +0000 (0:00:01.382) 0:14:40.776 ***** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Test if default certificate is expired] *** Saturday 09 February 2019 
00:52:43 +0000 (0:00:01.440) 0:14:42.217 ***** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Determine if certificate is expired] *** Saturday 09 February 2019 00:52:45 +0000 (0:00:01.876) 0:14:44.094 ***** ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get all serviceaccount tokens to expire] *** Saturday 09 February 2019 00:52:45 +0000 (0:00:00.436) 0:14:44.531 ***** TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete expired tokens] *** Saturday 09 February 2019 00:52:45 +0000 (0:00:00.120) 0:14:44.652 ***** TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete pods in system namespace] *** Saturday 09 February 2019 00:52:46 +0000 (0:00:00.158) 0:14:44.810 ***** TASK [win_nodes/kubernetes_patch : Ensure that user manifests directory exists] *** Saturday 09 February 2019 00:52:46 +0000 (0:00:00.129) 0:14:44.940 ***** changed: [kube1] TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset hostnameOverride patch] *** Saturday 09 February 2019 00:52:47 +0000 (0:00:01.178) 0:14:46.119 ***** changed: [kube1] TASK [win_nodes/kubernetes_patch : Check current command for kube-proxy daemonset] *** Saturday 09 February 2019 00:52:49 +0000 (0:00:02.277) 0:14:48.396 ***** changed: [kube1] TASK [win_nodes/kubernetes_patch : Apply hostnameOverride patch for kube-proxy daemonset] *** Saturday 09 February 2019 00:52:51 +0000 (0:00:01.430) 0:14:49.827 ***** changed: [kube1] TASK [win_nodes/kubernetes_patch : debug] ************************************** Saturday 09 February 2019 00:52:52 +0000 (0:00:01.475) 0:14:51.302 ***** ok: [kube1] => { "msg": [ "daemonset.extensions/kube-proxy patched" ] } TASK [win_nodes/kubernetes_patch : debug] ************************************** Saturday 09 February 2019 00:52:53 +0000 (0:00:00.404) 0:14:51.707 ***** ok: [kube1] => { "msg": [] } TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset nodeselector patch] *** Saturday 09 February 2019 00:52:53 +0000 (0:00:00.443) 
0:14:52.150 ***** changed: [kube1] TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] *** Saturday 09 February 2019 00:52:55 +0000 (0:00:02.323) 0:14:54.474 ***** changed: [kube1] TASK [win_nodes/kubernetes_patch : Apply nodeselector patch for kube-proxy daemonset] *** Saturday 09 February 2019 00:52:57 +0000 (0:00:01.404) 0:14:55.878 ***** changed: [kube1] TASK [win_nodes/kubernetes_patch : debug] ************************************** Saturday 09 February 2019 00:52:58 +0000 (0:00:01.452) 0:14:57.331 ***** ok: [kube1] => { "msg": [ "daemonset.extensions/kube-proxy patched" ] } TASK [win_nodes/kubernetes_patch : debug] ************************************** Saturday 09 February 2019 00:52:59 +0000 (0:00:00.445) 0:14:57.776 ***** ok: [kube1] => { "msg": [] } PLAY [kube-master] ************************************************************* TASK [download : include_tasks] ************************************************ Saturday 09 February 2019 00:52:59 +0000 (0:00:00.626) 0:14:58.403 ***** TASK [download : Download items] *********************************************** Saturday 09 February 2019 00:52:59 +0000 (0:00:00.205) 0:14:58.608 ***** TASK [download : Sync container] *********************************************** Saturday 09 February 2019 00:53:01 +0000 (0:00:01.681) 0:15:00.290 ***** TASK [download : include_tasks] ************************************************ Saturday 09 February 2019 00:53:03 +0000 (0:00:01.824) 0:15:02.115 ***** TASK [kubespray-defaults : Configure defaults] ********************************* Saturday 09 February 2019 00:53:03 +0000 (0:00:00.221) 0:15:02.336 ***** ok: [kube1] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } ok: [kube2] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [kubernetes-apps/network_plugin/cilium : Cilium | Start Resources] ******** Saturday 09 February 2019 00:53:04 +0000 (0:00:00.620) 0:15:02.957 ***** TASK 
[kubernetes-apps/network_plugin/cilium : Cilium | Wait for pods to run] *** Saturday 09 February 2019 00:53:04 +0000 (0:00:00.387) 0:15:03.345 ***** TASK [kubernetes-apps/network_plugin/calico : Start Calico resources] ********** Saturday 09 February 2019 00:53:04 +0000 (0:00:00.219) 0:15:03.564 ***** TASK [kubernetes-apps/network_plugin/calico : calico upgrade complete] ********* Saturday 09 February 2019 00:53:05 +0000 (0:00:00.241) 0:15:03.806 ***** TASK [kubernetes-apps/network_plugin/canal : Canal | Start Resources] ********** Saturday 09 February 2019 00:53:05 +0000 (0:00:00.232) 0:15:04.038 ***** TASK [kubernetes-apps/network_plugin/flannel : Flannel | Start Resources] ****** Saturday 09 February 2019 00:53:05 +0000 (0:00:00.364) 0:15:04.403 ***** ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'973704ff91b4c9341dccaf1da6003177', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 836, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673538.54-185605657055457/source', u'group': u'root', '_ansible_item_label': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, 'item': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', u'selevel': None, u'_original_basename': u'cni-flannel-rbac.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673538.54-185605657055457/source', u'group': None, u'unsafe_writes': None, u'checksum': 
u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None}) ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'51829ca2a2d540389c94291f63118112', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 3198, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673540.09-200080475438809/source', u'group': u'root', '_ansible_item_label': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, 'item': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel.yml', u'selevel': None, u'_original_basename': u'cni-flannel.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673540.09-200080475438809/source', u'group': None, u'unsafe_writes': None, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None}) TASK [kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence] *** Saturday 09 February 2019 00:53:08 +0000 (0:00:03.135) 0:15:07.539 ***** ok: [kube1] fatal: [kube2]: FAILED! 
=> {"changed": false, "elapsed": 600, "msg": "Timeout when waiting for file /run/flannel/subnet.env"} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=364 changed=103 unreachable=0 failed=0 kube2 : ok=315 changed=91 unreachable=0 failed=1 kube3 : ok=282 changed=78 unreachable=0 failed=0 Saturday 09 February 2019 01:03:10 +0000 (0:10:01.972) 0:25:09.511 ***** =============================================================================== kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence - 601.97s kubernetes/master : kubeadm | Initialize first master ------------------ 39.72s download : container_download | download images for kubeadm config images -- 39.37s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.84s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.98s Wait for host to be available ------------------------------------------ 32.06s Install packages ------------------------------------------------------- 30.08s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.47s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.57s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.95s gather facts from all instances ---------------------------------------- 12.18s download : file_download | Download item ------------------------------- 10.50s container-engine/docker : Docker | pause while Docker restarts --------- 10.39s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.60s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.25s kubernetes/master : slurp 
kubeadm certs --------------------------------- 8.42s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 7.74s Persist loaded modules -------------------------------------------------- 5.59s etcd : Configure | Check if etcd cluster is healthy --------------------- 5.33s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 4.89s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Feb 10 01:00:35 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 10 Feb 2019 01:00:35 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #75 In-Reply-To: <226468009.8630.1549674191239.JavaMail.jenkins@jenkins.ci.centos.org> References: <226468009.8630.1549674191239.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <96162208.8797.1549760435409.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Mon Feb 11 00:55:43 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 11 Feb 2019 00:55:43 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #76 Message-ID: 
<439001688.8970.1549846543239.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.39 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Monday 11 February 2019 00:45:18 +0000 (0:00:11.954) 0:10:34.344 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Monday 11 February 2019 00:45:18 +0000 (0:00:00.099) 0:10:34.444 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Monday 11 February 2019 00:45:18 +0000 (0:00:00.218) 0:10:34.663 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Monday 11 February 2019 00:45:19 +0000 (0:00:00.819) 
0:10:35.482 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Monday 11 February 2019 00:45:19 +0000 (0:00:00.206) 0:10:35.689 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Monday 11 February 2019 00:45:20 +0000 (0:00:00.787) 0:10:36.477 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Monday 11 February 2019 00:45:20 +0000 (0:00:00.206) 0:10:36.683 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Monday 11 February 2019 00:45:21 +0000 (0:00:00.811) 0:10:37.495 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Monday 11 February 2019 00:45:22 +0000 (0:00:00.708) 0:10:38.203 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Monday 11 February 2019 00:45:23 +0000 (0:00:00.736) 0:10:38.940 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Monday 11 February 2019 00:45:34 +0000 (0:00:10.920) 0:10:49.860 ******* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Monday 11 February 2019 00:45:34 +0000 (0:00:00.640) 0:10:50.500 ******* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Monday 11 February 2019 00:45:35 +0000 (0:00:00.457) 0:10:50.958 ******* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Monday 11 February 2019 00:45:35 +0000 (0:00:00.468) 0:10:51.426 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Monday 11 February 2019 00:45:36 +0000 (0:00:00.716) 0:10:52.143 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Monday 11 February 2019 00:45:37 +0000 (0:00:00.864) 0:10:53.007 ******* FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (4 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Monday 11 February 2019 00:45:49 +0000 (0:00:11.894) 0:11:04.902 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Monday 11 February 2019 00:45:49 +0000 (0:00:00.149) 0:11:05.052 ******* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). 
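The etcd-cluster wait above emitted five retry messages before succeeding after 54.63 s elapsed. Since Ansible sleeps a fixed delay between attempts, the retry lines alone give a lower bound on time spent polling; a small sketch (added for illustration -- the 10-second delay is an assumption inferred from the elapsed times, not confirmed by the playbook):

```shell
#!/bin/sh
# Estimate how long a "wait for X" task spent retrying, from its log lines.
log='FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).'

delay=10   # assumed seconds between attempts
retries=$(printf '%s\n' "$log" | grep -c 'FAILED - RETRYING')
echo "at least $((retries * delay))s spent waiting"   # prints "at least 50s spent waiting"
```

Five retries at ~10 s each accounts for most of the 54.63 s the task reports, which is why these countdown runs dominate the timing summaries at the end of each failed build.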
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Monday 11 February 2019 00:46:43 +0000 (0:00:54.634) 0:11:59.687 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Monday 11 February 2019 00:46:44 +0000 (0:00:00.791) 0:12:00.478 ******* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 11 February 2019 00:46:44 +0000 (0:00:00.110) 0:12:00.588 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Monday 11 February 2019 00:46:44 +0000 (0:00:00.143) 0:12:00.732 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 11 February 2019 00:46:45 +0000 (0:00:00.674) 0:12:01.407 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Monday 11 February 2019 00:46:45 +0000 (0:00:00.149) 0:12:01.557 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 11 February 2019 00:46:46 +0000 (0:00:00.837) 0:12:02.395 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Monday 11 February 2019 00:46:46 +0000 (0:00:00.154) 0:12:02.550 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Monday 11 February 2019 00:46:47 +0000 (0:00:00.774) 0:12:03.324 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Monday 11 February 2019 00:46:48 +0000 (0:00:00.538) 0:12:03.863 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Monday 11 February 2019 00:46:48 +0000 (0:00:00.149) 0:12:04.012 ******* FAILED 
- RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). [the same message repeats, counting down from 49 to 1 retries left] fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.59.33:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Monday 11 February 2019 00:55:42 +0000 (0:08:54.779) 0:20:58.792 ******* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 534.78s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 54.63s download : container_download | download images for kubeadm config images -- 34.82s kubernetes/master : kubeadm | Initialize first master ------------------ 27.95s kubernetes/master : kubeadm | Init other uninitialized masters --------- 24.91s Install packages ------------------------------------------------------- 24.03s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.68s Wait for host to be available ------------------------------------------ 16.44s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.73s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.41s etcd : Gen_certs | Write etcd master certs ----------------------------- 13.08s Extend root VG --------------------------------------------------------- 12.05s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.95s GCS | ETCD Cluster | Get etcd-client service --------------------------- 11.89s kubernetes/node : install | Copy hyperkube binary from download dir ---- 11.12s etcd : reload etcd 
----------------------------------------------------- 11.11s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.92s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 10.76s container-engine/docker : Docker | pause while Docker restarts --------- 10.22s gather facts from all instances ----------------------------------------- 8.34s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Feb 12 00:57:17 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 12 Feb 2019 00:57:17 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #77 In-Reply-To: <439001688.8970.1549846543239.JavaMail.jenkins@jenkins.ci.centos.org> References: <439001688.8970.1549846543239.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <682881858.9167.1549933037927.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.37 KB...] 
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-fs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-virtblock-csi.yml)
changed: [kube1] => (item=gcs-storage-virtblock.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Tuesday 12 February 2019  00:46:48 +0000 (0:00:11.969)       0:10:33.900 ******
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Tuesday 12 February 2019  00:46:48 +0000 (0:00:00.090)       0:10:33.991 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Tuesday 12 February 2019  00:46:49 +0000 (0:00:00.205)       0:10:34.197 ******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Tuesday 12 February 2019  00:46:49 +0000 (0:00:00.724)       0:10:34.921 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Tuesday 12 February 2019  00:46:50 +0000 (0:00:00.226)       0:10:35.148 ******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Tuesday 12 February 2019  00:46:50 +0000 (0:00:00.793)       0:10:35.942 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Tuesday 12 February 2019  00:46:51 +0000 (0:00:00.208)       0:10:36.150 ******
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Tuesday 12 February 2019  00:46:51 +0000 (0:00:00.704)       0:10:36.854 ******
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Tuesday 12 February 2019  00:46:52 +0000 (0:00:00.742)       0:10:37.596 ******
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Tuesday 12 February 2019  00:46:53 +0000 (0:00:00.749)       0:10:38.345 ******
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Tuesday 12 February 2019  00:47:04 +0000 (0:00:11.515)       0:10:49.861 ******
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Tuesday 12 February 2019  00:47:05 +0000 (0:00:00.650)       0:10:50.511 ******
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Tuesday 12 February 2019  00:47:05 +0000 (0:00:00.538)       0:10:51.050 ******
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Tuesday 12 February 2019  00:47:06 +0000 (0:00:00.534)       0:10:51.585 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Tuesday 12 February 2019  00:47:07 +0000 (0:00:00.799)       0:10:52.385 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Tuesday 12 February 2019  00:47:08 +0000 (0:00:00.964)       0:10:53.350 ******
FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left).
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Tuesday 12 February 2019  00:47:14 +0000 (0:00:06.275)       0:10:59.625 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Tuesday 12 February 2019  00:47:14 +0000 (0:00:00.228)       0:10:59.854 ******
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Tuesday 12 February 2019  00:48:20 +0000 (0:01:06.108)       0:12:05.963 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Tuesday 12 February 2019  00:48:21 +0000 (0:00:00.829)       0:12:06.793 ******
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Tuesday 12 February 2019  00:48:21 +0000 (0:00:00.109)       0:12:06.903 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Tuesday 12 February 2019  00:48:22 +0000 (0:00:00.225)       0:12:07.128 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Tuesday 12 February 2019  00:48:22 +0000 (0:00:00.763)       0:12:07.892 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Tuesday 12 February 2019  00:48:23 +0000 (0:00:00.228)       0:12:08.120 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Tuesday 12 February 2019  00:48:23 +0000 (0:00:00.895)       0:12:09.015 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Tuesday 12 February 2019  00:48:24 +0000 (0:00:00.216)       0:12:09.231 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Tuesday 12 February 2019  00:48:24 +0000 (0:00:00.839)       0:12:10.071 ******
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Tuesday 12 February 2019  00:48:25 +0000 (0:00:00.634)       0:12:10.705 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Tuesday 12 February 2019  00:48:25 +0000 (0:00:00.234)       0:12:10.940 ******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left).
fatal: [kube1]: FAILED! => {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.46.243:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=421  changed=119  unreachable=0    failed=1
kube2                      : ok=320  changed=93   unreachable=0    failed=0
kube3                      : ok=283  changed=78   unreachable=0    failed=0

Tuesday 12 February 2019  00:57:17 +0000 (0:08:51.880)       0:21:02.821 ******
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 531.88s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 66.11s
download : container_download | download images for kubeadm config images -- 38.34s
kubernetes/master : kubeadm | Initialize first master ------------------ 29.49s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.02s
Install packages ------------------------------------------------------- 22.81s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 21.81s
Wait for host to be available ------------------------------------------ 16.62s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.83s
etcd : Gen_certs | Write etcd master certs ----------------------------- 12.91s
Extend root VG --------------------------------------------------------- 12.64s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.97s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 11.52s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.12s
etcd : reload etcd ----------------------------------------------------- 10.84s
container-engine/docker : Docker | pause while Docker restarts --------- 10.26s
kubernetes/node : Enable kubelet ---------------------------------------- 8.45s
etcd : wait for etcd up ------------------------------------------------- 8.26s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests --- 8.20s
download : file_download | Download item -------------------------------- 7.95s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Wed Feb 13 00:15:58 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 13 Feb 2019 00:15:58 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #274
Message-ID: <858700220.9359.1550016958869.JavaMail.jenkins@jenkins.ci.centos.org>

See 

------------------------------------------
[...truncated 36.55 KB...]
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.27-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.27-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               73 MB/s | 141 MB  00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                     1/46
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                  2/46
  Installing : apr-util-1.5.2-6.el7.x86_64                                 3/46
  Installing : libmpc-1.0.1-3.el7.x86_64                                   4/46
  Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/46
  Installing : python-six-1.9.0-2.el7.noarch                               6/46
  Installing : cpp-4.8.5-36.el7.x86_64                                     7/46
  Installing : elfutils-0.172-2.el7.x86_64                                 8/46
  Installing : pakchois-0.4-10.el7.x86_64                                  9/46
  Installing : perl-srpm-macros-1-8.el7.noarch                            10/46
  Installing : unzip-6.0-19.el7.x86_64                                    11/46
  Installing : dwz-0.11-3.el7.x86_64                                      12/46
  Installing : pigz-2.3.4-1.el7.x86_64                                    13/46
  Installing : usermode-1.111-5.el7.x86_64                                14/46
  Installing : python2-distro-1.2.0-1.el7.noarch                          15/46
  Installing : patch-2.7.1-10.el7_5.x86_64                                16/46
  Installing : python-backports-1.0-8.el7.x86_64                          17/46
  Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   18/46
  Installing : python-urllib3-1.10.2-5.el7.noarch                         19/46
  Installing : python-requests-2.6.0-1.el7_1.noarch                       20/46
  Installing : libmodman-2.0.1-8.el7.x86_64                               21/46
  Installing : libproxy-0.4.11-11.el7.x86_64                              22/46
  Installing : gdb-7.6.1-114.el7.x86_64                                   23/46
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                        24/46
  Installing : golang-src-1.11.4-1.el7.noarch                             25/46
  Installing : bzip2-1.0.6-13.el7.x86_64                                  26/46
  Installing : distribution-gpg-keys-1.27-1.el7.noarch                    27/46
  Installing : mock-core-configs-29.4-1.el7.noarch                        28/46
  Installing : python2-pyroute2-0.4.13-1.el7.noarch                       29/46
  Installing : nettle-2.7.1-8.el7.x86_64                                  30/46
  Installing : zip-3.0-11.el7.x86_64                                     31/46
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch               32/46
  Installing : mercurial-2.6.2-8.el7_4.x86_64                             33/46
  Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64                   34/46
  Installing : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Installing : glibc-devel-2.17-260.el7_6.3.x86_64                        36/46
  Installing : gcc-4.8.5-36.el7.x86_64                                    37/46
  Installing : trousers-0.3.14-2.el7.x86_64                               38/46
  Installing : gnutls-3.3.29-8.el7.x86_64                                 39/46
  Installing : neon-0.30.0-3.el7.x86_64                                   40/46
  Installing : subversion-libs-1.7.14-14.el7.x86_64                       41/46
  Installing : subversion-1.7.14-14.el7.x86_64                            42/46
  Installing : golang-1.11.4-1.el7.x86_64                                 43/46
  Installing : golang-bin-1.11.4-1.el7.x86_64                             44/46
  Installing : rpm-build-4.11.3-35.el7.x86_64                             45/46
  Installing : mock-1.4.13-1.el7.noarch                                   46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                1/46
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                        2/46
  Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch    3/46
  Verifying  : kernel-headers-3.10.0-957.5.1.el7.x86_64                    4/46
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                              5/46
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                              6/46
  Verifying  : zip-3.0-11.el7.x86_64                                       7/46
  Verifying  : nettle-2.7.1-8.el7.x86_64                                   8/46
  Verifying  : gnutls-3.3.29-8.el7.x86_64                                  9/46
  Verifying  : cpp-4.8.5-36.el7.x86_64                                    10/46
  Verifying  : python2-pyroute2-0.4.13-1.el7.noarch                       11/46
  Verifying  : distribution-gpg-keys-1.27-1.el7.noarch                    12/46
  Verifying  : golang-1.11.4-1.el7.x86_64                                 13/46
  Verifying  : golang-bin-1.11.4-1.el7.x86_64                             14/46
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                  15/46
  Verifying  : gcc-4.8.5-36.el7.x86_64                                    16/46
  Verifying  : golang-src-1.11.4-1.el7.noarch                             17/46
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                        18/46
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                                 19/46
  Verifying  : gdb-7.6.1-114.el7.x86_64                                   20/46
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch               21/46
  Verifying  : python-urllib3-1.10.2-5.el7.noarch                         22/46
  Verifying  : glibc-devel-2.17-260.el7_6.3.x86_64                        23/46
  Verifying  : libmodman-2.0.1-8.el7.x86_64                               24/46
  Verifying  : mock-1.4.13-1.el7.noarch                                   25/46
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                    26/46
  Verifying  : apr-util-1.5.2-6.el7.x86_64                                27/46
  Verifying  : python-backports-1.0-8.el7.x86_64                          28/46
  Verifying  : patch-2.7.1-10.el7_5.x86_64                                29/46
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                  30/46
  Verifying  : python2-distro-1.2.0-1.el7.noarch                          31/46
  Verifying  : usermode-1.111-5.el7.x86_64                                32/46
  Verifying  : python-six-1.9.0-2.el7.noarch                              33/46
  Verifying  : libproxy-0.4.11-11.el7.x86_64                              34/46
  Verifying  : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Verifying  : neon-0.30.0-3.el7.x86_64                                   36/46
  Verifying  : python-requests-2.6.0-1.el7_1.noarch                       37/46
  Verifying  : pigz-2.3.4-1.el7.x86_64                                    38/46
  Verifying  : subversion-1.7.14-14.el7.x86_64                            39/46
  Verifying  : python-ipaddress-1.0.16-2.el7.noarch                       40/46
  Verifying  : dwz-0.11-3.el7.x86_64                                      41/46
  Verifying  : unzip-6.0-19.el7.x86_64                                    42/46
  Verifying  : perl-srpm-macros-1-8.el7.noarch                            43/46
  Verifying  : mock-core-configs-29.4-1.el7.noarch                        44/46
  Verifying  : pakchois-0.4-10.el7.x86_64                                 45/46
  Verifying  : elfutils-0.172-2.el7.x86_64                                46/46

Installed:
  golang.x86_64 0:1.11.4-1.el7          mock.noarch 0:1.4.13-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1          apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7           cpp.x86_64 0:4.8.5-36.el7
  distribution-gpg-keys.noarch 0:1.27-1.el7   dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7         gcc.x86_64 0:4.8.5-36.el7
  gdb.x86_64 0:7.6.1-114.el7            glibc-devel.x86_64 0:2.17-260.el7_6.3
  glibc-headers.x86_64 0:2.17-260.el7_6.3     gnutls.x86_64 0:3.3.29-8.el7
  golang-bin.x86_64 0:1.11.4-1.el7      golang-src.noarch 0:1.11.4-1.el7
  kernel-headers.x86_64 0:3.10.0-957.5.1.el7  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7           libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4      mock-core-configs.noarch 0:29.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7             neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7           pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5         perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7     pigz.x86_64 0:2.3.4-1.el7
  python-backports.x86_64 0:1.0-8.el7   python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7      python-requests.noarch 0:2.6.0-1.el7_1
  python-six.noarch 0:1.9.0-2.el7       python-urllib3.noarch 0:1.10.2-5.el7
  python2-distro.noarch 0:1.2.0-1.el7   python2-pyroute2.noarch 0:0.4.13-1.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos    subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7      trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7             usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep. Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   605    0   605    0     0   1959      0 --:--:-- --:--:-- --:--:--  1970
100 8513k  100 8513k    0     0  15.5M      0 --:--:-- --:--:-- --:--:-- 15.5M
Installing gometalinter. Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   627    0   627    0     0   2113      0 --:--:-- --:--:-- --:--:--  2118
  0 38.3M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 38.3M  100 38.3M    0     0  41.3M      0 --:--:-- --:--:-- --:--:-- 82.7M
Installing etcd. Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   153    0   153    0     0    551      0 --:--:-- --:--:-- --:--:--   550
  0     0    0   620    0     0   1671      0 --:--:-- --:--:-- --:--:--  1671
100 10.7M  100 10.7M    0     0  16.6M      0 --:--:-- --:--:-- --:--:-- 16.6M
~/nightlyrpmiHSEd4/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmiHSEd4/glusterd2-v6.0-dev.134.git830c8c9-vendor.tar.xz
Created dist archive /root/nightlyrpmiHSEd4/glusterd2-v6.0-dev.134.git830c8c9-vendor.tar.xz
~
~/nightlyrpmiHSEd4 ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmiHSEd4/rpmbuild/SRPMS/glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmiHSEd4/rpmbuild/SRPMS/glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 981579881a454d51a5935f54f9edd09c -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.qdJYRR:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec

Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins141371690002751158.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done b25d1a92
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 139     | n12.crusty | 172.19.2.12 | crusty  | 3135       | Deployed      | b25d1a92 | None   | None | 7              | x86_64       | 1         | 2110         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Wed Feb 13 02:04:10 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 13 Feb 2019 02:04:10 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #78
In-Reply-To: <682881858.9167.1549933037927.JavaMail.jenkins@jenkins.ci.centos.org>
References: <682881858.9167.1549933037927.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <271037888.9374.1550023451275.JavaMail.jenkins@jenkins.ci.centos.org>

See 

------------------------------------------
[...truncated 466.08 KB...]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Wednesday 13 February 2019  00:58:11 +0000 (0:00:01.449)       0:20:15.375 ****
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Wednesday 13 February 2019  00:58:13 +0000 (0:00:01.664)       0:20:17.040 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Wednesday 13 February 2019  00:58:13 +0000 (0:00:00.359)       0:20:17.399 ****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Wednesday 13 February 2019  00:59:04 +0000 (0:00:50.228)       0:21:07.628 ****
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Wednesday 13 February 2019  00:59:04 +0000 (0:00:00.270)       0:21:07.898 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] *****************
Wednesday 13 February 2019  00:59:04 +0000 (0:00:00.398)       0:21:08.297 ****
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
ok: [kube1] => (item=/dev/vdc)
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
ok: [kube1] => (item=/dev/vdd)
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
ok: [kube1] => (item=/dev/vde)

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Wednesday 13 February 2019  01:00:19 +0000 (0:01:14.420)       0:22:22.717 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] *****************
Wednesday 13 February 2019  01:00:19 +0000 (0:00:00.431)       0:22:23.148 ****
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left).
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.23.158:24007/v1/devices/44b01243-6402-4901-b361-4a13b7cda90c"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.23.158:24007/v1/devices/44b01243-6402-4901-b361-4a13b7cda90c"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (28 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (7 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (1 retries left). failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.23.158:24007/v1/devices/44b01243-6402-4901-b361-4a13b7cda90c"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=428 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 13 February 2019 02:04:10 +0000 (1:03:50.819) 1:26:13.968 **** =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 3830.82s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.98s GCS | GD2 Cluster | Add devices | Add devices for kube1 ---------------- 74.42s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 50.23s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.37s kubernetes/master : kubeadm | Initialize first master ------------------ 39.12s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.49s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.50s download : container_download | download images for kubeadm config 
images -- 32.77s Install packages ------------------------------------------------------- 30.28s Wait for host to be available ------------------------------------------ 20.46s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.37s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.68s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.51s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.06s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.77s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.25s etcd : reload etcd ----------------------------------------------------- 12.10s gather facts from all instances ---------------------------------------- 12.06s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.28s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
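[Editor's note] The "FAILED - RETRYING: ... (N retries left)" countdown in the log above is the output of an Ansible `until`/`retries` loop around a `uri` task that expects HTTP 201 from the glusterd2 REST endpoint. A minimal sketch of such a task follows; this is an illustration only, not the actual contents of /root/gcs/deploy/tasks/add-devices-to-peer.yml, and the variable names (gd2_client_endpoint, peer_id, gd2_devices) are assumptions:

```yaml
# Hypothetical sketch of a retrying device-add task.
# Each failed attempt prints one "FAILED - RETRYING ... (N retries left)" line;
# after 50 attempts Ansible emits the final failed dict with "attempts": 50.
- name: GCS | GD2 Cluster | Add devices | Add devices for kube3
  uri:
    url: "{{ gd2_client_endpoint }}/v1/devices/{{ peer_id }}"   # assumed vars
    method: POST
    body_format: json
    body:
      device: "{{ item }}"        # e.g. /dev/vdc
    status_code: 201              # anything else counts as a failed attempt
  register: result
  until: result.status == 201
  retries: 50                     # matches the countdown seen in the log
  delay: 5                        # assumed interval between attempts, seconds
  with_items: "{{ gd2_devices }}"
```

With this shape, the "Status code was -1 and not [201]" message corresponds to the `uri` module failing to connect at all (timeout / connection failure) rather than receiving a non-201 response.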
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
From ci at centos.org  Thu Feb 14 00:15:53 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 14 Feb 2019 00:15:53 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #275
In-Reply-To: <858700220.9359.1550016958869.JavaMail.jenkins@jenkins.ci.centos.org>
References: <858700220.9359.1550016958869.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1799588165.9523.1550103353076.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 36.19 KB...]
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.27-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.27-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                              59 MB/s | 141 MB  00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                     1/46
  [... per-package "Installing" progress lines 2/46 through 45/46 elided; the
   packages are those listed under "Installed" and "Dependency Installed" below ...]
  Installing : mock-1.4.13-1.el7.noarch                                   46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                1/46
  [... per-package "Verifying" progress lines 2/46 through 45/46 elided ...]
  Verifying  : elfutils-0.172-2.el7.x86_64                                46/46

Installed:
  golang.x86_64 0:1.11.4-1.el7
  mock.noarch 0:1.4.13-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7
  distribution-gpg-keys.noarch 0:1.27-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.3
  glibc-headers.x86_64 0:2.17-260.el7_6.3
  gnutls.x86_64 0:3.3.29-8.el7
  golang-bin.x86_64 0:1.11.4-1.el7
  golang-src.noarch 0:1.11.4-1.el7
  kernel-headers.x86_64 0:3.10.0-957.5.1.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:29.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python-backports.x86_64 0:1.0-8.el7
  python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7
  python-requests.noarch 0:2.6.0-1.el7_1
  python-six.noarch 0:1.9.0-2.el7
  python-urllib3.noarch 0:1.10.2-5.el7
  python2-distro.noarch 0:1.2.0-1.el7
  python2-pyroute2.noarch 0:0.4.13-1.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep.
Version: v0.5.0
  [... curl download progress meter elided; 8513k fetched ...]
Installing gometalinter.
Version: 2.0.5
  [... curl download progress meter elided; 38.3M fetched ...]
Installing etcd.
Version: v3.3.9
  [... curl download progress meter elided; 10.7M fetched ...]
~/nightlyrpmzHFEd8/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmzHFEd8/glusterd2-v6.0-dev.134.git830c8c9-vendor.tar.xz
Created dist archive /root/nightlyrpmzHFEd8/glusterd2-v6.0-dev.134.git830c8c9-vendor.tar.xz
~ ~/nightlyrpmzHFEd8 ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmzHFEd8/rpmbuild/SRPMS/glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmzHFEd8/rpmbuild/SRPMS/glusterd2-5.0-0.dev.134.git830c8c9.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 805d06a8c6e54301be76be02afbba55e -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.G98PQo:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6505568251136391671.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 2d196d98
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 284     | n29.gusty | 172.19.2.157 | gusty   | 3184       | Deployed      | 2d196d98 | None   | None | 7              | x86_64       | 1         | 2280         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org  Thu Feb 14 02:03:42 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 14 Feb 2019 02:03:42 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #79
In-Reply-To: <271037888.9374.1550023451275.JavaMail.jenkins@jenkins.ci.centos.org>
References: <271037888.9374.1550023451275.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <248198202.9547.1550109822203.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 465.51 KB...]
TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 14 February 2019  00:59:54 +0000 (0:00:01.520)       0:20:16.389 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Thursday 14 February 2019  00:59:54 +0000 (0:00:00.350)       0:20:16.739 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 14 February 2019  00:59:56 +0000 (0:00:01.642)       0:20:18.382 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Thursday 14 February 2019  00:59:56 +0000 (0:00:00.330)       0:20:18.712 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Thursday 14 February 2019  00:59:58 +0000 (0:00:01.563)       0:20:20.276 *****
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Thursday 14 February 2019  00:59:59 +0000 (0:00:01.476)       0:20:21.752 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Thursday 14 February 2019  00:59:59 +0000 (0:00:00.305)       0:20:22.058 *****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Thursday 14 February 2019  01:00:49 +0000 (0:00:50.021)       0:21:12.080 *****
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Thursday 14 February 2019  01:00:50 +0000 (0:00:00.250)       0:21:12.331 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] *****************
Thursday 14 February 2019  01:00:50 +0000 (0:00:00.369)       0:21:12.701 *****
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
  [... identical retry messages elided, counting down from 49 to 1 ...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.31.148:24007/v1/devices/71521e5a-22a1-4bd3-8799-184840157e74"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left).
  [... retry messages continue counting down; log truncated at (28 retries left) ...]
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (7 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (1 retries left). failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.31.148:24007/v1/devices/71521e5a-22a1-4bd3-8799-184840157e74"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (39 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (18 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (1 retries left). 
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.31.148:24007/v1/devices/71521e5a-22a1-4bd3-8799-184840157e74"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=426 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Thursday 14 February 2019 02:03:41 +0000 (1:02:51.279) 1:24:03.981 ***** =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube2 -------------- 3771.28s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.90s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 50.02s kubernetes/master : kubeadm | Initialize first master ------------------ 39.48s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.67s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.59s download : container_download | download images for kubeadm config images -- 33.68s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.47s Install packages ------------------------------------------------------- 30.19s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.87s Wait for host to be available ------------------------------------------ 20.53s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.00s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.24s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.83s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.22s gather facts from all 
instances ---------------------------------------- 13.11s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.24s etcd : reload etcd ----------------------------------------------------- 12.00s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.41s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.19s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Feb 15 05:49:34 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 15 Feb 2019 05:49:34 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #276 In-Reply-To: <1799588165.9523.1550103353076.JavaMail.jenkins@jenkins.ci.centos.org> References: <1799588165.9523.1550103353076.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2115791639.9744.1550209774867.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 36.18 KB...] 
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.27-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.27-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               69 MB/s | 141 MB  00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                      1/46
  [...installation of the remaining intermediate packages (2/46 through 45/46) omitted...]
  Installing : mock-1.4.13-1.el7.noarch                                    46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                 1/46
  [...verification of the remaining intermediate packages (2/46 through 45/46) omitted...]
  Verifying  : elfutils-0.172-2.el7.x86_64                                 46/46

Installed:
  golang.x86_64 0:1.11.4-1.el7         mock.noarch 0:1.4.13-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1                  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7                   cpp.x86_64 0:4.8.5-36.el7
  distribution-gpg-keys.noarch 0:1.27-1.el7     dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7                 gcc.x86_64 0:4.8.5-36.el7
  gdb.x86_64 0:7.6.1-114.el7                    glibc-devel.x86_64 0:2.17-260.el7_6.3
  glibc-headers.x86_64 0:2.17-260.el7_6.3       gnutls.x86_64 0:3.3.29-8.el7
  golang-bin.x86_64 0:1.11.4-1.el7              golang-src.noarch 0:1.11.4-1.el7
  kernel-headers.x86_64 0:3.10.0-957.5.1.el7    libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7                   libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4              mock-core-configs.noarch 0:29.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7                     neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7                   pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5                 perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7             pigz.x86_64 0:2.3.4-1.el7
  python-backports.x86_64 0:1.0-8.el7           python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7        python-requests.noarch 0:2.6.0-1.el7_1
  python-six.noarch 0:1.9.0-2.el7               python-urllib3.noarch 0:1.10.2-5.el7
  python2-distro.noarch 0:1.2.0-1.el7           python2-pyroute2.noarch 0:0.4.13-1.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7        trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7                     usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep. Version: v0.5.0
[...curl download progress output omitted...]
Installing gometalinter. Version: 2.0.5
[...curl download progress output omitted...]
Installing etcd. Version: v3.3.9
[...curl download progress output omitted...]
~/nightlyrpm0krFeH/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpm0krFeH/glusterd2-v6.0-dev.136.git17bc6f4-vendor.tar.xz
Created dist archive /root/nightlyrpm0krFeH/glusterd2-v6.0-dev.136.git17bc6f4-vendor.tar.xz
~ ~/nightlyrpm0krFeH ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpm0krFeH/rpmbuild/SRPMS/glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpm0krFeH/rpmbuild/SRPMS/glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 838653ee4d034ec399f282007ceaca15 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.6SJRld:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins305897811460454951.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 10f9b607 +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 256 | n1.gusty | 172.19.2.129 | gusty | 3191 | Deployed | 10f9b607 | None | None | 7 | x86_64 | 1 | 2000 | None | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Feb 
15 07:23:11 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 15 Feb 2019 07:23:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #80 In-Reply-To: <248198202.9547.1550109822203.JavaMail.jenkins@jenkins.ci.centos.org> References: <248198202.9547.1550109822203.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1695185325.9752.1550215391618.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 466.15 KB...] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Friday 15 February 2019 06:17:41 +0000 (0:00:01.450) 0:20:21.539 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Friday 15 February 2019 06:17:42 +0000 (0:00:01.313) 0:20:22.852 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Friday 15 February 2019 06:17:43 +0000 (0:00:00.345) 0:20:23.197 ******* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Friday 15 February 2019 06:18:33 +0000 (0:00:50.119) 0:21:13.317 ******* included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Friday 15 February 2019 06:18:33 +0000 (0:00:00.300) 0:21:13.617 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] ***************** Friday 15 February 2019 06:18:33 +0000 (0:00:00.378) 0:21:13.996 ******* FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (49 retries left). ok: [kube1] => (item=/dev/vdc) FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (49 retries left). ok: [kube1] => (item=/dev/vdd) FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (49 retries left). ok: [kube1] => (item=/dev/vde) TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Friday 15 February 2019 06:19:48 +0000 (0:01:14.418) 0:22:28.415 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] ***************** Friday 15 February 2019 06:19:48 +0000 (0:00:00.427) 0:22:28.843 ******* FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (48 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (47 retries left).
[... identical retry lines elided, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.53.142:24007/v1/devices/b99d0518-9dca-48ed-b42e-e6068ef77f61"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[... identical retry lines elided, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.53.142:24007/v1/devices/b99d0518-9dca-48ed-b42e-e6068ef77f61"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[... identical retry lines elided, counting down to (1 retries left) ...]
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.53.142:24007/v1/devices/b99d0518-9dca-48ed-b42e-e6068ef77f61"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=428  changed=119  unreachable=0  failed=1
kube2                      : ok=320  changed=93   unreachable=0  failed=0
kube3                      : ok=283  changed=78   unreachable=0  failed=0

Friday 15 February 2019  07:23:11 +0000 (1:03:22.359)       1:25:51.202 *******
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 3802.36s
GCS | GD2 Cluster | Add devices | Add devices for kube2 ---------------- 74.42s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 73.00s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 50.12s
kubernetes/master : kubeadm | Initialize first master ------------------ 40.54s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.80s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.65s
etcd : Gen_certs | Write etcd master certs ----------------------------- 34.45s
download : container_download | download images for kubeadm config images -- 33.39s
Install packages ------------------------------------------------------- 31.10s
Wait for host to be available ------------------------------------------ 20.81s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.08s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.58s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.83s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.05s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 13.43s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.41s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.77s
gather facts from all instances ---------------------------------------- 12.39s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.23s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above.
Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
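The long runs of "FAILED - RETRYING" above come from Ansible's `retries`/`until` loop around the device-add call, which kept timing out against the glusterd2 endpoint until the 50 attempts were exhausted. As a minimal sketch of that retry pattern in plain shell (the `retry` helper and its counts are illustrative only, not part of the actual playbook):

```shell
#!/bin/sh
# Sketch of the retries/until behaviour seen in the log above: run a
# command, print a countdown on each failure, give up after MAX attempts.
retry() {
    max=$1; shift
    i=0
    while ! "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$max" ]; then
            return 1            # out of retries: report failure to caller
        fi
        echo "FAILED - RETRYING ($((max - i)) retries left)."
    done
    return 0
}

# A command that always fails exhausts its retries, just as the
# device-add task did after 50 attempts:
retry 3 false || echo "giving up after 3 attempts"
```

In the real playbook the retried command is an HTTP request expecting status 201, so the loop also waits out temporary unreadiness of the service rather than only hard failures.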
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Sat Feb 16 22:08:20 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 16 Feb 2019 22:08:20 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #277
In-Reply-To: <2115791639.9744.1550209774867.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2115791639.9744.1550209774867.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <689653863.9906.1550354900616.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 36.18 KB...]
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               69 MB/s | 141 MB  00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                      1/46
  [... 44 further "Installing" progress lines elided ...]
  Installing : rpm-build-4.11.3-35.el7.x86_64                              46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                 1/46
  [... 44 further "Verifying" progress lines elided ...]
  Verifying  : elfutils-0.172-2.el7.x86_64                                 46/46

Installed:
  golang.x86_64 0:1.11.4-1.el7   mock.noarch 0:1.4.13-1.el7   rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1                 apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7                  cpp.x86_64 0:4.8.5-36.el7
  distribution-gpg-keys.noarch 0:1.28-1.el7    dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7                gcc.x86_64 0:4.8.5-36.el7
  gdb.x86_64 0:7.6.1-114.el7                   glibc-devel.x86_64 0:2.17-260.el7_6.3
  glibc-headers.x86_64 0:2.17-260.el7_6.3      gnutls.x86_64 0:3.3.29-8.el7
  golang-bin.x86_64 0:1.11.4-1.el7             golang-src.noarch 0:1.11.4-1.el7
  kernel-headers.x86_64 0:3.10.0-957.5.1.el7   libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7                  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4             mock-core-configs.noarch 0:29.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7                    neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7                  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5                perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7            pigz.x86_64 0:2.3.4-1.el7
  python-backports.x86_64 0:1.0-8.el7          python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7       python-requests.noarch 0:2.6.0-1.el7_1
  python-six.noarch 0:1.9.0-2.el7              python-urllib3.noarch 0:1.10.2-5.el7
  python2-distro.noarch 0:1.2.0-1.el7          python2-pyroute2.noarch 0:0.4.13-1.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7       trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7                    usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep. Version: v0.5.0
  [curl download progress output elided]
Installing gometalinter. Version: 2.0.5
  [curl download progress output elided]
Installing etcd. Version: v3.3.9
  [curl download progress output elided]
~/nightlyrpmmaQuf3/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmmaQuf3/glusterd2-v6.0-dev.136.git17bc6f4-vendor.tar.xz
Created dist archive /root/nightlyrpmmaQuf3/glusterd2-v6.0-dev.136.git17bc6f4-vendor.tar.xz
~
~/nightlyrpmmaQuf3 ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmmaQuf3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmmaQuf3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 28 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 69029e6e1e004300af162ae7761dbbf1 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.tFC94G:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$  -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
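The mock ERROR line above names both the failing SRPM and the chroot config, which is what one would need to reproduce the build locally. A small hedged helper that pulls those two fields out of a log line of that shape (the sed patterns are assumptions based on the line's format; the final mock invocation is shown commented out because it requires mock installed and the SRPM present):

```shell
#!/bin/sh
# Extract the SRPM path and mock config from a mock ERROR line of the
# shape seen in the log above; the sample line is copied from that log.
line='ERROR: Exception(/root/nightlyrpmmaQuf3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.136.git17bc6f4.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 28 seconds'

# Capture the text inside Exception(...) and Config(...) respectively.
srpm=$(printf '%s\n' "$line" | sed -n 's/.*Exception(\([^)]*\)).*/\1/p')
cfg=$(printf '%s\n' "$line" | sed -n 's/.*Config(\([^)]*\)).*/\1/p')

echo "srpm=$srpm"
echo "cfg=$cfg"

# To retry the build locally (assumption: mock is installed and the
# SRPM path exists on this machine):
# mock -r "$cfg" --rebuild "$srpm"
```

The actual rpmbuild failure itself is not visible here; the build log in /srv/glusterd2/nightly/master/7/x86_64 mentioned by mock would hold the compiler or spec error.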
Match found for :Building remotely  : True
Logical operation result is TRUE
Running script  :

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4582225681205764986.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done a2cc87e6
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 235     | n44.dusty | 172.19.2.108 | dusty   | 3198       | Deployed      | a2cc87e6 | None   | None | 7              | x86_64       | 1         | 2430         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Feb 16 23:34:20 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 16 Feb 2019 23:34:20 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #81
In-Reply-To: <1695185325.9752.1550215391618.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1695185325.9752.1550215391618.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <171820718.9918.1550360060893.JavaMail.jenkins@jenkins.ci.centos.org>

See

From ci at centos.org  Mon Feb 18 14:42:47 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 18 Feb 2019 14:42:47 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #278
In-Reply-To: <689653863.9906.1550354900616.JavaMail.jenkins@jenkins.ci.centos.org>
References: <689653863.9906.1550354900616.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2033834035.10139.1550500967610.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 36.17 KB...]
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               71 MB/s | 141 MB  00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                      1/46
  [... 44 further "Installing" progress lines elided ...]
  Installing : rpm-build-4.11.3-35.el7.x86_64                              46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                 1/46
  [... 44 further "Verifying" progress lines elided ...]
  Verifying  : elfutils-0.172-2.el7.x86_64                                 46/46

Installed:
  golang.x86_64 0:1.11.4-1.el7   mock.noarch 0:1.4.13-1.el7   rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1   apr-util.x86_64 0:1.5.2-6.el7   bzip2.x86_64 0:1.0.6-13.el7   cpp.x86_64
0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.28-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.4-1.el7 golang-src.noarch 0:1.11.4-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1851 0 --:--:-- --:--:-- --:--:-- 1861 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 10.3M 0 --:--:-- --:--:-- --:--:-- 23.8M Installing gometalinter. 
Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2020 0 --:--:-- --:--:-- --:--:-- 2029 26 38.3M 26 10.1M 0 0 17.1M 0 0:00:02 --:--:-- 0:00:02 17.1M100 38.3M 100 38.3M 0 0 46.1M 0 --:--:-- --:--:-- --:--:-- 116M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 454 0 --:--:-- --:--:-- --:--:-- 454 0 0 0 620 0 0 1320 0 --:--:-- --:--:-- --:--:-- 1320 100 10.7M 100 10.7M 0 0 14.7M 0 --:--:-- --:--:-- --:--:-- 14.7M ~/nightlyrpm8OQzaT/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm8OQzaT/glusterd2-v6.0-dev.139.gitf46dee6-vendor.tar.xz Created dist archive /root/nightlyrpm8OQzaT/glusterd2-v6.0-dev.139.gitf46dee6-vendor.tar.xz ~ ~/nightlyrpm8OQzaT ~ INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpm8OQzaT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpm8OQzaT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 73b212d277114961a2b8685e65731b0f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.lmaJG3:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$  -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
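The mock failure above aborts inside the epel-7-x86_64 chroot before rpmbuild's own error reaches the truncated Jenkins log. As a hedged sketch (not taken from the log; paths are copied from the messages above and would need to match a local checkout), the failed build could be reproduced outside Jenkins by feeding the same SRPM back to mock:

```shell
# Sketch: re-run the failed nightly mock build locally. The SRPM path and
# result directory are the ones shown in the log above; adjust to your tree.
srpm=/root/nightlyrpm8OQzaT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
config=epel-7-x86_64
resultdir=/srv/glusterd2/nightly/master/7/x86_64

# mock drives the same chroot-init + systemd-nspawn + rpmbuild phases the CI
# log shows. Built as a string (dry run) so the command can be inspected.
cmd="mock -r $config --resultdir $resultdir --rebuild $srpm"
echo "$cmd"
```

Running the printed command (as root, with mock installed) repeats the `Start: rpmbuild ...` phase that failed; `build.log` under the result directory then holds the rpmbuild error the truncated log hides.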
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5621803607727317499.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 62cfe771
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 164     | n37.crusty | 172.19.2.37 | crusty  | 3206       | Deployed      | 62cfe771 | None   | None | 7              | x86_64       | 1         | 2360         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Mon Feb 18 15:52:12 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 18 Feb 2019 15:52:12 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #82
Message-ID: <468679206.10158.1550505132588.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 459.44 KB...]
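The post-build script above expands the SSID file with `$(cat ...)`; the shell trace shows `SSID_FILE=` ending up empty when `$WORKSPACE` is unset, in which case a bare `cat` would fall back to reading stdin. A minimal defensive variant (a sketch only: `release_nodes` is a hypothetical helper, and the real `cico` call is replaced by `echo` for a dry run):

```shell
#!/bin/bash
# Sketch of a more defensive node-release loop: guard against a missing or
# empty SSID file instead of letting `cat` read from stdin.
release_nodes() {
    local ssid_file=${1:-${WORKSPACE:-.}/cico-ssid}
    [ -s "$ssid_file" ] || { echo "no SSID file: $ssid_file" >&2; return 0; }
    while read -r ssid; do
        # Dry run: echo the command instead of invoking the real cico CLI.
        [ -n "$ssid" ] && echo cico -q node done "$ssid"
    done < "$ssid_file"
}

# Demo with a throwaway SSID file holding the session ID from the log above.
printf '62cfe771\n' > /tmp/demo-cico-ssid
release_nodes /tmp/demo-cico-ssid   # prints: cico -q node done 62cfe771
```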
changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Monday 18 February 2019 15:40:56 +0000 (0:00:12.033) 0:10:31.081 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Monday 18 February 2019 15:40:56 +0000 (0:00:00.092) 0:10:31.173 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Monday 18 February 2019 15:40:56 +0000 (0:00:00.147) 0:10:31.321 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Monday 18 February 2019 15:40:57 +0000 (0:00:00.751) 0:10:32.072 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create 
gcs-gd2-kube2.yml] *** Monday 18 February 2019 15:40:57 +0000 (0:00:00.145) 0:10:32.217 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Monday 18 February 2019 15:40:58 +0000 (0:00:00.731) 0:10:32.949 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Monday 18 February 2019 15:40:58 +0000 (0:00:00.133) 0:10:33.082 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Monday 18 February 2019 15:40:59 +0000 (0:00:00.726) 0:10:33.809 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Monday 18 February 2019 15:40:59 +0000 (0:00:00.606) 0:10:34.416 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Monday 18 February 2019 15:41:00 +0000 (0:00:00.668) 0:10:35.084 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Monday 18 February 2019 15:41:11 +0000 (0:00:10.869) 0:10:45.954 ******* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Monday 18 February 2019 15:41:11 +0000 (0:00:00.645) 0:10:46.600 ******* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Monday 18 February 2019 15:41:12 +0000 (0:00:00.479) 0:10:47.079 ******* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Monday 18 February 2019 15:41:12 +0000 (0:00:00.483) 0:10:47.562 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Monday 18 February 2019 15:41:13 +0000 (0:00:00.675) 0:10:48.237 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Monday 18 February 2019 15:41:14 +0000 (0:00:00.917) 0:10:49.155 ******* FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Monday 18 February 2019 15:41:20 +0000 (0:00:06.092) 0:10:55.248 ******* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Monday 18 February 2019 15:41:20 +0000 (0:00:00.144) 0:10:55.392 ******* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Monday 18 February 2019 15:42:15 +0000 (0:00:54.621) 0:11:50.013 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Monday 18 February 2019 15:42:16 +0000 (0:00:00.827) 0:11:50.841 ******* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 18 February 2019 15:42:16 +0000 (0:00:00.120) 0:11:50.962 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Monday 18 February 2019 15:42:16 +0000 (0:00:00.140) 0:11:51.102 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 18 February 2019 15:42:17 +0000 (0:00:00.668) 0:11:51.771 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Monday 18 February 2019 15:42:17 +0000 (0:00:00.157) 0:11:51.929 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 18 February 2019 15:42:18 +0000 (0:00:00.772) 0:11:52.702 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Monday 18 February 2019 15:42:18 +0000 (0:00:00.165) 0:11:52.867 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Monday 18 February 2019 15:42:19 +0000 (0:00:00.790) 0:11:53.658 ******* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Monday 18 February 2019 15:42:19 +0000 (0:00:00.604) 0:11:54.262 ******* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Monday 18 February 2019 15:42:19 +0000 (0:00:00.237) 0:11:54.500 ******* FAILED 
- RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "connection": "close", "content": "{\"errors\":[{\"code\":1,\"message\":\"context deadline exceeded\"}]}\n", "content_length": "62", "content_type": "application/json; charset=UTF-8", "date": "Mon, 18 Feb 2019 15:52:12 GMT", "json": {"errors": [{"code": 1, "message": "context deadline exceeded"}]}, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://10.233.56.165:24007/v1/peers", "x_gluster_cluster_id": "06858b40-1d48-4e89-87b7-71e689188071", "x_gluster_peer_id": "bc1db13c-8dd4-4567-a215-b6ccc4d65ced", "x_request_id": "0afef76f-293a-4caa-a20a-326dc979b874"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=421  changed=119  unreachable=0  failed=1
kube2                      : ok=320  changed=93   unreachable=0  failed=0
kube3                      : ok=283  changed=78   unreachable=0  failed=0

Monday 18 February 2019  15:52:12 +0000 (0:09:52.502)       0:21:47.003 *******
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 592.50s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 54.62s
download : container_download | download images for kubeadm config images -- 39.51s
kubernetes/master : kubeadm | Initialize first master ------------------ 28.09s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 26.27s
Install packages ------------------------------------------------------- 23.17s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 21.10s
Wait for host to be available ------------------------------------------ 16.57s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.16s
etcd : Gen_certs | Write etcd master certs ----------------------------- 12.93s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 12.03s
Extend root VG --------------------------------------------------------- 11.91s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.19s
etcd : reload etcd ----------------------------------------------------- 10.90s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.87s
container-engine/docker : Docker | pause while Docker restarts --------- 10.25s
kubernetes/node : install | Copy hyperkube binary from download dir ---- 10.23s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests --- 8.81s
gather facts from all instances ----------------------------------------- 8.01s
etcd : wait for etcd up ------------------------------------------------- 7.90s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should
be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
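The `Wait for glusterd2-cluster to become ready` task above probes the GD2 endpoint (`http://10.233.56.165:24007/v1/peers`) up to 50 times before failing with the HTTP 500 shown. The pattern reduces to a small poll-with-retries loop; in this sketch `probe` is a hypothetical stand-in for the real HTTP check (it succeeds on its third call) rather than an actual curl request:

```shell
#!/bin/bash
# Poll-with-retries sketch. In the playbook the probe is an HTTP GET that
# must return 200; here `probe` is a stand-in that succeeds on call 3.
attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

wait_ready() {
    local retries=$1 delay=$2
    while [ "$retries" -gt 0 ]; do
        if probe; then
            echo "ready after $attempts attempts"
            return 0
        fi
        retries=$((retries - 1))
        echo "FAILED - RETRYING ($retries retries left)"
        sleep "$delay"
    done
    echo "fatal: endpoint never became ready" >&2
    return 1
}

wait_ready 50 0   # prints two RETRYING lines, then "ready after 3 attempts"
```

When the probe never succeeds, as in the failed run above, the loop exhausts all retries and the caller sees only the final error; the per-attempt `FAILED - RETRYING (N retries left)` messages are what fill the log.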
Could not match :Build started : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Tue Feb 19 02:42:47 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 19 Feb 2019 02:42:47 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #279
In-Reply-To: <2033834035.10139.1550500967610.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2033834035.10139.1550500967610.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1078172268.22.1550544167989.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 36.24 KB...]
================================================================================
 Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               57 MB/s | 141 MB  00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                     1/46
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                  2/46
  Installing : apr-util-1.5.2-6.el7.x86_64                                 3/46
  Installing : libmpc-1.0.1-3.el7.x86_64                                   4/46
  Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/46
  Installing :
python-six-1.9.0-2.el7.noarch 6/46 Installing : cpp-4.8.5-36.el7.x86_64 7/46 Installing : elfutils-0.172-2.el7.x86_64 8/46 Installing : pakchois-0.4-10.el7.x86_64 9/46 Installing : perl-srpm-macros-1-8.el7.noarch 10/46 Installing : unzip-6.0-19.el7.x86_64 11/46 Installing : dwz-0.11-3.el7.x86_64 12/46 Installing : pigz-2.3.4-1.el7.x86_64 13/46 Installing : usermode-1.111-5.el7.x86_64 14/46 Installing : python2-distro-1.2.0-1.el7.noarch 15/46 Installing : patch-2.7.1-10.el7_5.x86_64 16/46 Installing : python-backports-1.0-8.el7.x86_64 17/46 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 18/46 Installing : python-urllib3-1.10.2-5.el7.noarch 19/46 Installing : python-requests-2.6.0-1.el7_1.noarch 20/46 Installing : libmodman-2.0.1-8.el7.x86_64 21/46 Installing : libproxy-0.4.11-11.el7.x86_64 22/46 Installing : gdb-7.6.1-114.el7.x86_64 23/46 Installing : perl-Thread-Queue-3.02-2.el7.noarch 24/46 Installing : golang-src-1.11.4-1.el7.noarch 25/46 Installing : bzip2-1.0.6-13.el7.x86_64 26/46 Installing : python2-pyroute2-0.4.13-1.el7.noarch 27/46 Installing : nettle-2.7.1-8.el7.x86_64 28/46 Installing : zip-3.0-11.el7.x86_64 29/46 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 30/46 Installing : mercurial-2.6.2-8.el7_4.x86_64 31/46 Installing : distribution-gpg-keys-1.28-1.el7.noarch 32/46 Installing : mock-core-configs-29.4-1.el7.noarch 33/46 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 34/46 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 36/46 Installing : gcc-4.8.5-36.el7.x86_64 37/46 Installing : trousers-0.3.14-2.el7.x86_64 38/46 Installing : gnutls-3.3.29-8.el7.x86_64 39/46 Installing : neon-0.30.0-3.el7.x86_64 40/46 Installing : subversion-libs-1.7.14-14.el7.x86_64 41/46 Installing : subversion-1.7.14-14.el7.x86_64 42/46 Installing : golang-1.11.4-1.el7.x86_64 43/46 Installing : golang-bin-1.11.4-1.el7.x86_64 44/46 Installing : mock-1.4.13-1.el7.noarch 45/46 
Installing : rpm-build-4.11.3-35.el7.x86_64 46/46 Verifying : trousers-0.3.14-2.el7.x86_64 1/46 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/46 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/46 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 4/46 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/46 Verifying : distribution-gpg-keys-1.28-1.el7.noarch 6/46 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/46 Verifying : zip-3.0-11.el7.x86_64 8/46 Verifying : nettle-2.7.1-8.el7.x86_64 9/46 Verifying : gnutls-3.3.29-8.el7.x86_64 10/46 Verifying : cpp-4.8.5-36.el7.x86_64 11/46 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 12/46 Verifying : golang-1.11.4-1.el7.x86_64 13/46 Verifying : golang-bin-1.11.4-1.el7.x86_64 14/46 Verifying : bzip2-1.0.6-13.el7.x86_64 15/46 Verifying : gcc-4.8.5-36.el7.x86_64 16/46 Verifying : golang-src-1.11.4-1.el7.noarch 17/46 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/46 Verifying : apr-1.4.8-3.el7_4.1.x86_64 19/46 Verifying : gdb-7.6.1-114.el7.x86_64 20/46 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/46 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/46 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 23/46 Verifying : libmodman-2.0.1-8.el7.x86_64 24/46 Verifying : mock-1.4.13-1.el7.noarch 25/46 Verifying : mpfr-3.1.1-4.el7.x86_64 26/46 Verifying : apr-util-1.5.2-6.el7.x86_64 27/46 Verifying : python-backports-1.0-8.el7.x86_64 28/46 Verifying : patch-2.7.1-10.el7_5.x86_64 29/46 Verifying : libmpc-1.0.1-3.el7.x86_64 30/46 Verifying : python2-distro-1.2.0-1.el7.noarch 31/46 Verifying : usermode-1.111-5.el7.x86_64 32/46 Verifying : python-six-1.9.0-2.el7.noarch 33/46 Verifying : libproxy-0.4.11-11.el7.x86_64 34/46 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Verifying : neon-0.30.0-3.el7.x86_64 36/46 Verifying : python-requests-2.6.0-1.el7_1.noarch 37/46 Verifying : pigz-2.3.4-1.el7.x86_64 38/46 Verifying : subversion-1.7.14-14.el7.x86_64 39/46 Verifying : 
python-ipaddress-1.0.16-2.el7.noarch 40/46 Verifying : dwz-0.11-3.el7.x86_64 41/46 Verifying : unzip-6.0-19.el7.x86_64 42/46 Verifying : perl-srpm-macros-1-8.el7.noarch 43/46 Verifying : mock-core-configs-29.4-1.el7.noarch 44/46 Verifying : pakchois-0.4-10.el7.x86_64 45/46 Verifying : elfutils-0.172-2.el7.x86_64 46/46 Installed: golang.x86_64 0:1.11.4-1.el7 mock.noarch 0:1.4.13-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.28-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.4-1.el7 golang-src.noarch 0:1.11.4-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1995 0 --:--:-- --:--:-- --:--:-- 2003 37 8513k 37 3195k 0 0 3420k 0 0:00:02 --:--:-- 0:00:02 3420k 94 8513k 94 8073k 0 0 4188k 0 0:00:02 0:00:01 0:00:01 4907k100 8513k 100 8513k 0 0 4274k 0 0:00:01 0:00:01 --:--:-- 5030k Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1881 0 --:--:-- --:--:-- --:--:-- 1888 6 38.3M 6 2548k 0 0 2762k 0 0:00:14 --:--:-- 0:00:14 2762k 19 38.3M 19 7852k 0 0 4106k 0 0:00:09 0:00:01 0:00:08 5357k 38 38.3M 38 14.7M 0 0 5200k 0 0:00:07 0:00:02 0:00:05 6330k 66 38.3M 66 25.6M 0 0 6705k 0 0:00:05 0:00:03 0:00:02 7922k100 38.3M 100 38.3M 0 0 8114k 0 0:00:04 0:00:04 --:--:-- 9374k Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 539 0 --:--:-- --:--:-- --:--:-- 540 0 0 0 620 0 0 1706 0 --:--:-- --:--:-- --:--:-- 1706 48 10.7M 48 5369k 0 0 8087k 0 0:00:01 --:--:-- 0:00:01 8087k100 10.7M 100 10.7M 0 0 14.8M 0 --:--:-- --:--:-- --:--:-- 93.0M ~/nightlyrpmCKOlSv/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmCKOlSv/glusterd2-v6.0-dev.139.gitf46dee6-vendor.tar.xz Created dist archive /root/nightlyrpmCKOlSv/glusterd2-v6.0-dev.139.gitf46dee6-vendor.tar.xz ~ ~/nightlyrpmCKOlSv ~ INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmCKOlSv/rpmbuild/SRPMS/glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmCKOlSv/rpmbuild/SRPMS/glusterd2-5.0-0.dev.139.gitf46dee6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 21 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 0f8c528b1fef4f94aab370d70a8a7601 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.GS4geS:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
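[Editor's note] The post-build task that follows runs cico-node-done-from-ansible.sh to hand the borrowed Duffy nodes back to the CentOS CI pool. A minimal standalone sketch of that release loop, assuming the cico CLI and the SSID file layout shown in the job trace; the `release_nodes` function wrapper is an addition for illustration:

```shell
# Sketch of the cico-node-done-from-ansible.sh post-build step:
# read session IDs from the SSID file the provisioning step wrote,
# and release each one back to the CentOS CI (Duffy) pool.
release_nodes() {
    # Default matches the Jenkins job: $WORKSPACE/cico-ssid
    ssid_file=${SSID_FILE:-${WORKSPACE:-.}/cico-ssid}
    for ssid in $(cat "${ssid_file}"); do
        # "cico node done <ssid>" returns the node(s) of that session
        cico -q node done "${ssid}"
    done
}
```

In the trace above, the job runs this with SSID_FILE empty, so the loop releases the single session recorded for the build (`+ cico -q node done 629c18d0`).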
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3310982205862527440.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 629c18d0
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 117     | n53.pufty | 172.19.3.117 | pufty   | 3208       | Deployed      | 629c18d0 | None   | None | 7              | x86_64       | 1         | 2520         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Tue Feb 19 04:22:11 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 19 Feb 2019 04:22:11 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #83
In-Reply-To: <468679206.10158.1550505132588.JavaMail.jenkins@jenkins.ci.centos.org>
References: <468679206.10158.1550505132588.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <498642545.33.1550550131143.JavaMail.jenkins@jenkins.ci.centos.org>
See

From ci at centos.org Wed Feb 20 11:25:45 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 20 Feb 2019 11:25:45 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #280
In-Reply-To:
<1078172268.22.1550544167989.JavaMail.jenkins@jenkins.ci.centos.org> References: <1078172268.22.1550544167989.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <630652179.255.1550661945999.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 36.21 KB...] ================================================================================ Install 3 Packages (+43 Dependent packages) Total download size: 141 M Installed size: 413 M Downloading packages: warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed -------------------------------------------------------------------------------- Total 97 MB/s | 141 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : apr-1.4.8-3.el7_4.1.x86_64 1/46 Installing : mpfr-3.1.1-4.el7.x86_64 2/46 Installing : libmpc-1.0.1-3.el7.x86_64 3/46 Installing : apr-util-1.5.2-6.el7.x86_64 4/46 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/46 Installing : python-six-1.9.0-2.el7.noarch 6/46 Installing : cpp-4.8.5-36.el7.x86_64 7/46 Installing : elfutils-0.172-2.el7.x86_64 8/46 Installing : pakchois-0.4-10.el7.x86_64 9/46 Installing : perl-srpm-macros-1-8.el7.noarch 10/46 Installing : unzip-6.0-19.el7.x86_64 11/46 Installing : dwz-0.11-3.el7.x86_64 12/46 Installing : zip-3.0-11.el7.x86_64 13/46 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/46 Installing : pigz-2.3.4-1.el7.x86_64 15/46 Installing : usermode-1.111-5.el7.x86_64 16/46 Installing : python2-distro-1.2.0-1.el7.noarch 17/46 
Installing : patch-2.7.1-10.el7_5.x86_64 18/46 Installing : python-backports-1.0-8.el7.x86_64 19/46 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/46 Installing : python-urllib3-1.10.2-5.el7.noarch 21/46 Installing : python-requests-2.6.0-1.el7_1.noarch 22/46 Installing : bzip2-1.0.6-13.el7.x86_64 23/46 Installing : libmodman-2.0.1-8.el7.x86_64 24/46 Installing : libproxy-0.4.11-11.el7.x86_64 25/46 Installing : gdb-7.6.1-114.el7.x86_64 26/46 Installing : perl-Thread-Queue-3.02-2.el7.noarch 27/46 Installing : golang-src-1.11.5-1.el7.noarch 28/46 Installing : python2-pyroute2-0.4.13-1.el7.noarch 29/46 Installing : nettle-2.7.1-8.el7.x86_64 30/46 Installing : mercurial-2.6.2-8.el7_4.x86_64 31/46 Installing : distribution-gpg-keys-1.28-1.el7.noarch 32/46 Installing : mock-core-configs-29.4-1.el7.noarch 33/46 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 34/46 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 36/46 Installing : gcc-4.8.5-36.el7.x86_64 37/46 Installing : trousers-0.3.14-2.el7.x86_64 38/46 Installing : gnutls-3.3.29-8.el7.x86_64 39/46 Installing : neon-0.30.0-3.el7.x86_64 40/46 Installing : subversion-libs-1.7.14-14.el7.x86_64 41/46 Installing : subversion-1.7.14-14.el7.x86_64 42/46 Installing : golang-1.11.5-1.el7.x86_64 43/46 Installing : golang-bin-1.11.5-1.el7.x86_64 44/46 Installing : mock-1.4.13-1.el7.noarch 45/46 Installing : rpm-build-4.11.3-35.el7.x86_64 46/46 Verifying : trousers-0.3.14-2.el7.x86_64 1/46 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/46 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/46 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 4/46 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/46 Verifying : distribution-gpg-keys-1.28-1.el7.noarch 6/46 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/46 Verifying : mpfr-3.1.1-4.el7.x86_64 8/46 Verifying : nettle-2.7.1-8.el7.x86_64 9/46 Verifying : gnutls-3.3.29-8.el7.x86_64 
10/46 Verifying : cpp-4.8.5-36.el7.x86_64 11/46 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 12/46 Verifying : golang-src-1.11.5-1.el7.noarch 13/46 Verifying : subversion-1.7.14-14.el7.x86_64 14/46 Verifying : gcc-4.8.5-36.el7.x86_64 15/46 Verifying : golang-1.11.5-1.el7.x86_64 16/46 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/46 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/46 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/46 Verifying : gdb-7.6.1-114.el7.x86_64 20/46 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/46 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/46 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 23/46 Verifying : libmodman-2.0.1-8.el7.x86_64 24/46 Verifying : mock-1.4.13-1.el7.noarch 25/46 Verifying : bzip2-1.0.6-13.el7.x86_64 26/46 Verifying : python-backports-1.0-8.el7.x86_64 27/46 Verifying : apr-util-1.5.2-6.el7.x86_64 28/46 Verifying : patch-2.7.1-10.el7_5.x86_64 29/46 Verifying : libmpc-1.0.1-3.el7.x86_64 30/46 Verifying : python2-distro-1.2.0-1.el7.noarch 31/46 Verifying : usermode-1.111-5.el7.x86_64 32/46 Verifying : python-six-1.9.0-2.el7.noarch 33/46 Verifying : libproxy-0.4.11-11.el7.x86_64 34/46 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Verifying : neon-0.30.0-3.el7.x86_64 36/46 Verifying : python-requests-2.6.0-1.el7_1.noarch 37/46 Verifying : pigz-2.3.4-1.el7.x86_64 38/46 Verifying : zip-3.0-11.el7.x86_64 39/46 Verifying : python-ipaddress-1.0.16-2.el7.noarch 40/46 Verifying : dwz-0.11-3.el7.x86_64 41/46 Verifying : unzip-6.0-19.el7.x86_64 42/46 Verifying : perl-srpm-macros-1-8.el7.noarch 43/46 Verifying : mock-core-configs-29.4-1.el7.noarch 44/46 Verifying : pakchois-0.4-10.el7.x86_64 45/46 Verifying : elfutils-0.172-2.el7.x86_64 46/46 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.13-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 
0:1.28-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2548 0 --:--:-- --:--:-- --:--:-- 2563 100 8513k 100 8513k 0 0 12.6M 0 --:--:-- --:--:-- --:--:-- 12.6M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2083 0 --:--:-- --:--:-- --:--:-- 2083 100 38.3M 100 38.3M 0 0 42.5M 0 --:--:-- --:--:-- --:--:-- 42.5M Installing etcd. 
Version: v3.3.9
[curl progress output elided]
~/nightlyrpmPI33TF/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmPI33TF/glusterd2-v6.0-dev.140.gita94bec2-vendor.tar.xz
Created dist archive /root/nightlyrpmPI33TF/glusterd2-v6.0-dev.140.gita94bec2-vendor.tar.xz
~ ~/nightlyrpmPI33TF ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmPI33TF/rpmbuild/SRPMS/glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmPI33TF/rpmbuild/SRPMS/glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 29 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 4b1e7893c92440a185767c9f79497a46 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.JUNezG:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7042439592400532725.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 4704c9d7
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 244     | n53.dusty | 172.19.2.117 | dusty   | 3216       | Deployed      | 4704c9d7 | None   | None | 7              | x86_64       | 1         | 2520         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Wed
Feb 20 12:05:31 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 20 Feb 2019 12:05:31 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #84 Message-ID: <226254180.266.1550664331955.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.61 KB...] changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 20 February 2019 11:53:34 +0000 (0:00:35.725) 0:18:18.188 **** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 20 February 2019 11:53:34 +0000 (0:00:00.290) 0:18:18.478 **** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 20 February 2019 11:53:35 +0000 (0:00:00.383) 0:18:18.861 **** changed: [kube1] TASK [GCS 
Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 20 February 2019 11:53:37 +0000 (0:00:02.038) 0:18:20.900 **** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 20 February 2019 11:53:37 +0000 (0:00:00.441) 0:18:21.342 **** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 20 February 2019 11:53:39 +0000 (0:00:02.126) 0:18:23.468 **** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 20 February 2019 11:53:40 +0000 (0:00:00.438) 0:18:23.906 **** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 20 February 2019 11:53:42 +0000 (0:00:02.122) 0:18:26.029 **** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 20 February 2019 11:53:43 +0000 (0:00:01.514) 0:18:27.544 **** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 20 February 2019 11:53:45 +0000 (0:00:01.592) 0:18:29.136 **** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
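[Editor's note] The "Wait for … to be available/ready" tasks in this playbook are HTTP readiness polls: hit the service endpoint, succeed on 200, otherwise retry (up to 50 times, as the countdowns in this log show). A shell sketch of the same pattern; `probe` is a hypothetical helper supplied by the caller (a real one might run `curl -s -o /dev/null -w '%{http_code}' "$url"`):

```shell
# wait_for_ready URL [RETRIES] [DELAY]
# Poll `probe URL` (caller-supplied, prints an HTTP status code) until it
# reports 200, mirroring the playbook's retries/delay loop.
# Returns 0 on success, 1 once all retries are exhausted.
wait_for_ready() {
    url=$1
    retries=${2:-50}
    delay=${3:-10}
    i=0
    while [ "${i}" -lt "${retries}" ]; do
        status=$(probe "${url}")
        if [ "${status}" = "200" ]; then
            return 0
        fi
        echo "FAILED - RETRYING: waiting for ${url} ($((retries - i - 1)) retries left)."
        i=$((i + 1))
        sleep "${delay}"
    done
    return 1
}
```

The fatal error later in this log ("Status code was -1 and not [200]") is what this loop reports after the last retry: the glusterd2 endpoint never answered.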
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Wednesday 20 February 2019 11:53:57 +0000 (0:00:12.311) 0:18:41.447 **** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Wednesday 20 February 2019 11:53:59 +0000 (0:00:01.633) 0:18:43.081 **** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Wednesday 20 February 2019 11:54:00 +0000 (0:00:01.249) 0:18:44.331 **** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Wednesday 20 February 2019 11:54:01 +0000 (0:00:01.143) 0:18:45.474 **** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Wednesday 20 February 2019 11:54:03 +0000 (0:00:01.504) 0:18:46.979 **** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Wednesday 20 February 2019 11:54:05 +0000 (0:00:01.829) 0:18:48.808 **** changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Wednesday 20 February 2019 11:54:06 +0000 (0:00:01.284) 0:18:50.093 **** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Wednesday 20 February 2019 11:54:06 +0000 (0:00:00.327) 0:18:50.421 **** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). 
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left). ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Wednesday 20 February 2019 11:55:42 +0000 (0:01:36.204) 0:20:26.626 **** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Wednesday 20 February 2019 11:55:44 +0000 (0:00:01.794) 0:20:28.421 **** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 20 February 2019 11:55:45 +0000 (0:00:00.211) 0:20:28.632 **** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Wednesday 20 February 2019 11:55:45 +0000 (0:00:00.372) 0:20:29.004 **** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 20 February 2019 11:55:47 +0000 (0:00:01.647) 0:20:30.652 **** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 20 February 2019 11:55:47 +0000 (0:00:00.351) 0:20:31.004 **** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 20 February 2019 11:55:49 +0000 (0:00:01.746) 0:20:32.751 **** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 20 February 2019 11:55:49 +0000 (0:00:00.351) 0:20:33.103 **** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 20 February 2019 11:55:51 +0000 (0:00:01.559) 0:20:34.662 **** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 20 February 2019 11:55:52 +0000 
(0:00:01.425)       0:20:36.088 ****
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Wednesday 20 February 2019  11:55:52 +0000 (0:00:00.341)       0:20:36.430 ****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left).
fatal: [kube1]: FAILED! => {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.20.94:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=420  changed=119  unreachable=0  failed=1
kube2                      : ok=321  changed=93   unreachable=0  failed=0
kube3                      : ok=283  changed=78   unreachable=0  failed=0

Wednesday 20 February 2019  12:05:31 +0000 (0:09:38.596)       0:30:15.026 ****
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.60s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 96.20s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 40.48s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.83s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.73s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.44s
download : container_download | download images for kubeadm config images -- 31.91s
Install packages ------------------------------------------------------- 30.47s
Wait for host to be available ------------------------------------------ 20.83s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.63s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.75s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.94s
gather facts from all instances ---------------------------------------- 13.65s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.52s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.33s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.31s
etcd : reload etcd ----------------------------------------------------- 11.91s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.50s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.33s
container-engine/docker : Docker | pause while Docker restarts --------- 10.40s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Thu Feb 21 00:40:03 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 21 Feb 2019 00:40:03 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #281
In-Reply-To: <630652179.255.1550661945999.JavaMail.jenkins@jenkins.ci.centos.org>
References: <630652179.255.1550661945999.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1509810118.24.1550709603381.JavaMail.jenkins@jenkins.ci.centos.org>
See
------------------------------------------
[...truncated 36.22 KB...]
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               56 MB/s | 141 MB  00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                  1/46
  Installing : mpfr-3.1.1-4.el7.x86_64                                     2/46
  Installing : libmpc-1.0.1-3.el7.x86_64                                   3/46
  Installing : apr-util-1.5.2-6.el7.x86_64                                 4/46
  Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/46
  Installing : python-six-1.9.0-2.el7.noarch                               6/46
  Installing : cpp-4.8.5-36.el7.x86_64                                     7/46
  Installing : elfutils-0.172-2.el7.x86_64                                 8/46
  Installing : pakchois-0.4-10.el7.x86_64                                  9/46
  Installing : perl-srpm-macros-1-8.el7.noarch                            10/46
  Installing : unzip-6.0-19.el7.x86_64                                    11/46
  Installing : dwz-0.11-3.el7.x86_64                                      12/46
  Installing : zip-3.0-11.el7.x86_64                                      13/46
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch               14/46
  Installing : pigz-2.3.4-1.el7.x86_64                                    15/46
  Installing : usermode-1.111-5.el7.x86_64                                16/46
  Installing : python2-distro-1.2.0-1.el7.noarch                          17/46
  Installing : patch-2.7.1-10.el7_5.x86_64                                18/46
  Installing : python-backports-1.0-8.el7.x86_64                          19/46
  Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   20/46
  Installing : python-urllib3-1.10.2-5.el7.noarch                         21/46
  Installing : python-requests-2.6.0-1.el7_1.noarch                       22/46
  Installing : bzip2-1.0.6-13.el7.x86_64                                  23/46
  Installing : libmodman-2.0.1-8.el7.x86_64                               24/46
  Installing : libproxy-0.4.11-11.el7.x86_64                              25/46
  Installing : gdb-7.6.1-114.el7.x86_64                                   26/46
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                        27/46
  Installing : golang-src-1.11.5-1.el7.noarch                             28/46
  Installing : python2-pyroute2-0.4.13-1.el7.noarch                       29/46
  Installing : nettle-2.7.1-8.el7.x86_64                                  30/46
  Installing : mercurial-2.6.2-8.el7_4.x86_64                             31/46
  Installing : distribution-gpg-keys-1.28-1.el7.noarch                    32/46
  Installing : mock-core-configs-29.4-1.el7.noarch                        33/46
  Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64                   34/46
  Installing : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Installing : glibc-devel-2.17-260.el7_6.3.x86_64                        36/46
  Installing : gcc-4.8.5-36.el7.x86_64                                    37/46
  Installing : trousers-0.3.14-2.el7.x86_64                               38/46
  Installing : gnutls-3.3.29-8.el7.x86_64                                 39/46
  Installing : neon-0.30.0-3.el7.x86_64                                   40/46
  Installing : subversion-libs-1.7.14-14.el7.x86_64                       41/46
  Installing : subversion-1.7.14-14.el7.x86_64                            42/46
  Installing : golang-1.11.5-1.el7.x86_64                                 43/46
  Installing : golang-bin-1.11.5-1.el7.x86_64                             44/46
  Installing : mock-1.4.13-1.el7.noarch                                   45/46
  Installing : rpm-build-4.11.3-35.el7.x86_64                             46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                1/46
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                        2/46
  Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch    3/46
  Verifying  : kernel-headers-3.10.0-957.5.1.el7.x86_64                    4/46
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                              5/46
  Verifying  : distribution-gpg-keys-1.28-1.el7.noarch                     6/46
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                              7/46
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                     8/46
  Verifying  : nettle-2.7.1-8.el7.x86_64                                   9/46
  Verifying  : gnutls-3.3.29-8.el7.x86_64                                 10/46
  Verifying  : cpp-4.8.5-36.el7.x86_64                                    11/46
  Verifying  : python2-pyroute2-0.4.13-1.el7.noarch                       12/46
  Verifying  : golang-src-1.11.5-1.el7.noarch                             13/46
  Verifying  : subversion-1.7.14-14.el7.x86_64                            14/46
  Verifying  : gcc-4.8.5-36.el7.x86_64                                    15/46
  Verifying  : golang-1.11.5-1.el7.x86_64                                 16/46
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                        17/46
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                                 18/46
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                             19/46
  Verifying  : gdb-7.6.1-114.el7.x86_64                                   20/46
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch               21/46
  Verifying  : python-urllib3-1.10.2-5.el7.noarch                         22/46
  Verifying  : glibc-devel-2.17-260.el7_6.3.x86_64                        23/46
  Verifying  : libmodman-2.0.1-8.el7.x86_64                               24/46
  Verifying  : mock-1.4.13-1.el7.noarch                                   25/46
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                  26/46
  Verifying  : python-backports-1.0-8.el7.x86_64                          27/46
  Verifying  : apr-util-1.5.2-6.el7.x86_64                                28/46
  Verifying  : patch-2.7.1-10.el7_5.x86_64                                29/46
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                  30/46
  Verifying  : python2-distro-1.2.0-1.el7.noarch                          31/46
  Verifying  : usermode-1.111-5.el7.x86_64                                32/46
  Verifying  : python-six-1.9.0-2.el7.noarch                              33/46
  Verifying  : libproxy-0.4.11-11.el7.x86_64                              34/46
  Verifying  : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Verifying  : neon-0.30.0-3.el7.x86_64                                   36/46
  Verifying  : python-requests-2.6.0-1.el7_1.noarch                       37/46
  Verifying  : pigz-2.3.4-1.el7.x86_64                                    38/46
  Verifying  : zip-3.0-11.el7.x86_64                                      39/46
  Verifying  : python-ipaddress-1.0.16-2.el7.noarch                       40/46
  Verifying  : dwz-0.11-3.el7.x86_64                                      41/46
  Verifying  : unzip-6.0-19.el7.x86_64                                    42/46
  Verifying  : perl-srpm-macros-1-8.el7.noarch                            43/46
  Verifying  : mock-core-configs-29.4-1.el7.noarch                        44/46
  Verifying  : pakchois-0.4-10.el7.x86_64                                 45/46
  Verifying  : elfutils-0.172-2.el7.x86_64                                46/46

Installed:
  golang.x86_64 0:1.11.5-1.el7
  mock.noarch 0:1.4.13-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7
  distribution-gpg-keys.noarch 0:1.28-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.3
  glibc-headers.x86_64 0:2.17-260.el7_6.3
  gnutls.x86_64 0:3.3.29-8.el7
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.5.1.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:29.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python-backports.x86_64 0:1.0-8.el7
  python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7
  python-requests.noarch 0:2.6.0-1.el7_1
  python-six.noarch 0:1.9.0-2.el7
  python-urllib3.noarch 0:1.10.2-5.el7
  python2-distro.noarch 0:1.2.0-1.el7
  python2-pyroute2.noarch 0:0.4.13-1.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep. Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   605    0   605    0     0   1835      0 --:--:-- --:--:-- --:--:--  1838
100 8513k  100 8513k    0     0  14.5M      0 --:--:-- --:--:-- --:--:-- 14.5M
Installing gometalinter. Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   627    0   627    0     0   2057      0 --:--:-- --:--:-- --:--:--  2062
 44 38.3M   44 17.2M    0     0  27.4M      0  0:00:01 --:--:--  0:00:01 27.4M
100 38.3M  100 38.3M    0     0  46.2M      0 --:--:-- --:--:-- --:--:--  105M
Installing etcd.
Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   153    0   153    0     0    547      0 --:--:-- --:--:-- --:--:--   548
  0     0    0   620    0     0   1747      0 --:--:-- --:--:-- --:--:--  1747
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 10.7M  100 10.7M    0     0  19.3M      0 --:--:-- --:--:-- --:--:-- 65.8M
~/nightlyrpmMunI25/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmMunI25/glusterd2-v6.0-dev.140.gita94bec2-vendor.tar.xz
Created dist archive /root/nightlyrpmMunI25/glusterd2-v6.0-dev.140.gita94bec2-vendor.tar.xz
~ ~/nightlyrpmMunI25 ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmMunI25/rpmbuild/SRPMS/glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmMunI25/rpmbuild/SRPMS/glusterd2-5.0-0.dev.140.gita94bec2.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: 
 # /usr/bin/systemd-nspawn -q -M 1d585d5ba7bd4888a1ac787e4d75c1f9 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.nt2yAa:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$  -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5383296170961245117.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done d355a677
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 109     | n45.pufty | 172.19.3.109 | pufty   | 3220       | Deployed      | d355a677 | None   | None | 7              | x86_64       | 1         | 2440         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Thu Feb 21 02:02:26 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 21 Feb 2019 02:02:26 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #85
In-Reply-To: <226254180.266.1550664331955.JavaMail.jenkins@jenkins.ci.centos.org>
References: <226254180.266.1550664331955.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <759387559.36.1550714546264.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 459.46 KB...]
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-fs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-virtblock-csi.yml)
changed: [kube1] => (item=gcs-storage-virtblock.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Thursday 21 February 2019  01:50:40 +0000 (0:00:35.112)       0:18:20.784 *****
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Thursday 21 February 2019  01:50:40 +0000 (0:00:00.286)       0:18:21.071
*****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Thursday 21 February 2019  01:50:41 +0000 (0:00:00.394)       0:18:21.465 *****
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Thursday 21 February 2019  01:50:43 +0000 (0:00:02.114)       0:18:23.580 *****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Thursday 21 February 2019  01:50:43 +0000 (0:00:00.439)       0:18:24.019 *****
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Thursday 21 February 2019  01:50:46 +0000 (0:00:02.157)       0:18:26.177 *****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Thursday 21 February 2019  01:50:46 +0000 (0:00:00.382)       0:18:26.560 *****
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Thursday 21 February 2019  01:50:48 +0000 (0:00:02.073)       0:18:28.633 *****
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Thursday 21 February 2019  01:50:50 +0000 (0:00:01.503)       0:18:30.136 *****
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Thursday 21 February 2019  01:50:51 +0000 (0:00:01.720)       0:18:31.857 *****
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Thursday 21 February 2019  01:51:03 +0000 (0:00:12.116)       0:18:43.973 *****
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Thursday 21 February 2019  01:51:05 +0000 (0:00:01.617)       0:18:45.591 *****
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Thursday 21 February 2019  01:51:06 +0000 (0:00:01.282)       0:18:46.874 *****
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Thursday 21 February 2019  01:51:08 +0000 (0:00:01.267)       0:18:48.142 *****
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Thursday 21 February 2019  01:51:09 +0000 (0:00:01.607)       0:18:49.749 *****
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Thursday 21 February 2019  01:51:11 +0000 (0:00:01.846)       0:18:51.595 *****
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Thursday 21 February 2019  01:51:12 +0000 (0:00:01.332)       0:18:52.929 *****
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Thursday 21 February 2019  01:51:13 +0000 (0:00:00.320)       0:18:53.249 *****
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Thursday 21 February 2019  01:52:37 +0000 (0:01:23.894)       0:20:17.144 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Thursday 21 February 2019  01:52:38 +0000 (0:00:01.503)       0:20:18.649 *****
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 21 February 2019  01:52:38 +0000 (0:00:00.230)       0:20:18.879 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Thursday 21 February 2019  01:52:39 +0000 (0:00:00.330)       0:20:19.210 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 21 February 2019  01:52:40 +0000 (0:00:01.592)       0:20:20.802 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Thursday 21 February 2019  01:52:41 +0000 (0:00:00.409)       0:20:21.211 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 21 February 2019  01:52:42 +0000 (0:00:01.630)       0:20:22.842 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Thursday 21 February 2019  01:52:43 +0000 (0:00:00.391)       0:20:23.233 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Thursday 21 February 2019  01:52:44 +0000 (0:00:01.763)       0:20:24.996 *****
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Thursday 21 February 2019  01:52:46 +0000 (0:00:01.463)       0:20:26.460 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Thursday 21 February 2019  01:52:46 +0000 (0:00:00.317)       0:20:26.777 *****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left).
fatal: [kube1]: FAILED!
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.18.106:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=421  changed=119  unreachable=0  failed=1
kube2                      : ok=320  changed=93   unreachable=0  failed=0
kube3                      : ok=283  changed=78   unreachable=0  failed=0

Thursday 21 February 2019  02:02:25 +0000 (0:09:39.139)       0:30:05.917 *****
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 579.14s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.90s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.99s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.42s
download : container_download | download images for kubeadm config images -- 37.02s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.11s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.41s
Install packages ------------------------------------------------------- 29.96s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.72s
Wait for host to be available ------------------------------------------ 20.58s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.10s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.95s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.93s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.16s
gather facts from all instances ---------------------------------------- 12.12s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.12s
etcd : reload etcd ----------------------------------------------------- 12.02s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.67s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.33s
container-engine/docker : Docker | pause while Docker restarts --------- 10.42s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Fri Feb 22 08:57:57 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 22 Feb 2019 08:57:57 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #282
In-Reply-To: <1509810118.24.1550709603381.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1509810118.24.1550709603381.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <310567636.244.1550825877560.JavaMail.jenkins@jenkins.ci.centos.org>

See 
Changes:

[kshlmster] Add Oshank as an admin for the glusterd2 job

------------------------------------------
[...truncated 36.23 KB...]
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               74 MB/s | 141 MB  00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                  1/46
  Installing : mpfr-3.1.1-4.el7.x86_64                                     2/46
  Installing : libmpc-1.0.1-3.el7.x86_64                                   3/46
  Installing : apr-util-1.5.2-6.el7.x86_64                                 4/46
  Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/46
  Installing : python-six-1.9.0-2.el7.noarch                               6/46
  Installing : cpp-4.8.5-36.el7.x86_64                                     7/46
  Installing : elfutils-0.172-2.el7.x86_64                                 8/46
  Installing : pakchois-0.4-10.el7.x86_64                                  9/46
  Installing : perl-srpm-macros-1-8.el7.noarch                            10/46
  Installing : unzip-6.0-19.el7.x86_64                                    11/46
  Installing : dwz-0.11-3.el7.x86_64                                      12/46
  Installing : zip-3.0-11.el7.x86_64                                      13/46
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch               14/46
  Installing : pigz-2.3.4-1.el7.x86_64                                    15/46
  Installing : usermode-1.111-5.el7.x86_64                                16/46
  Installing : python2-distro-1.2.0-1.el7.noarch                          17/46
  Installing : patch-2.7.1-10.el7_5.x86_64                                18/46
  Installing : python-backports-1.0-8.el7.x86_64                          19/46
  Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   20/46
  Installing : python-urllib3-1.10.2-5.el7.noarch                         21/46
  Installing : python-requests-2.6.0-1.el7_1.noarch                       22/46
  Installing : bzip2-1.0.6-13.el7.x86_64                                  23/46
  Installing : libmodman-2.0.1-8.el7.x86_64                               24/46
  Installing : libproxy-0.4.11-11.el7.x86_64                              25/46
  Installing : gdb-7.6.1-114.el7.x86_64                                   26/46
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                        27/46
  Installing : golang-src-1.11.5-1.el7.noarch                             28/46
  Installing : python2-pyroute2-0.4.13-1.el7.noarch                       29/46
  Installing : nettle-2.7.1-8.el7.x86_64                                  30/46
  Installing : mercurial-2.6.2-8.el7_4.x86_64                             31/46
  Installing : distribution-gpg-keys-1.28-1.el7.noarch                    32/46
  Installing : mock-core-configs-29.4-1.el7.noarch                        33/46
  Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64                   34/46
  Installing : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Installing : glibc-devel-2.17-260.el7_6.3.x86_64                        36/46
  Installing : gcc-4.8.5-36.el7.x86_64                                    37/46
  Installing : trousers-0.3.14-2.el7.x86_64                               38/46
  Installing : gnutls-3.3.29-8.el7.x86_64                                 39/46
  Installing : neon-0.30.0-3.el7.x86_64                                   40/46
  Installing : subversion-libs-1.7.14-14.el7.x86_64                       41/46
  Installing : subversion-1.7.14-14.el7.x86_64                            42/46
  Installing : golang-1.11.5-1.el7.x86_64                                 43/46
  Installing : golang-bin-1.11.5-1.el7.x86_64                             44/46
  Installing : mock-1.4.13-1.el7.noarch                                   45/46
  Installing : rpm-build-4.11.3-35.el7.x86_64                             46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                1/46
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                        2/46
  Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch    3/46
  Verifying  : kernel-headers-3.10.0-957.5.1.el7.x86_64                    4/46
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                              5/46
  Verifying  : distribution-gpg-keys-1.28-1.el7.noarch                     6/46
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                              7/46
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                     8/46
  Verifying  : nettle-2.7.1-8.el7.x86_64                                   9/46
  Verifying  : gnutls-3.3.29-8.el7.x86_64                                 10/46
  Verifying  : cpp-4.8.5-36.el7.x86_64                                    11/46
  Verifying  : python2-pyroute2-0.4.13-1.el7.noarch                       12/46
  Verifying  : golang-src-1.11.5-1.el7.noarch                             13/46
  Verifying  : subversion-1.7.14-14.el7.x86_64                            14/46
  Verifying  : gcc-4.8.5-36.el7.x86_64                                    15/46
  Verifying  : golang-1.11.5-1.el7.x86_64                                 16/46
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                        17/46
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                                 18/46
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                             19/46
  Verifying  : gdb-7.6.1-114.el7.x86_64                                   20/46
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch               21/46
  Verifying  : python-urllib3-1.10.2-5.el7.noarch                         22/46
  Verifying  : glibc-devel-2.17-260.el7_6.3.x86_64                        23/46
  Verifying  : libmodman-2.0.1-8.el7.x86_64                               24/46
  Verifying  : mock-1.4.13-1.el7.noarch                                   25/46
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                  26/46
  Verifying  : python-backports-1.0-8.el7.x86_64                          27/46
  Verifying  : apr-util-1.5.2-6.el7.x86_64                                28/46
  Verifying  : patch-2.7.1-10.el7_5.x86_64                                29/46
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                  30/46
  Verifying  : python2-distro-1.2.0-1.el7.noarch                          31/46
  Verifying  : usermode-1.111-5.el7.x86_64                                32/46
  Verifying  : python-six-1.9.0-2.el7.noarch                              33/46
  Verifying  : libproxy-0.4.11-11.el7.x86_64                              34/46
  Verifying  : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Verifying  : neon-0.30.0-3.el7.x86_64                                   36/46
  Verifying  : python-requests-2.6.0-1.el7_1.noarch                       37/46
  Verifying  : pigz-2.3.4-1.el7.x86_64                                    38/46
  Verifying  : zip-3.0-11.el7.x86_64                                      39/46
  Verifying  : python-ipaddress-1.0.16-2.el7.noarch                       40/46
  Verifying  : dwz-0.11-3.el7.x86_64                                      41/46
  Verifying  : unzip-6.0-19.el7.x86_64                                    42/46
  Verifying  : perl-srpm-macros-1-8.el7.noarch                            43/46
  Verifying  : mock-core-configs-29.4-1.el7.noarch                        44/46
  Verifying  : pakchois-0.4-10.el7.x86_64                                 45/46
  Verifying  : elfutils-0.172-2.el7.x86_64                                46/46

Installed:
  golang.x86_64 0:1.11.5-1.el7
  mock.noarch 0:1.4.13-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7
  distribution-gpg-keys.noarch 0:1.28-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.3
  glibc-headers.x86_64 0:2.17-260.el7_6.3
  gnutls.x86_64 0:3.3.29-8.el7
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 571 0 --:--:-- 0:00:01 --:--:-- 571 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0100 8513k 100 8513k 0 0 6557k 0 0:00:01 0:00:01 --:--:-- 44.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1900 0 --:--:-- --:--:-- --:--:-- 1905 32 38.3M 32 12.6M 0 0 16.3M 0 0:00:02 --:--:-- 0:00:02 16.3M 96 38.3M 96 37.0M 0 0 20.9M 0 0:00:01 0:00:01 --:--:-- 24.4M100 38.3M 100 38.3M 0 0 21.1M 0 0:00:01 0:00:01 --:--:-- 24.7M Installing etcd. 
Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 539 0 --:--:-- --:--:-- --:--:-- 540 0 0 0 620 0 0 1604 0 --:--:-- --:--:-- --:--:-- 1604 15 10.7M 15 1746k 0 0 3450k 0 0:00:03 --:--:-- 0:00:03 3450k100 10.7M 100 10.7M 0 0 18.1M 0 --:--:-- --:--:-- --:--:-- 106M ~/nightlyrpmISSVih/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmISSVih/glusterd2-v6.0-dev.143.git9093acb-vendor.tar.xz Created dist archive /root/nightlyrpmISSVih/glusterd2-v6.0-dev.143.git9093acb-vendor.tar.xz ~ ~/nightlyrpmISSVih ~ INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmISSVih/rpmbuild/SRPMS/glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.13 INFO: Mock Version: 1.4.13 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmISSVih/rpmbuild/SRPMS/glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') 
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M f07388a01b52442cb6e51d510fc5ba96 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.qAkJ4E:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3289904191593357705.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 693a0df7
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 306 | n51.gusty | 172.19.2.179 | gusty | 3229 | Deployed | 693a0df7 | None | None | 7 | x86_64 | 1 | 2500 | None |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Fri
Feb 22 10:18:35 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 22 Feb 2019 10:18:35 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #86
In-Reply-To: <759387559.36.1550714546264.JavaMail.jenkins@jenkins.ci.centos.org>
References: <759387559.36.1550714546264.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1518525393.259.1550830715737.JavaMail.jenkins@jenkins.ci.centos.org>

See

From ci at centos.org Sat Feb 23 23:23:07 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 23 Feb 2019 23:23:07 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #283
In-Reply-To: <310567636.244.1550825877560.JavaMail.jenkins@jenkins.ci.centos.org>
References: <310567636.244.1550825877560.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1605126916.475.1550964187868.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 36.21 KB...]
================================================================================
Install 3 Packages (+43 Dependent packages)
Total download size: 141 M
Installed size: 413 M
Downloading packages:
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
--------------------------------------------------------------------------------
Total 92 MB/s | 141 MB 00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package : epel-release-7-11.noarch (@extras)
 From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : apr-1.4.8-3.el7_4.1.x86_64 1/46
 Installing : mpfr-3.1.1-4.el7.x86_64 2/46
 Installing : libmpc-1.0.1-3.el7.x86_64 3/46
Installing : apr-util-1.5.2-6.el7.x86_64 4/46 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/46 Installing : python-six-1.9.0-2.el7.noarch 6/46 Installing : cpp-4.8.5-36.el7.x86_64 7/46 Installing : elfutils-0.172-2.el7.x86_64 8/46 Installing : pakchois-0.4-10.el7.x86_64 9/46 Installing : perl-srpm-macros-1-8.el7.noarch 10/46 Installing : unzip-6.0-19.el7.x86_64 11/46 Installing : dwz-0.11-3.el7.x86_64 12/46 Installing : zip-3.0-11.el7.x86_64 13/46 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/46 Installing : pigz-2.3.4-1.el7.x86_64 15/46 Installing : usermode-1.111-5.el7.x86_64 16/46 Installing : python2-distro-1.2.0-1.el7.noarch 17/46 Installing : patch-2.7.1-10.el7_5.x86_64 18/46 Installing : python-backports-1.0-8.el7.x86_64 19/46 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/46 Installing : python-urllib3-1.10.2-5.el7.noarch 21/46 Installing : python-requests-2.6.0-1.el7_1.noarch 22/46 Installing : bzip2-1.0.6-13.el7.x86_64 23/46 Installing : libmodman-2.0.1-8.el7.x86_64 24/46 Installing : libproxy-0.4.11-11.el7.x86_64 25/46 Installing : gdb-7.6.1-114.el7.x86_64 26/46 Installing : perl-Thread-Queue-3.02-2.el7.noarch 27/46 Installing : golang-src-1.11.5-1.el7.noarch 28/46 Installing : python2-pyroute2-0.4.13-1.el7.noarch 29/46 Installing : nettle-2.7.1-8.el7.x86_64 30/46 Installing : mercurial-2.6.2-8.el7_4.x86_64 31/46 Installing : distribution-gpg-keys-1.28-1.el7.noarch 32/46 Installing : mock-core-configs-29.4-1.el7.noarch 33/46 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 34/46 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 36/46 Installing : gcc-4.8.5-36.el7.x86_64 37/46 Installing : trousers-0.3.14-2.el7.x86_64 38/46 Installing : gnutls-3.3.29-8.el7.x86_64 39/46 Installing : neon-0.30.0-3.el7.x86_64 40/46 Installing : subversion-libs-1.7.14-14.el7.x86_64 41/46 Installing : subversion-1.7.14-14.el7.x86_64 42/46 Installing : 
golang-1.11.5-1.el7.x86_64 43/46 Installing : golang-bin-1.11.5-1.el7.x86_64 44/46 Installing : mock-1.4.13-1.el7.noarch 45/46 Installing : rpm-build-4.11.3-35.el7.x86_64 46/46 Verifying : trousers-0.3.14-2.el7.x86_64 1/46 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/46 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/46 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 4/46 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/46 Verifying : distribution-gpg-keys-1.28-1.el7.noarch 6/46 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/46 Verifying : mpfr-3.1.1-4.el7.x86_64 8/46 Verifying : nettle-2.7.1-8.el7.x86_64 9/46 Verifying : gnutls-3.3.29-8.el7.x86_64 10/46 Verifying : cpp-4.8.5-36.el7.x86_64 11/46 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 12/46 Verifying : golang-src-1.11.5-1.el7.noarch 13/46 Verifying : subversion-1.7.14-14.el7.x86_64 14/46 Verifying : gcc-4.8.5-36.el7.x86_64 15/46 Verifying : golang-1.11.5-1.el7.x86_64 16/46 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/46 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/46 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/46 Verifying : gdb-7.6.1-114.el7.x86_64 20/46 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/46 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/46 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 23/46 Verifying : libmodman-2.0.1-8.el7.x86_64 24/46 Verifying : mock-1.4.13-1.el7.noarch 25/46 Verifying : bzip2-1.0.6-13.el7.x86_64 26/46 Verifying : python-backports-1.0-8.el7.x86_64 27/46 Verifying : apr-util-1.5.2-6.el7.x86_64 28/46 Verifying : patch-2.7.1-10.el7_5.x86_64 29/46 Verifying : libmpc-1.0.1-3.el7.x86_64 30/46 Verifying : python2-distro-1.2.0-1.el7.noarch 31/46 Verifying : usermode-1.111-5.el7.x86_64 32/46 Verifying : python-six-1.9.0-2.el7.noarch 33/46 Verifying : libproxy-0.4.11-11.el7.x86_64 34/46 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Verifying : neon-0.30.0-3.el7.x86_64 36/46 Verifying : python-requests-2.6.0-1.el7_1.noarch 
37/46 Verifying : pigz-2.3.4-1.el7.x86_64 38/46 Verifying : zip-3.0-11.el7.x86_64 39/46 Verifying : python-ipaddress-1.0.16-2.el7.noarch 40/46 Verifying : dwz-0.11-3.el7.x86_64 41/46 Verifying : unzip-6.0-19.el7.x86_64 42/46 Verifying : perl-srpm-macros-1-8.el7.noarch 43/46 Verifying : mock-core-configs-29.4-1.el7.noarch 44/46 Verifying : pakchois-0.4-10.el7.x86_64 45/46 Verifying : elfutils-0.172-2.el7.x86_64 46/46 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.13-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.28-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2982 0 --:--:-- --:--:-- --:--:-- 2980 100 8513k 100 8513k 0 0 16.6M 0 --:--:-- --:--:-- --:--:-- 16.6M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3546 0 --:--:-- --:--:-- --:--:-- 3562 50 38.3M 50 19.4M 0 0 36.5M 0 0:00:01 --:--:-- 0:00:01 36.5M100 38.3M 100 38.3M 0 0 51.3M 0 --:--:-- --:--:-- --:--:-- 88.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 909 0 --:--:-- --:--:-- --:--:-- 910 0 0 0 620 0 0 2423 0 --:--:-- --:--:-- --:--:-- 2423 100 10.7M 100 10.7M 0 0 19.7M 0 --:--:-- --:--:-- --:--:-- 19.7M ~/nightlyrpm3MfpP0/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm3MfpP0/glusterd2-v6.0-dev.143.git9093acb-vendor.tar.xz Created dist archive /root/nightlyrpm3MfpP0/glusterd2-v6.0-dev.143.git9093acb-vendor.tar.xz ~ ~/nightlyrpm3MfpP0 ~ INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm3MfpP0/rpmbuild/SRPMS/glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.13 INFO: Mock Version: 1.4.13 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm3MfpP0/rpmbuild/SRPMS/glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 31 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 1a88742d82f14faf89a02699a9b423bc -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.nQtWCL:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1961887856477412236.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done e4435fa6
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 206 | n15.dusty | 172.19.2.79 | dusty | 3215 | Deployed | e4435fa6 | None | None | 7 | x86_64 | 1 | 2140 | None |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Mon Feb 25 12:35:55 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 25 Feb 2019 12:35:55 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #284
In-Reply-To: <1605126916.475.1550964187868.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1605126916.475.1550964187868.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <215656283.690.1551098155158.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 36.60 KB...]
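The post-build task in these logs releases the reserved Duffy node by looping over session IDs recorded in a workspace file and calling the `cico` client once per ID. A self-contained sketch of that pattern; here the workspace is a temp directory, the session ID is a stand-in, and `cico` itself is stubbed with `echo` so the sketch runs without the CentOS CI client installed:

```shell
# Sketch of the cico-node-done-from-ansible.sh pattern from the logs.
# Assumptions: $WORKSPACE and the session ID below are stand-ins, and
# the real "cico" command is replaced by "echo" for illustration.
WORKSPACE=${WORKSPACE:-$(mktemp -d)}
# Default-expansion idiom used by the real script: fall back to
# $WORKSPACE/cico-ssid when SSID_FILE is unset.
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

# Stand-in for the session ID written when the node was requested.
printf '693a0df7\n' > "$SSID_FILE"

# Release every session listed in the SSID file.
for ssid in $(cat "$SSID_FILE")
do
    echo cico -q node done "$ssid"
done
```

Because the file is read at post-build time, the nodes are returned even when the build step itself fails, which is why the table of released hosts appears after every failure above.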
================================================================================ Install 3 Packages (+43 Dependent packages) Total download size: 141 M Installed size: 413 M Downloading packages: warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed -------------------------------------------------------------------------------- Total 93 MB/s | 141 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : apr-1.4.8-3.el7_4.1.x86_64 1/46 Installing : mpfr-3.1.1-4.el7.x86_64 2/46 Installing : libmpc-1.0.1-3.el7.x86_64 3/46 Installing : apr-util-1.5.2-6.el7.x86_64 4/46 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/46 Installing : python-six-1.9.0-2.el7.noarch 6/46 Installing : cpp-4.8.5-36.el7.x86_64 7/46 Installing : elfutils-0.172-2.el7.x86_64 8/46 Installing : pakchois-0.4-10.el7.x86_64 9/46 Installing : perl-srpm-macros-1-8.el7.noarch 10/46 Installing : unzip-6.0-19.el7.x86_64 11/46 Installing : dwz-0.11-3.el7.x86_64 12/46 Installing : zip-3.0-11.el7.x86_64 13/46 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/46 Installing : pigz-2.3.4-1.el7.x86_64 15/46 Installing : usermode-1.111-5.el7.x86_64 16/46 Installing : python2-distro-1.2.0-1.el7.noarch 17/46 Installing : patch-2.7.1-10.el7_5.x86_64 18/46 Installing : python-backports-1.0-8.el7.x86_64 19/46 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/46 Installing : python-urllib3-1.10.2-5.el7.noarch 21/46 Installing : python-requests-2.6.0-1.el7_1.noarch 22/46 Installing : 
bzip2-1.0.6-13.el7.x86_64 23/46 Installing : libmodman-2.0.1-8.el7.x86_64 24/46 Installing : libproxy-0.4.11-11.el7.x86_64 25/46 Installing : gdb-7.6.1-114.el7.x86_64 26/46 Installing : perl-Thread-Queue-3.02-2.el7.noarch 27/46 Installing : golang-src-1.11.5-1.el7.noarch 28/46 Installing : python2-pyroute2-0.4.13-1.el7.noarch 29/46 Installing : nettle-2.7.1-8.el7.x86_64 30/46 Installing : mercurial-2.6.2-8.el7_4.x86_64 31/46 Installing : distribution-gpg-keys-1.28-1.el7.noarch 32/46 Installing : mock-core-configs-29.4-1.el7.noarch 33/46 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 34/46 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 36/46 Installing : gcc-4.8.5-36.el7.x86_64 37/46 Installing : trousers-0.3.14-2.el7.x86_64 38/46 Installing : gnutls-3.3.29-8.el7.x86_64 39/46 Installing : neon-0.30.0-3.el7.x86_64 40/46 Installing : subversion-libs-1.7.14-14.el7.x86_64 41/46 Installing : subversion-1.7.14-14.el7.x86_64 42/46 Installing : golang-1.11.5-1.el7.x86_64 43/46 Installing : golang-bin-1.11.5-1.el7.x86_64 44/46 Installing : mock-1.4.13-1.el7.noarch 45/46 Installing : rpm-build-4.11.3-35.el7.x86_64 46/46 Verifying : trousers-0.3.14-2.el7.x86_64 1/46 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/46 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/46 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 4/46 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/46 Verifying : distribution-gpg-keys-1.28-1.el7.noarch 6/46 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/46 Verifying : mpfr-3.1.1-4.el7.x86_64 8/46 Verifying : nettle-2.7.1-8.el7.x86_64 9/46 Verifying : gnutls-3.3.29-8.el7.x86_64 10/46 Verifying : cpp-4.8.5-36.el7.x86_64 11/46 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 12/46 Verifying : golang-src-1.11.5-1.el7.noarch 13/46 Verifying : subversion-1.7.14-14.el7.x86_64 14/46 Verifying : gcc-4.8.5-36.el7.x86_64 15/46 Verifying : golang-1.11.5-1.el7.x86_64 16/46 Verifying 
: perl-Thread-Queue-3.02-2.el7.noarch 17/46 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/46 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/46 Verifying : gdb-7.6.1-114.el7.x86_64 20/46 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/46 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/46 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 23/46 Verifying : libmodman-2.0.1-8.el7.x86_64 24/46 Verifying : mock-1.4.13-1.el7.noarch 25/46 Verifying : bzip2-1.0.6-13.el7.x86_64 26/46 Verifying : python-backports-1.0-8.el7.x86_64 27/46 Verifying : apr-util-1.5.2-6.el7.x86_64 28/46 Verifying : patch-2.7.1-10.el7_5.x86_64 29/46 Verifying : libmpc-1.0.1-3.el7.x86_64 30/46 Verifying : python2-distro-1.2.0-1.el7.noarch 31/46 Verifying : usermode-1.111-5.el7.x86_64 32/46 Verifying : python-six-1.9.0-2.el7.noarch 33/46 Verifying : libproxy-0.4.11-11.el7.x86_64 34/46 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Verifying : neon-0.30.0-3.el7.x86_64 36/46 Verifying : python-requests-2.6.0-1.el7_1.noarch 37/46 Verifying : pigz-2.3.4-1.el7.x86_64 38/46 Verifying : zip-3.0-11.el7.x86_64 39/46 Verifying : python-ipaddress-1.0.16-2.el7.noarch 40/46 Verifying : dwz-0.11-3.el7.x86_64 41/46 Verifying : unzip-6.0-19.el7.x86_64 42/46 Verifying : perl-srpm-macros-1-8.el7.noarch 43/46 Verifying : mock-core-configs-29.4-1.el7.noarch 44/46 Verifying : pakchois-0.4-10.el7.x86_64 45/46 Verifying : elfutils-0.172-2.el7.x86_64 46/46 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.13-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.28-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 
kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2740 0 --:--:-- --:--:-- --:--:-- 2750 100 8513k 100 8513k 0 0 14.9M 0 --:--:-- --:--:-- --:--:-- 14.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1839 0 --:--:-- --:--:-- --:--:-- 1838 100 38.3M 100 38.3M 0 0 44.5M 0 --:--:-- --:--:-- --:--:-- 44.5M Installing etcd. 
Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 1049 0 --:--:-- --:--:-- --:--:-- 1055 0 0 0 620 0 0 2674 0 --:--:-- --:--:-- --:--:-- 2674 100 10.7M 100 10.7M 0 0 24.3M 0 --:--:-- --:--:-- --:--:-- 24.3M ~/nightlyrpmV4AtCP/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmV4AtCP/glusterd2-v6.0-dev.143.git9093acb-vendor.tar.xz Created dist archive /root/nightlyrpmV4AtCP/glusterd2-v6.0-dev.143.git9093acb-vendor.tar.xz ~ ~/nightlyrpmV4AtCP ~ INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmV4AtCP/rpmbuild/SRPMS/glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.13 INFO: Mock Version: 1.4.13 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmV4AtCP/rpmbuild/SRPMS/glusterd2-5.0-0.dev.143.git9093acb.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 31 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command 
failed:
 # /usr/bin/systemd-nspawn -q -M 7a76935499b54a20b1837cc8398385f7 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.iHpvmd:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins742417179037599050.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done d5c92ffe
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 240 | n49.dusty | 172.19.2.113 | dusty | 3232 | Deployed | d5c92ffe | None | None | 7 | x86_64 | 1 | 2480 | None |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Mon Feb 25 13:16:38 2019
From: ci at
centos.org (ci at centos.org)
Date: Mon, 25 Feb 2019 13:16:38 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #88
Message-ID: <383625342.697.1551100598283.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 456.98 KB...]
Monday 25 February 2019  13:03:43 +0000 (0:00:00.241)       0:17:37.735 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration To Secret] ***
Monday 25 February 2019  13:03:43 +0000 (0:00:00.192)       0:17:37.928 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration] ***
Monday 25 February 2019  13:03:43 +0000 (0:00:00.214)       0:17:38.143 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Download Controller Manifest] ***
Monday 25 February 2019  13:03:43 +0000 (0:00:00.232)       0:17:38.375 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Controller Manifest] ***
Monday 25 February 2019  13:03:44 +0000 (0:00:00.269)       0:17:38.645 *******

PLAY [Fetch config] ************************************************************

TASK [Retrieve kubectl config] *************************************************
Monday 25 February 2019  13:03:44 +0000 (0:00:00.216)       0:17:38.861 *******
changed: [kube1]

PLAY [Copy kube config for vagrant user] ***************************************

TASK [Create a directory] ******************************************************
Monday 25 February 2019  13:03:45 +0000 (0:00:01.192)       0:17:40.054 *******
changed: [kube1]
changed: [kube2]

TASK [Copy kube config for vagrant user] ***************************************
Monday 25 February 2019  13:03:47 +0000 (0:00:01.882)       0:17:41.937 *******
changed: [kube1]
changed: [kube2]

PLAY [Deploy GCS] **************************************************************

TASK [GCS Pre | Cluster ID | Generate a UUID] **********************************
Monday 25 February 2019  13:03:48 +0000 (0:00:01.276)       0:17:43.213 *******
changed: [kube1]

TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] ***********************
Monday 25 February 2019  13:03:49 +0000 (0:00:01.180)       0:17:44.394 *******
ok: [kube1]

TASK [GCS Pre | Manifests directory | Create a temporary directory] ************
Monday 25 February 2019  13:03:50 +0000 (0:00:00.528)       0:17:44.922 *******
changed: [kube1]

TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ******************
Monday 25 February 2019  13:03:51 +0000 (0:00:01.308)       0:17:46.230 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Sync GCS manifests] ********************************
Monday 25 February 2019  13:03:52 +0000 (0:00:00.570)       0:17:46.800 *******
changed: [kube1] => (item=gcs-namespace.yml)
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-fs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-virtblock-csi.yml)
changed: [kube1] => (item=gcs-storage-virtblock.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Monday 25 February 2019  13:04:28 +0000 (0:00:36.149)       0:18:22.950 *******
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Monday 25 February 2019  13:04:28 +0000 (0:00:00.290)       0:18:23.240 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Monday 25 February 2019  13:04:29 +0000 (0:00:00.426)       0:18:23.667 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Monday 25 February 2019  13:04:31 +0000 (0:00:02.120)       0:18:25.788 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Monday 25 February 2019  13:04:31 +0000 (0:00:00.423)       0:18:26.212 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Monday 25 February 2019  13:04:33 +0000 (0:00:02.181)       0:18:28.393 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Monday 25 February 2019  13:04:34 +0000 (0:00:00.414)       0:18:28.808 *******
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Monday 25 February 2019  13:04:36 +0000 (0:00:02.033)       0:18:30.841 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Monday 25 February 2019  13:04:38 +0000 (0:00:01.606)       0:18:32.448 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Monday 25 February 2019  13:04:39 +0000 (0:00:01.660)       0:18:34.108 *******
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Monday 25 February 2019  13:04:51 +0000 (0:00:12.162)       0:18:46.271 *******
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Monday 25 February 2019  13:04:53 +0000 (0:00:01.578)       0:18:47.849 *******
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Monday 25 February 2019  13:04:54 +0000 (0:00:01.362)       0:18:49.212 *******
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Monday 25 February 2019  13:04:56 +0000 (0:00:01.332)       0:18:50.544 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Monday 25 February 2019  13:04:57 +0000 (0:00:01.511)       0:18:52.056 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Monday 25 February 2019  13:04:59 +0000 (0:00:01.797)       0:18:53.853 *******
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Monday 25 February 2019  13:05:00 +0000 (0:00:01.203)       0:18:55.057 *******
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Monday 25 February 2019  13:05:00 +0000 (0:00:00.362)       0:18:55.420 *******
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (42 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (41 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (40 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (39 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (38 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (37 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (36 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (35 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (34 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (33 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (32 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (31 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (30 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (29 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (28 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (27 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (26 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (25 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (24 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (23 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (22 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (21 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (20 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (19 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (18 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (17 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (16 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (15 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (14 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (13 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (12 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (11 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (10 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (9 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (8 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (7 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (6 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (5 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (4 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (3 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (2 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (1 retries left).
fatal: [kube1]: FAILED! => {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.24.34:2379/v2/members"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=408  changed=118  unreachable=0    failed=1
kube2                      : ok=320  changed=93   unreachable=0    failed=0
kube3                      : ok=283  changed=78   unreachable=0    failed=0

Monday 25 February 2019  13:16:37 +0000 (0:11:36.809)       0:30:32.230 *******
===============================================================================
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 696.81s
kubernetes/master : kubeadm | Initialize first master ------------------ 38.94s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.37s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 36.15s
download : container_download | download images for kubeadm config images -- 33.58s
etcd : Gen_certs | Write etcd master certs ----------------------------- 32.96s
Install packages ------------------------------------------------------- 30.87s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.00s
Wait for host to be available ------------------------------------------ 20.81s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.67s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.22s
gather facts from all instances ---------------------------------------- 13.90s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.64s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.67s
etcd : reload etcd ----------------------------------------------------- 12.19s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.16s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.23s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.02s
container-engine/docker : Docker | pause while Docker restarts --------- 10.39s
download : file_download | Download item -------------------------------- 9.96s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above.
Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Wed Feb 27 03:24:07 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 27 Feb 2019 03:24:07 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #285
In-Reply-To: <215656283.690.1551098155158.JavaMail.jenkins@jenkins.ci.centos.org>
References: <215656283.690.1551098155158.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1628859003.894.1551237847321.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 36.23 KB...]
================================================================================
 Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                               42 MB/s | 141 MB  00:03
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                 1/46
  Installing : mpfr-3.1.1-4.el7.x86_64                                    2/46
  Installing : libmpc-1.0.1-3.el7.x86_64                                  3/46
  Installing : apr-util-1.5.2-6.el7.x86_64                                4/46
  Installing : python-ipaddress-1.0.16-2.el7.noarch                       5/46
  Installing :
python-six-1.9.0-2.el7.noarch 6/46 Installing : cpp-4.8.5-36.el7.x86_64 7/46 Installing : elfutils-0.172-2.el7.x86_64 8/46 Installing : pakchois-0.4-10.el7.x86_64 9/46 Installing : perl-srpm-macros-1-8.el7.noarch 10/46 Installing : unzip-6.0-19.el7.x86_64 11/46 Installing : dwz-0.11-3.el7.x86_64 12/46 Installing : zip-3.0-11.el7.x86_64 13/46 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 14/46 Installing : pigz-2.3.4-1.el7.x86_64 15/46 Installing : usermode-1.111-5.el7.x86_64 16/46 Installing : python2-distro-1.2.0-1.el7.noarch 17/46 Installing : patch-2.7.1-10.el7_5.x86_64 18/46 Installing : python-backports-1.0-8.el7.x86_64 19/46 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/46 Installing : python-urllib3-1.10.2-5.el7.noarch 21/46 Installing : python-requests-2.6.0-1.el7_1.noarch 22/46 Installing : bzip2-1.0.6-13.el7.x86_64 23/46 Installing : libmodman-2.0.1-8.el7.x86_64 24/46 Installing : libproxy-0.4.11-11.el7.x86_64 25/46 Installing : gdb-7.6.1-114.el7.x86_64 26/46 Installing : perl-Thread-Queue-3.02-2.el7.noarch 27/46 Installing : golang-src-1.11.5-1.el7.noarch 28/46 Installing : python2-pyroute2-0.4.13-1.el7.noarch 29/46 Installing : nettle-2.7.1-8.el7.x86_64 30/46 Installing : mercurial-2.6.2-8.el7_4.x86_64 31/46 Installing : distribution-gpg-keys-1.28-1.el7.noarch 32/46 Installing : mock-core-configs-29.4-1.el7.noarch 33/46 Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64 34/46 Installing : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Installing : glibc-devel-2.17-260.el7_6.3.x86_64 36/46 Installing : gcc-4.8.5-36.el7.x86_64 37/46 Installing : trousers-0.3.14-2.el7.x86_64 38/46 Installing : gnutls-3.3.29-8.el7.x86_64 39/46 Installing : neon-0.30.0-3.el7.x86_64 40/46 Installing : subversion-libs-1.7.14-14.el7.x86_64 41/46 Installing : subversion-1.7.14-14.el7.x86_64 42/46 Installing : golang-1.11.5-1.el7.x86_64 43/46 Installing : golang-bin-1.11.5-1.el7.x86_64 44/46 Installing : mock-1.4.13-1.el7.noarch 45/46 
Installing : rpm-build-4.11.3-35.el7.x86_64 46/46 Verifying : trousers-0.3.14-2.el7.x86_64 1/46 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/46 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/46 Verifying : kernel-headers-3.10.0-957.5.1.el7.x86_64 4/46 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/46 Verifying : distribution-gpg-keys-1.28-1.el7.noarch 6/46 Verifying : mercurial-2.6.2-8.el7_4.x86_64 7/46 Verifying : mpfr-3.1.1-4.el7.x86_64 8/46 Verifying : nettle-2.7.1-8.el7.x86_64 9/46 Verifying : gnutls-3.3.29-8.el7.x86_64 10/46 Verifying : cpp-4.8.5-36.el7.x86_64 11/46 Verifying : python2-pyroute2-0.4.13-1.el7.noarch 12/46 Verifying : golang-src-1.11.5-1.el7.noarch 13/46 Verifying : subversion-1.7.14-14.el7.x86_64 14/46 Verifying : gcc-4.8.5-36.el7.x86_64 15/46 Verifying : golang-1.11.5-1.el7.x86_64 16/46 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/46 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/46 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/46 Verifying : gdb-7.6.1-114.el7.x86_64 20/46 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/46 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/46 Verifying : glibc-devel-2.17-260.el7_6.3.x86_64 23/46 Verifying : libmodman-2.0.1-8.el7.x86_64 24/46 Verifying : mock-1.4.13-1.el7.noarch 25/46 Verifying : bzip2-1.0.6-13.el7.x86_64 26/46 Verifying : python-backports-1.0-8.el7.x86_64 27/46 Verifying : apr-util-1.5.2-6.el7.x86_64 28/46 Verifying : patch-2.7.1-10.el7_5.x86_64 29/46 Verifying : libmpc-1.0.1-3.el7.x86_64 30/46 Verifying : python2-distro-1.2.0-1.el7.noarch 31/46 Verifying : usermode-1.111-5.el7.x86_64 32/46 Verifying : python-six-1.9.0-2.el7.noarch 33/46 Verifying : libproxy-0.4.11-11.el7.x86_64 34/46 Verifying : glibc-headers-2.17-260.el7_6.3.x86_64 35/46 Verifying : neon-0.30.0-3.el7.x86_64 36/46 Verifying : python-requests-2.6.0-1.el7_1.noarch 37/46 Verifying : pigz-2.3.4-1.el7.x86_64 38/46 Verifying : zip-3.0-11.el7.x86_64 39/46 Verifying : 
python-ipaddress-1.0.16-2.el7.noarch 40/46 Verifying : dwz-0.11-3.el7.x86_64 41/46 Verifying : unzip-6.0-19.el7.x86_64 42/46 Verifying : perl-srpm-macros-1-8.el7.noarch 43/46 Verifying : mock-core-configs-29.4-1.el7.noarch 44/46 Verifying : pakchois-0.4-10.el7.x86_64 45/46 Verifying : elfutils-0.172-2.el7.x86_64 46/46 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.13-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7 distribution-gpg-keys.noarch 0:1.28-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.3 glibc-headers.x86_64 0:2.17-260.el7_6.3 gnutls.x86_64 0:3.3.29-8.el7 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.5.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:29.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-1.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1793 0 --:--:-- --:--:-- --:--:-- 1795 100 8513k 100 8513k 0 0 14.1M 0 --:--:-- --:--:-- --:--:-- 14.1M
Installing gometalinter.
Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2320 0 --:--:-- --:--:-- --:--:-- 2330 36 38.3M 36 13.8M 0 0 23.6M 0 0:00:01 --:--:-- 0:00:01 23.6M100 38.3M 100 38.3M 0 0 47.2M 0 --:--:-- --:--:-- --:--:-- 107M
Installing etcd.
Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 555 0 --:--:-- --:--:-- --:--:-- 556 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1716 0 --:--:-- --:--:-- --:--:-- 605k 96 10.7M 96 10.3M 0 0 7870k 0 0:00:01 0:00:01 --:--:-- 7870k100 10.7M 100 10.7M 0 0 7033k 0 0:00:01 0:00:01 --:--:-- 1781k
~/nightlyrpmLahBEJ/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmLahBEJ/glusterd2-v6.0-dev.144.git829f22a-vendor.tar.xz
Created dist archive /root/nightlyrpmLahBEJ/glusterd2-v6.0-dev.144.git829f22a-vendor.tar.xz
~ ~/nightlyrpmLahBEJ ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmLahBEJ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.144.git829f22a.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.144.git829f22a.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.144.git829f22a.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.144.git829f22a.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.144.git829f22a.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmLahBEJ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.144.git829f22a.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 1869b4c009c9416bac520b419e7646c1 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.KVcILs:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4777904592217625872.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 26e5686f
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 111 | n47.pufty | 172.19.3.111 | pufty | 3239 | Deployed | 26e5686f | None | None | 7 | x86_64 | 1 | 2460 | None |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Wed Feb 27 03:59:37 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 27 Feb 2019 03:59:37 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #89
In-Reply-To: <383625342.697.1551100598283.JavaMail.jenkins@jenkins.ci.centos.org>
References: <383625342.697.1551100598283.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1557124190.900.1551239977763.JavaMail.jenkins@jenkins.ci.centos.org>

See 

From ci at centos.org Thu Feb 28 18:07:18 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 28 Feb 2019 18:07:18 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #286
In-Reply-To:
<1628859003.894.1551237847321.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1628859003.894.1551237847321.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <829000095.1105.1551377238162.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 36.22 KB...]
================================================================================
Install  3 Packages (+43 Dependent packages)

Total download size: 141 M
Installed size: 413 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/distribution-gpg-keys-1.28-1.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for distribution-gpg-keys-1.28-1.el7.noarch.rpm is not installed
--------------------------------------------------------------------------------
Total                                              48 MB/s | 141 MB  00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                  1/46
  Installing : mpfr-3.1.1-4.el7.x86_64                                     2/46
  Installing : libmpc-1.0.1-3.el7.x86_64                                   3/46
  Installing : apr-util-1.5.2-6.el7.x86_64                                 4/46
  Installing : python-ipaddress-1.0.16-2.el7.noarch                        5/46
  Installing : python-six-1.9.0-2.el7.noarch                               6/46
  Installing : cpp-4.8.5-36.el7.x86_64                                     7/46
  Installing : elfutils-0.172-2.el7.x86_64                                 8/46
  Installing : pakchois-0.4-10.el7.x86_64                                  9/46
  Installing : perl-srpm-macros-1-8.el7.noarch                            10/46
  Installing : unzip-6.0-19.el7.x86_64                                    11/46
  Installing : dwz-0.11-3.el7.x86_64                                      12/46
  Installing : zip-3.0-11.el7.x86_64                                      13/46
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch               14/46
  Installing : pigz-2.3.4-1.el7.x86_64                                    15/46
  Installing : usermode-1.111-5.el7.x86_64                                16/46
  Installing : python2-distro-1.2.0-1.el7.noarch                          17/46
  Installing : patch-2.7.1-10.el7_5.x86_64                                18/46
  Installing : python-backports-1.0-8.el7.x86_64                          19/46
  Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   20/46
  Installing : python-urllib3-1.10.2-5.el7.noarch                         21/46
  Installing : python-requests-2.6.0-1.el7_1.noarch                       22/46
  Installing : bzip2-1.0.6-13.el7.x86_64                                  23/46
  Installing : libmodman-2.0.1-8.el7.x86_64                               24/46
  Installing : libproxy-0.4.11-11.el7.x86_64                              25/46
  Installing : gdb-7.6.1-114.el7.x86_64                                   26/46
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                        27/46
  Installing : golang-src-1.11.5-1.el7.noarch                             28/46
  Installing : python2-pyroute2-0.4.13-1.el7.noarch                       29/46
  Installing : nettle-2.7.1-8.el7.x86_64                                  30/46
  Installing : mercurial-2.6.2-8.el7_4.x86_64                             31/46
  Installing : distribution-gpg-keys-1.28-1.el7.noarch                    32/46
  Installing : mock-core-configs-29.4-1.el7.noarch                        33/46
  Installing : kernel-headers-3.10.0-957.5.1.el7.x86_64                   34/46
  Installing : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Installing : glibc-devel-2.17-260.el7_6.3.x86_64                        36/46
  Installing : gcc-4.8.5-36.el7.x86_64                                    37/46
  Installing : trousers-0.3.14-2.el7.x86_64                               38/46
  Installing : gnutls-3.3.29-8.el7.x86_64                                 39/46
  Installing : neon-0.30.0-3.el7.x86_64                                   40/46
  Installing : subversion-libs-1.7.14-14.el7.x86_64                       41/46
  Installing : subversion-1.7.14-14.el7.x86_64                            42/46
  Installing : golang-1.11.5-1.el7.x86_64                                 43/46
  Installing : golang-bin-1.11.5-1.el7.x86_64                             44/46
  Installing : mock-1.4.13-1.el7.noarch                                   45/46
  Installing : rpm-build-4.11.3-35.el7.x86_64                             46/46
  Verifying  : trousers-0.3.14-2.el7.x86_64                                1/46
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                        2/46
  Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch    3/46
  Verifying  : kernel-headers-3.10.0-957.5.1.el7.x86_64                    4/46
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                              5/46
  Verifying  : distribution-gpg-keys-1.28-1.el7.noarch                     6/46
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                              7/46
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                     8/46
  Verifying  : nettle-2.7.1-8.el7.x86_64                                   9/46
  Verifying  : gnutls-3.3.29-8.el7.x86_64                                 10/46
  Verifying  : cpp-4.8.5-36.el7.x86_64                                    11/46
  Verifying  : python2-pyroute2-0.4.13-1.el7.noarch                       12/46
  Verifying  : golang-src-1.11.5-1.el7.noarch                             13/46
  Verifying  : subversion-1.7.14-14.el7.x86_64                            14/46
  Verifying  : gcc-4.8.5-36.el7.x86_64                                    15/46
  Verifying  : golang-1.11.5-1.el7.x86_64                                 16/46
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                        17/46
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                                 18/46
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                             19/46
  Verifying  : gdb-7.6.1-114.el7.x86_64                                   20/46
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch               21/46
  Verifying  : python-urllib3-1.10.2-5.el7.noarch                         22/46
  Verifying  : glibc-devel-2.17-260.el7_6.3.x86_64                        23/46
  Verifying  : libmodman-2.0.1-8.el7.x86_64                               24/46
  Verifying  : mock-1.4.13-1.el7.noarch                                   25/46
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                  26/46
  Verifying  : python-backports-1.0-8.el7.x86_64                          27/46
  Verifying  : apr-util-1.5.2-6.el7.x86_64                                28/46
  Verifying  : patch-2.7.1-10.el7_5.x86_64                                29/46
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                  30/46
  Verifying  : python2-distro-1.2.0-1.el7.noarch                          31/46
  Verifying  : usermode-1.111-5.el7.x86_64                                32/46
  Verifying  : python-six-1.9.0-2.el7.noarch                              33/46
  Verifying  : libproxy-0.4.11-11.el7.x86_64                              34/46
  Verifying  : glibc-headers-2.17-260.el7_6.3.x86_64                      35/46
  Verifying  : neon-0.30.0-3.el7.x86_64                                   36/46
  Verifying  : python-requests-2.6.0-1.el7_1.noarch                       37/46
  Verifying  : pigz-2.3.4-1.el7.x86_64                                    38/46
  Verifying  : zip-3.0-11.el7.x86_64                                      39/46
  Verifying  : python-ipaddress-1.0.16-2.el7.noarch                       40/46
  Verifying  : dwz-0.11-3.el7.x86_64                                      41/46
  Verifying  : unzip-6.0-19.el7.x86_64                                    42/46
  Verifying  : perl-srpm-macros-1-8.el7.noarch                            43/46
  Verifying  : mock-core-configs-29.4-1.el7.noarch                        44/46
  Verifying  : pakchois-0.4-10.el7.x86_64                                 45/46
  Verifying  : elfutils-0.172-2.el7.x86_64                                46/46

Installed:
  golang.x86_64 0:1.11.5-1.el7          mock.noarch 0:1.4.13-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1                 apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7                  cpp.x86_64 0:4.8.5-36.el7
  distribution-gpg-keys.noarch 0:1.28-1.el7    dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7                gcc.x86_64 0:4.8.5-36.el7
  gdb.x86_64 0:7.6.1-114.el7                   glibc-devel.x86_64 0:2.17-260.el7_6.3
  glibc-headers.x86_64 0:2.17-260.el7_6.3      gnutls.x86_64 0:3.3.29-8.el7
  golang-bin.x86_64 0:1.11.5-1.el7             golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.5.1.el7   libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7                  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4             mock-core-configs.noarch 0:29.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7                    neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7                  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5                perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7            pigz.x86_64 0:2.3.4-1.el7
  python-backports.x86_64 0:1.0-8.el7          python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7       python-requests.noarch 0:2.6.0-1.el7_1
  python-six.noarch 0:1.9.0-2.el7              python-urllib3.noarch 0:1.10.2-5.el7
  python2-distro.noarch 0:1.2.0-1.el7          python2-pyroute2.noarch 0:0.4.13-1.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos   subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7       trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7                    usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep. Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   605    0   605    0     0   2676      0 --:--:-- --:--:-- --:--:--  2688
100 8513k  100 8513k    0     0  15.3M      0 --:--:-- --:--:-- --:--:-- 15.3M
Installing gometalinter. Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   627    0   627    0     0   2329      0 --:--:-- --:--:-- --:--:--  2330
100 38.3M  100 38.3M    0     0  48.8M      0 --:--:-- --:--:-- --:--:-- 48.8M
Installing etcd. Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   153    0   153    0     0    957      0 --:--:-- --:--:-- --:--:--   962
  0     0    0   620    0     0   1685      0 --:--:-- --:--:-- --:--:--  1685
  0 10.7M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 10.7M  100 10.7M    0     0  17.5M      0 --:--:-- --:--:-- --:--:-- 73.5M
~/nightlyrpmQK9Kw1/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmQK9Kw1/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz
Created dist archive /root/nightlyrpmQK9Kw1/glusterd2-v6.0-dev.145.git994aaa0-vendor.tar.xz
~
~/nightlyrpmQK9Kw1 ~
INFO: mock.py version 1.4.13 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmQK9Kw1/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.13
INFO: Mock Version: 1.4.13
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmQK9Kw1/rpmbuild/SRPMS/glusterd2-5.0-0.dev.145.git994aaa0.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 31 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M ba331f4de03e4a12b66ed07e1e546666 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.hoYTYZ:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6574555679765325382.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done db62844d
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 232     | n41.dusty | 172.19.2.105 | dusty   | 3248       | Deployed      | db62844d | None   | None | 7              | x86_64       | 1         | 2400         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
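The build died at the rpmbuild step inside the mock chroot ("ERROR: Command failed" above). When triaging archived console logs like this one, the offending command can be pulled out mechanically rather than by scrolling; a minimal sketch, assuming the log has been saved to a local file (the path and sample contents below are illustrative, not taken from this job):

```shell
# Hypothetical saved copy of the console log; the "ERROR: Command failed"
# line format matches what mock emits when a chroot command fails.
log=./console.log
printf '%s\n' \
  'Finish: clean chroot' \
  'ERROR: Command failed: # /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec' \
  > "$log"

# Print just the command that failed, stripping the ERROR prefix.
grep -m1 '^ERROR: Command failed:' "$log" | sed 's/^ERROR: Command failed: # //'
```

Running the extracted command (or `mock -r epel-7-x86_64 --rebuild` on the same SRPM) locally is usually the quickest way to reproduce the failure outside Jenkins.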
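The post-build task releases the ephemeral CI node via the `cico` loop shown above; note that in this run `SSID_FILE` expanded empty (`+ SSID_FILE=`), so `cat` ran with no file argument. A hedged sketch of a more defensive version of that loop, with `cico` stubbed out as a shell function so the snippet is self-contained (the real job calls the installed `cico` client):

```shell
# Stub standing in for the real Duffy/cico client so this sketch runs
# anywhere; it simply echoes the SSID it was asked to release.
cico() { echo "released: $4"; }

# Fall back to the current directory when WORKSPACE is unset, and only
# loop when the SSID file actually exists and is readable.
SSID_FILE=${SSID_FILE:-${WORKSPACE:-.}/cico-ssid}
printf 'db62844d\n' > "$SSID_FILE"   # sample SSID for the sketch

if [ -r "$SSID_FILE" ]; then
  while IFS= read -r ssid; do
    [ -n "$ssid" ] && cico -q node done "$ssid"
  done < "$SSID_FILE"
fi
```

Reading the file line by line instead of word-splitting `$(cat ...)` avoids both the empty-variable hang and surprises from stray whitespace in the SSID file.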