From ci at centos.org Wed Jan 2 00:54:55 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 2 Jan 2019 00:54:55 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #36
Message-ID: <1118199900.4911.1546390495568.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 575.46 KB...]

TASK [GCS Pre | Cluster ID | Generate a UUID] **********************************
Wednesday 02 January 2019  00:44:25 +0000 (0:00:00.371)       0:09:19.589 *****
changed: [kube1]

TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] ***********************
Wednesday 02 January 2019  00:44:25 +0000 (0:00:00.384)       0:09:19.973 *****
ok: [kube1]

TASK [GCS Pre | Manifests directory | Create a temporary directory] ************
Wednesday 02 January 2019  00:44:25 +0000 (0:00:00.151)       0:09:20.125 *****
changed: [kube1]

TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ******************
Wednesday 02 January 2019  00:44:26 +0000 (0:00:00.556)       0:09:20.682 *****
ok: [kube1]

TASK [GCS Pre | Manifests | Sync GCS manifests] ********************************
Wednesday 02 January 2019  00:44:26 +0000 (0:00:00.149)       0:09:20.831 *****
changed: [kube1] => (item=gcs-namespace.yml)
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-csi-node-info-crd.yml)
changed: [kube1] => (item=gcs-csi-driver-registry-crd.yml)
changed: [kube1] => (item=monitoring-namespace.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-grafana.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Wednesday 02 January 2019  00:44:35 +0000 (0:00:08.919)       0:09:29.750 *****
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Wednesday 02 January 2019  00:44:35 +0000 (0:00:00.094)       0:09:29.845 *****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Wednesday 02 January 2019  00:44:35 +0000 (0:00:00.137)       0:09:29.982 *****
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Wednesday 02 January 2019  00:44:36 +0000 (0:00:00.724)       0:09:30.707 *****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Wednesday 02 January 2019  00:44:36 +0000 (0:00:00.142)       0:09:30.850 *****
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Wednesday 02 January 2019  00:44:37 +0000 (0:00:00.732)       0:09:31.582 *****
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Wednesday 02 January 2019  00:44:37 +0000 (0:00:00.132)       0:09:31.715 *****
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Wednesday 02 January 2019  00:44:38 +0000 (0:00:00.711)       0:09:32.426 *****
ok: [kube1]

TASK [GCS | Namespace | Create Monitoring namespace] ***************************
Wednesday 02 January 2019  00:44:38 +0000 (0:00:00.669)       0:09:33.096 *****
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Wednesday 02 January 2019  00:44:39 +0000 (0:00:00.654)       0:09:33.750 *****
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Wednesday 02 January 2019  00:44:40 +0000 (0:00:00.714)       0:09:34.465 *****
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Wednesday 02 January 2019  00:44:51 +0000 (0:00:11.453)       0:09:45.919 *****
FAILED - RETRYING: GCS | ETCD Cluster | Deploy etcd-cluster (5 retries left).
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Wednesday 02 January 2019  00:44:57 +0000 (0:00:06.424)       0:09:52.344 *****
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Wednesday 02 January 2019  00:44:58 +0000 (0:00:00.543)       0:09:52.888 *****
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Wednesday 02 January 2019  00:44:58 +0000 (0:00:00.147)       0:09:53.035 *****
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Wednesday 02 January 2019  00:45:57 +0000 (0:00:58.368)       0:10:51.404 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Wednesday 02 January 2019  00:45:57 +0000 (0:00:00.842)       0:10:52.247 *****
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Wednesday 02 January 2019  00:45:57 +0000 (0:00:00.124)       0:10:52.371 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Wednesday 02 January 2019  00:45:58 +0000 (0:00:00.153)       0:10:52.524 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Wednesday 02 January 2019  00:45:58 +0000 (0:00:00.745)       0:10:53.270 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Wednesday 02 January 2019  00:45:59 +0000 (0:00:00.154)       0:10:53.424 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Wednesday 02 January 2019  00:46:00 +0000 (0:00:01.054)       0:10:54.479 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Wednesday 02 January 2019  00:46:00 +0000 (0:00:00.136)       0:10:54.616 *****
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Wednesday 02 January 2019  00:46:00 +0000 (0:00:00.733)       0:10:55.349 *****
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Wednesday 02 January 2019  00:46:01 +0000 (0:00:00.550)       0:10:55.899 *****
ok: [kube1]
TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Wednesday 02 January 2019  00:46:01 +0000 (0:00:00.295)       0:10:56.195 *****
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
[...identical retry messages omitted: 49 down to 1 retries left...]
fatal: [kube1]: FAILED! => {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.52.219:24007/v1/peers"}
        to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=379 changed=113 unreachable=0 failed=1
kube2 : ok=293 changed=91 unreachable=0 failed=0
kube3 : ok=254 changed=71 unreachable=0 failed=0

Wednesday 02 January 2019  00:54:54 +0000 (0:08:53.069)       0:19:49.265 *****
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 533.07s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 58.37s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 38.59s
kubernetes/master : Master | wait for the apiserver to be running ------ 29.22s
Install packages ------------------------------------------------------- 23.65s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 21.35s
Extend root VG --------------------------------------------------------- 16.90s
Wait for host to be available ------------------------------------------ 16.34s
etcd : Gen_certs | Write etcd master certs ----------------------------- 13.45s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 11.45s
download : file_download | Download item ------------------------------- 11.30s
etcd : reload etcd ----------------------------------------------------- 10.76s
docker : Docker | pause while Docker restarts -------------------------- 10.15s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.61s
GCS Pre | Manifests | Sync GCS manifests -------------------------------- 8.92s
gather facts from all instances ----------------------------------------- 8.55s
kubernetes/master : Master | wait for kube-controller-manager ----------- 8.38s
kubernetes/node : Ensure nodePort range is reserved --------------------- 8.11s
etcd : wait for etcd up ------------------------------------------------- 8.06s
kubernetes/master : Master | wait for kube-scheduler -------------------- 6.59s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
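[The failure above is the readiness gate, not the deployment itself: the glusterd2 pods were all scheduled, but GET http://10.233.52.219:24007/v1/peers never returned HTTP 200 within 50 attempts. Judging from the task name and the uri-style error message ("Status code was -1 and not [200]"), the check is an Ansible uri task looping with until/retries. A minimal sketch of what such a task plausibly looks like follows; only the endpoint path, the expected status, and the 50-attempt budget are visible in the log, while the delay and variable names are assumptions:

    - name: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready
      uri:
        # gd2_client_endpoint is set two tasks earlier from the
        # glusterd2-client service; the port 24007 comes from the log URL
        url: "http://{{ gd2_client_endpoint }}/v1/peers"
        status_code: 200
      register: gd2_peers
      until: gd2_peers.status == 200
      retries: 50
      delay: 10   # assumed; the log only shows a total wait of ~9 minutes

Once the underlying problem is fixed, a run like this can be resumed from the recorded retry file, as the output itself suggests: ansible-playbook ... --limit @/root/gcs/deploy/vagrant-playbook.retry.]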
From ci at centos.org Thu Jan 3 02:14:45 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 3 Jan 2019 02:14:45 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #37
In-Reply-To: <1118199900.4911.1546390495568.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1118199900.4911.1546390495568.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <867716240.5008.1546481685400.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 582.74 KB...]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 03 January 2019  00:56:43 +0000 (0:00:01.598)       0:19:33.288 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Thursday 03 January 2019  00:56:43 +0000 (0:00:00.304)       0:19:33.593 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Thursday 03 January 2019  00:56:45 +0000 (0:00:01.512)       0:19:35.105 ******
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Thursday 03 January 2019  00:56:47 +0000 (0:00:01.514)       0:19:36.620 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Thursday 03 January 2019  00:56:47 +0000 (0:00:00.329)       0:19:36.950 ******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices] *****************************************
Thursday 03 January 2019  00:57:14 +0000 (0:00:27.116)       0:20:04.066 ******
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1
included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Thursday 03 January 2019  00:57:14 +0000 (0:00:00.313)       0:20:04.380 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] *****************
Thursday 03 January 2019  00:57:15 +0000 (0:00:00.370)       0:20:04.750 ******
ok: [kube1] => (item=/dev/vdc)
ok: [kube1] => (item=/dev/vdd)
ok: [kube1] => (item=/dev/vde)

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Thursday 03 January 2019  00:57:20 +0000 (0:00:05.676)       0:20:10.427 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] *****************
Thursday 03 January 2019  00:57:21 +0000 (0:00:00.408)       0:20:10.835 ******
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...identical retry messages omitted: 49 down to 1 retries left...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.1.27:24007/v1/devices/79bbb85c-5196-4f1e-a66e-63b57135e79a"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...identical retry messages omitted: 49 down to 1 retries left...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.1.27:24007/v1/devices/79bbb85c-5196-4f1e-a66e-63b57135e79a"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
[...identical retry messages omitted: 49 down to 1 retries left...]
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.1.27:24007/v1/devices/79bbb85c-5196-4f1e-a66e-63b57135e79a"}
        to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=389 changed=113 unreachable=0 failed=1
kube2 : ok=293 changed=91 unreachable=0 failed=0
kube3 : ok=255 changed=71 unreachable=0 failed=0

Thursday 03 January 2019  02:14:44 +0000 (1:17:23.764)       1:37:34.600 ******
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 4643.76s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 96.50s
kubernetes/master : Master | wait for the apiserver to be running ------ 43.89s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 40.26s
etcd : Gen_certs | Write etcd master certs ----------------------------- 32.87s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 31.77s
Install packages ------------------------------------------------------- 30.01s
kubernetes/master : Master | wait for kube-controller-manager ---------- 28.86s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 27.12s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 26.48s
Wait for host to be available ------------------------------------------ 20.90s
download : file_download | Download item ------------------------------- 14.61s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.02s
gather facts from all instances ---------------------------------------- 12.73s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.44s
etcd : reload etcd ----------------------------------------------------- 11.85s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.38s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 11.16s
docker : Docker | pause while Docker restarts -------------------------- 10.38s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.55s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
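[Build #37 gets further than #36: the peer wait succeeds after two retries and devices are added to kube1 without trouble, but the device-add step for kube3 then burns 50 retries per disk against http://10.233.1.27:24007/v1/devices/79bbb85c-... — the peer is registered, but its REST endpoint is unreachable ("Request failed" on the first disk, then "Connection failure: timed out"). The task is evidently a looped POST expecting 201 Created. A sketch under stated assumptions — the request body key and the peer-id variable name are guesses, not taken from the log:

    - name: GCS | GD2 Cluster | Add devices | Add devices for kube3
      uri:
        url: "http://{{ gd2_client_endpoint }}/v1/devices/{{ peer_id }}"
        method: POST
        body_format: json
        body:
          device: "{{ item }}"   # body key assumed; not visible in the log
        status_code: 201
      register: add_device
      until: add_device.status == 201
      retries: 50
      delay: 10   # assumed
      with_items:
        - /dev/vdc
        - /dev/vdd
        - /dev/vde

Because all three items loop against the same dead peer ID, the per-item retry budget multiplies into 150 attempts, which is why this single task accounts for 4643 of the run's roughly 5850 seconds.]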
From ci at centos.org Fri Jan 4 01:57:45 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 4 Jan 2019 01:57:45 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #38
In-Reply-To: <867716240.5008.1546481685400.JavaMail.jenkins@jenkins.ci.centos.org>
References: <867716240.5008.1546481685400.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <936883221.5076.1546567065490.JavaMail.jenkins@jenkins.ci.centos.org>

See

From ci at centos.org Sat Jan 5 01:16:39 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 5 Jan 2019 01:16:39 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #39
Message-ID: <453576193.5169.1546650999796.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 576.53 KB...]
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-csi-node-info-crd.yml)
changed: [kube1] => (item=gcs-csi-driver-registry-crd.yml)
changed: [kube1] => (item=monitoring-namespace.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Saturday 05 January 2019  01:04:39 +0000 (0:00:26.476)       0:16:21.586 ******
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Saturday 05 January 2019  01:04:39 +0000 (0:00:00.269)       0:16:21.855 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Saturday 05 January 2019  01:04:40 +0000 (0:00:00.556)       0:16:22.412 ******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Saturday 05 January 2019  01:04:42 +0000 (0:00:02.122)       0:16:24.535 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Saturday 05 January 2019  01:04:42 +0000 (0:00:00.435)       0:16:24.971 ******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Saturday 05 January 2019  01:04:44 +0000 (0:00:01.944)       0:16:26.915 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Saturday 05 January 2019  01:04:45 +0000 (0:00:00.408)       0:16:27.324 ******
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Saturday 05 January 2019  01:04:46 +0000 (0:00:01.946)       0:16:29.271 ******
ok: [kube1]

TASK [GCS | Namespace | Create Monitoring namespace] ***************************
Saturday 05 January 2019  01:04:48 +0000 (0:00:01.699)       0:16:30.971 ******
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Saturday 05 January 2019  01:04:50 +0000 (0:00:01.602)       0:16:32.573 ******
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Saturday 05 January 2019  01:04:51 +0000 (0:00:01.629)       0:16:34.203 ******
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Saturday 05 January 2019  01:05:04 +0000 (0:00:12.259)       0:16:46.462 ******
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Saturday 05 January 2019  01:05:05 +0000 (0:00:01.806)       0:16:48.269 ******
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Saturday 05 January 2019  01:05:07 +0000 (0:00:01.364)       0:16:49.634 ******
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Saturday 05 January 2019  01:05:08 +0000 (0:00:01.371)       0:16:51.005 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Saturday 05 January 2019  01:05:10 +0000 (0:00:01.768)       0:16:52.773 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Saturday 05 January 2019  01:05:12 +0000 (0:00:01.811)       0:16:54.584 ******
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Saturday 05 January 2019  01:05:13 +0000 (0:00:01.305)       0:16:55.890 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Saturday 05 January 2019  01:05:13 +0000 (0:00:00.318)       0:16:56.209 ******
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Saturday 05 January 2019  01:06:49 +0000 (0:01:35.366)       0:18:31.575 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Saturday 05 January 2019  01:06:50 +0000 (0:00:01.530)       0:18:33.106 ******
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Saturday 05 January 2019  01:06:51 +0000 (0:00:00.187)       0:18:33.294 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Saturday 05 January 2019  01:06:51 +0000 (0:00:00.362)       0:18:33.656 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Saturday 05 January 2019  01:06:53 +0000 (0:00:01.632)       0:18:35.289 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Saturday 05 January 2019  01:06:53 +0000 (0:00:00.330)       0:18:35.620 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Saturday 05 January 2019  01:06:54 +0000 (0:00:01.621)       0:18:37.242 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Saturday 05 January 2019  01:06:55 +0000 (0:00:00.332)       0:18:37.575 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Saturday 05 January 2019  01:06:58 +0000 (0:00:02.871)       0:18:40.446 ******
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Saturday 05 January 2019  01:06:59 +0000 (0:00:01.487)       0:18:41.934 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Saturday 05 January 2019  01:06:59 +0000 (0:00:00.295)       0:18:42.229 ******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
[...identical retry messages omitted: 49 down to 1 retries left...]
fatal: [kube1]: FAILED! => {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.36.37:24007/v1/peers"}
        to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=383 changed=113 unreachable=0 failed=1
kube2 : ok=293 changed=91 unreachable=0 failed=0
kube3 : ok=254 changed=71 unreachable=0 failed=0

Saturday 05 January 2019  01:16:36 +0000 (0:09:36.509)       0:28:18.739 ******
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 576.51s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 95.37s
kubernetes/master : Master | wait for the apiserver to be running ------ 43.93s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 33.58s
etcd : Gen_certs | Write etcd master certs ----------------------------- 32.46s
Install packages ------------------------------------------------------- 31.49s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 26.48s
Wait for host to be available ------------------------------------------ 20.95s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.41s
kubernetes/master : Master | wait for kube-controller-manager ---------- 14.73s
gather facts from all instances ---------------------------------------- 14.19s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.73s
download : file_download | Download item ------------------------------- 12.46s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.26s
etcd : reload etcd ----------------------------------------------------- 11.80s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 11.11s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.61s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 10.53s
docker : Docker | pause while Docker restarts -------------------------- 10.39s
etcd : wait for etcd up ------------------------------------------------- 9.82s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
From ci at centos.org Sun Jan 6 01:25:50 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 6 Jan 2019 01:25:50 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #40
In-Reply-To: <453576193.5169.1546650999796.JavaMail.jenkins@jenkins.ci.centos.org>
References: <453576193.5169.1546650999796.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1018628469.5212.1546737950935.JavaMail.jenkins@jenkins.ci.centos.org>

See

------------------------------------------
[...truncated 576.58 KB...]

changed: [kube1] => (item=gcs-namespace.yml)
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-csi-node-info-crd.yml)
changed: [kube1] => (item=gcs-csi-driver-registry-crd.yml)
changed: [kube1] => (item=monitoring-namespace.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Sunday 06 January 2019  01:15:18 +0000 (0:00:10.310)       0:09:20.456 ********
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Sunday 06 January 2019  01:15:18 +0000 (0:00:00.089)       0:09:20.545 ********
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Sunday 06 January 2019  01:15:18 +0000 (0:00:00.137)       0:09:20.683 ********
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Sunday 06 January 2019  01:15:19 +0000 (0:00:00.739)       0:09:21.423 ********
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Sunday 06 January 2019  01:15:19 +0000 (0:00:00.152)       0:09:21.576 ********
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Sunday 06 January 2019  01:15:20 +0000 (0:00:00.709)       0:09:22.285 ********
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Sunday 06 January 2019  01:15:20 +0000 (0:00:00.151)       0:09:22.437 ********
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Sunday 06 January 2019  01:15:21 +0000 (0:00:00.748)       0:09:23.185 ********
ok: [kube1]

TASK [GCS | Namespace | Create Monitoring namespace] ***************************
Sunday 06 January 2019  01:15:22 +0000 (0:00:00.648)       0:09:23.834 ********
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Sunday 06 January 2019  01:15:22 +0000 (0:00:00.648)       0:09:24.482 ********
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Sunday 06 January 2019  01:15:23 +0000 (0:00:00.702)       0:09:25.184 ********
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Sunday 06 January 2019  01:15:34 +0000 (0:00:10.934)       0:09:36.119 ********
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Sunday 06 January 2019  01:15:35 +0000 (0:00:00.689)       0:09:36.808 ********
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Sunday 06 January 2019  01:15:35 +0000 (0:00:00.531)       0:09:37.340 ********
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Sunday 06 January 2019  01:15:36 +0000 (0:00:00.501)       0:09:37.841 ********
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Sunday 06 January 2019  01:15:36 +0000 (0:00:00.708)       0:09:38.550 ********
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Sunday 06 January 2019  01:15:37 +0000 (0:00:00.915)       0:09:39.465 ********
FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left).
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Sunday 06 January 2019  01:15:43 +0000 (0:00:05.955)       0:09:45.420 ********
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Sunday 06 January 2019  01:15:43 +0000 (0:00:00.160)       0:09:45.581 ********
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Sunday 06 January 2019 01:16:50 +0000 (0:01:06.179) 0:10:51.761 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Sunday 06 January 2019 01:16:50 +0000 (0:00:00.771) 0:10:52.532 ******** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 06 January 2019 01:16:50 +0000 (0:00:00.090) 0:10:52.623 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Sunday 06 January 2019 01:16:51 +0000 (0:00:00.146) 0:10:52.770 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 06 January 2019 01:16:51 +0000 (0:00:00.762) 0:10:53.533 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Sunday 06 January 2019 01:16:51 +0000 (0:00:00.148) 0:10:53.682 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 06 January 2019 01:16:52 +0000 (0:00:00.823) 0:10:54.505 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Sunday 06 January 2019 01:16:52 +0000 (0:00:00.159) 0:10:54.665 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Sunday 06 January 2019 01:16:53 +0000 (0:00:00.773) 0:10:55.439 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Sunday 06 January 2019 01:16:54 +0000 (0:00:00.592) 0:10:56.031 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Sunday 06 January 2019 01:16:54 +0000 (0:00:00.149) 0:10:56.180 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.19.32:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=383 changed=113 unreachable=0 failed=1
kube2 : ok=293 changed=91 unreachable=0 failed=0
kube3 : ok=254 changed=71 unreachable=0 failed=0

Sunday 06 January 2019 01:25:50 +0000 (0:08:56.222) 0:19:52.403 ********
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 536.22s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 66.18s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 37.77s
kubernetes/master : Master | wait for the apiserver to be running ------ 28.04s
Install packages ------------------------------------------------------- 24.06s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 21.42s
Wait for host to be available ------------------------------------------ 16.50s
Extend root VG --------------------------------------------------------- 15.12s
etcd : Gen_certs | Write etcd master certs ----------------------------- 14.00s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.93s
etcd : reload etcd ----------------------------------------------------- 10.84s
download : file_download | Download item ------------------------------- 10.60s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 10.56s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 10.31s
docker : Docker | pause while Docker restarts -------------------------- 10.18s
gather facts from all instances ----------------------------------------- 8.11s
etcd : wait for etcd up ------------------------------------------------- 7.93s
kubernetes/master : Master | wait for kube-controller-manager ----------- 7.54s
GCS | ETCD Cluster | Get etcd-client service ---------------------------- 5.96s
kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence --- 5.76s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
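The countdown above is an Ansible readiness gate: the play polls the glusterd2 REST endpoint (the /v1/peers URL in the fatal record) until it answers 200, and gives up after 50 attempts. A minimal sketch of such a gate, assuming the uri module and illustrative variable names; the actual task lives under /root/gcs/deploy/tasks/ and may differ in detail:

    - name: Wait for glusterd2-cluster to become ready
      uri:
        url: "{{ gd2_client_endpoint }}/v1/peers"   # endpoint as in the failure URL above
        status_code: 200
      register: gd2_result
      until: gd2_result.status == 200
      retries: 50   # matches the "(50 retries left)" countdown in the log
      delay: 10     # assumed; 50 attempts at roughly 10s apiece fits the 536s recorded for this task

When every attempt fails, Ansible prints the last response as the fatal result seen above; status -1 means the TCP connection itself never succeeded, i.e. nothing was answering on glusterd2's port 24007 at that address.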
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Mon Jan 7 01:05:38 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 7 Jan 2019 01:05:38 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #41
In-Reply-To: <1018628469.5212.1546737950935.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1018628469.5212.1546737950935.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <962714345.5262.1546823138921.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 576.51 KB...]
changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-csi-node-info-crd.yml) changed: [kube1] => (item=gcs-csi-driver-registry-crd.yml) changed: [kube1] => (item=monitoring-namespace.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Monday 07 January 2019 00:53:28 +0000 (0:00:26.173) 0:16:27.523 ******** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Monday 07 January 2019 00:53:28 +0000 (0:00:00.262) 0:16:27.785 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Monday 07 January 2019 00:53:29 +0000 (0:00:00.529) 0:16:28.314 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Monday 07 January 2019 00:53:31 +0000 (0:00:02.064) 0:16:30.379 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Monday 07 January 2019 00:53:31 +0000 (0:00:00.494) 0:16:30.874 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Monday 07 January 2019 00:53:33 +0000 (0:00:02.223) 0:16:33.097 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Monday 07 January 2019 00:53:34 +0000 (0:00:00.503) 0:16:33.601 ******** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Monday 07 January 2019 00:53:36 +0000 (0:00:02.207) 0:16:35.808 ******** ok: [kube1] TASK [GCS | Namespace | Create Monitoring namespace] *************************** Monday 07 January 2019 00:53:38 +0000 (0:00:01.747) 0:16:37.556 ******** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Monday 07 January 2019 00:53:39 +0000 (0:00:01.689) 0:16:39.245 ******** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Monday 07 January 2019 00:53:41 +0000
(0:00:01.884) 0:16:41.130 ******** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (49 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Monday 07 January 2019 00:54:05 +0000 (0:00:23.308) 0:17:04.439 ******** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Monday 07 January 2019 00:54:06 +0000 (0:00:01.739) 0:17:06.178 ******** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Monday 07 January 2019 00:54:08 +0000 (0:00:01.446) 0:17:07.624 ******** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Monday 07 January 2019 00:54:09 +0000 (0:00:01.537) 0:17:09.162 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Monday 07 January 2019 00:54:11 +0000 (0:00:01.668) 0:17:10.831 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Monday 07 January 2019 00:54:13 +0000 (0:00:01.765) 0:17:12.597 ******** changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Monday 07 January 2019 00:54:14 +0000 (0:00:01.297) 0:17:13.895 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Monday 07 January 2019 00:54:14 +0000 (0:00:00.344) 0:17:14.239 ******** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Monday 07 January 2019 00:55:50 +0000 (0:01:35.455) 0:18:49.695 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Monday 07 January 2019 00:55:52 +0000 (0:00:01.941) 0:18:51.637 ******** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 07 January 2019 00:55:52 +0000 (0:00:00.212) 0:18:51.849 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Monday 07 January 2019 00:55:52 +0000 (0:00:00.368) 0:18:52.217 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 07 January 2019 00:55:54 +0000 (0:00:01.667) 0:18:53.885 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Monday 07 January 2019 00:55:54 +0000 (0:00:00.298) 0:18:54.184 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 07 January 2019 00:55:56 +0000 (0:00:01.591) 0:18:55.775 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Monday 07 January 2019 00:55:56 +0000 (0:00:00.352) 0:18:56.128 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Monday 07 January 2019 00:55:58 +0000 (0:00:01.813) 0:18:57.941 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Monday 07 January 2019 00:56:00 +0000 (0:00:01.676) 0:18:59.617 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Monday 07 January 2019 00:56:00 +0000 (0:00:00.352) 0:18:59.970 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.26.163:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=383 changed=113 unreachable=0 failed=1
kube2 : ok=293 changed=91 unreachable=0 failed=0
kube3 : ok=254 changed=71 unreachable=0 failed=0

Monday 07 January 2019 01:05:38 +0000 (0:09:37.863) 0:28:37.833 ********
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.86s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 95.46s
kubernetes/master : Master | wait for the apiserver to be running ------ 43.71s
etcd : Gen_certs | Write etcd master certs ----------------------------- 32.62s
Wait for host to be available ------------------------------------------ 32.09s
Install packages ------------------------------------------------------- 31.36s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 29.75s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 26.17s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 23.31s
kubernetes/master : Master | wait for kube-controller-manager ---------- 19.16s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.18s
download : file_download | Download item ------------------------------- 13.48s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.02s
gather facts from all instances ---------------------------------------- 12.54s
etcd : reload etcd ----------------------------------------------------- 11.69s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 11.38s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 10.89s
docker : Docker | pause while Docker restarts -------------------------- 10.42s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.37s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests --- 9.35s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
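Build #41 dies at the same gate nine minutes later, and the summary shows the etcd wait (95s here, 66s in #40) passing first. That wait follows the same until/retries pattern against the etcd client service recorded in etcd_client_endpoint; a sketch, where the /health path and the delay are assumptions (etcd serves a health check on its client port, but the playbook's exact probe may differ):

    - name: Wait for etcd-cluster to become ready
      uri:
        url: "{{ etcd_client_endpoint }}/health"   # assumed path
        status_code: 200
      register: etcd_result
      until: etcd_result.status == 200
      retries: 50
      delay: 10   # assumed

The handful of failed attempts at this gate in both runs is just the etcd pods coming up one at a time; once the cluster answers, the play moves on to deploying GD2, which is where these two builds stall.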
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Tue Jan 8 01:02:28 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 8 Jan 2019 01:02:28 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #42
In-Reply-To: <962714345.5262.1546823138921.JavaMail.jenkins@jenkins.ci.centos.org>
References: <962714345.5262.1546823138921.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1721137108.5362.1546909348275.JavaMail.jenkins@jenkins.ci.centos.org>

See

From ci at centos.org Wed Jan 9 02:22:08 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 9 Jan 2019 02:22:08 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #43
Message-ID: <1247391815.5524.1547000529117.JavaMail.jenkins@jenkins.ci.centos.org>

See

Changes:

[ndevos] gluster-block/nightly: place repo metadata with the RPMs

------------------------------------------
[...truncated 463.98 KB...]
ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 09 January 2019 00:58:54 +0000 (0:00:01.697) 0:20:00.773 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 09 January 2019 00:58:54 +0000 (0:00:00.460) 0:20:01.234 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 09 January 2019 00:58:56 +0000 (0:00:01.798) 0:20:03.032 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 09 January 2019 00:58:56 +0000 (0:00:00.504) 0:20:03.537 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 09 January 2019 00:58:58 +0000 (0:00:01.650) 0:20:05.187 ***** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 09 January 2019 00:59:00 +0000 (0:00:01.615) 0:20:06.803 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 09 January 2019 00:59:00 +0000 (0:00:00.467) 0:20:07.270 ***** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Wednesday 09 January 2019 00:59:28 +0000 (0:00:27.453) 0:20:34.724 ***** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Wednesday 09 January 2019 00:59:28 +0000 (0:00:00.318) 0:20:35.043 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] ***************** Wednesday 09 January 2019 00:59:28 +0000 (0:00:00.509) 0:20:35.552 ***** FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left). failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.55.252:24007/v1/devices/485565bd-7a7f-4b9c-b658-e45809dd5a36"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left). failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.55.252:24007/v1/devices/485565bd-7a7f-4b9c-b658-e45809dd5a36"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left).
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.55.252:24007/v1/devices/485565bd-7a7f-4b9c-b658-e45809dd5a36"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=422 changed=115 unreachable=0 failed=1
kube2 : ok=319 changed=91 unreachable=0 failed=0
kube3 : ok=281 changed=77 unreachable=0 failed=0

Wednesday 09 January 2019 02:22:08 +0000 (1:22:39.700) 1:43:15.253 *****
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube1 -------------- 4959.70s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 96.49s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.10s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.78s
etcd : Gen_certs | Write etcd master certs ----------------------------- 32.41s
Wait for host to be available ------------------------------------------ 32.06s
Install packages ------------------------------------------------------- 31.05s
download : container_download | download images for kubeadm config images -- 30.72s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 27.45s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 23.01s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.45s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.88s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.82s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.32s
gather facts from all instances ---------------------------------------- 12.72s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.70s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.21s
etcd : reload etcd ----------------------------------------------------- 11.69s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.20s
download : file_download | Download item ------------------------------- 11.05s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
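Build #43 gets further: the glusterd2 cluster reports ready after two retries, but registering the storage devices then times out. Each device is registered with a POST to glusterd2's devices API, retried until it returns 201 Created; the 1:22:39 this task consumed is three devices (/dev/vdc, /dev/vdd, /dev/vde), each exhausting 50 attempts against a connection that times out after roughly 30 seconds. A sketch of such a loop, with illustrative variable names; the actual task is /root/gcs/deploy/tasks/add-devices-to-peer.yml and may differ:

    - name: Add devices for kube1
      uri:
        url: "{{ gd2_client_endpoint }}/v1/devices/{{ peer_id }}"   # peer UUID as in the failure URL
        method: POST
        body: '{"device": "{{ item }}"}'
        body_format: json
        status_code: 201   # the log expects 201 ("Status code was -1 and not [201]")
      register: add_result
      until: add_result.status == 201
      retries: 50
      delay: 10   # assumed
      with_items: "{{ node_disks }}"   # illustrative; /dev/vdc, /dev/vdd, /dev/vde in this run

Unlike the earlier failures, the error here is an explicit timeout rather than an immediate connection failure, suggesting glusterd2 was reachable but never finished handling the request -- though the log alone cannot distinguish a hung daemon from a black-holed connection.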
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Thu Jan 10 01:03:59 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 10 Jan 2019 01:03:59 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #44
In-Reply-To: <1247391815.5524.1547000529117.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1247391815.5524.1547000529117.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <434343416.5615.1547082239318.JavaMail.jenkins@jenkins.ci.centos.org>

See

From ci at centos.org Sat Jan 12 01:16:08 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 12 Jan 2019 01:16:08 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #46
Message-ID: <1471625432.5796.1547255768878.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 459.32 KB...]
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Saturday 12 January 2019 00:54:25 +0000 (0:00:00.296) 0:17:21.972 ****** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Saturday 12 January 2019 00:54:25 +0000 (0:00:00.414) 0:17:22.387 ****** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Saturday 12 January 2019 00:54:27 +0000 (0:00:02.061) 0:17:24.448 ****** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Saturday 12 January 2019 00:54:28 +0000 (0:00:00.444) 0:17:24.893 ****** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Saturday 12 January 2019 00:54:30 +0000 (0:00:02.029) 0:17:26.923 ****** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Saturday 12 January 2019 00:54:30 +0000 (0:00:00.434) 0:17:27.357 ****** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Saturday 12 January 2019 00:54:33 +0000 (0:00:02.158) 0:17:29.515 ****** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Saturday 12 January 2019 00:54:34 +0000 (0:00:01.515) 0:17:31.031 ****** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Saturday 12 January 2019 00:54:36 +0000 (0:00:01.663) 0:17:32.694 ****** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Saturday 12 January 2019 00:54:48 +0000 (0:00:12.206) 0:17:44.901 ****** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Saturday 12 January 2019 00:54:50 +0000 (0:00:01.769) 0:17:46.671 ****** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Saturday 12 January 2019 00:54:51 +0000 (0:00:01.380) 0:17:48.051 ****** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Saturday 12 January 2019 00:54:52 +0000 (0:00:01.359) 0:17:49.410 ****** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Saturday 12 January 2019 00:54:54 +0000 (0:00:01.817) 0:17:51.228 ****** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Saturday 12 January 2019 00:54:56 +0000 (0:00:01.852) 0:17:53.080 ****** changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Saturday 12 January 2019 00:54:57 +0000 (0:00:01.139) 0:17:54.220 ****** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Saturday 12 January 2019 00:54:58 +0000 (0:00:00.522) 0:17:54.742 ****** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Saturday 12 January 2019 00:56:22 +0000 (0:01:24.211) 0:19:18.954 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Saturday 12 January 2019 00:56:24 +0000 (0:00:01.809) 0:19:20.764 ****** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 12 January 2019 00:56:24 +0000 (0:00:00.192) 0:19:20.956 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Saturday 12 January 2019 00:56:24 +0000 (0:00:00.475) 0:19:21.432 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 12 January 2019 00:56:26 +0000 (0:00:01.573) 0:19:23.005 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Saturday 12 January 2019 00:56:27 +0000 (0:00:00.485) 0:19:23.491 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 12 January 2019 00:56:28 +0000 (0:00:01.778) 0:19:25.270 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Saturday 12 January 2019 00:56:29 +0000 (0:00:00.447) 0:19:25.717 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Saturday 12 January 2019 00:56:31 +0000 (0:00:01.830) 0:19:27.548 ****** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Saturday 12 January 2019 00:56:32 +0000 (0:00:01.594) 0:19:29.143 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Saturday 12 January 2019 00:56:33 +0000 (0:00:00.578) 0:19:29.722 ****** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Saturday 12 January 2019 00:57:10 +0000 (0:00:37.748) 0:20:07.470 ****** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Saturday 12 January 2019 00:57:11 +0000 (0:00:00.287) 0:20:07.758 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] ***************** Saturday 12 January 2019 00:57:11 +0000 (0:00:00.540) 0:20:08.299 ****** FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). ok: [kube1] => (item=/dev/vdc) FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (49 retries left). ok: [kube1] => (item=/dev/vdd) FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left). 
[...the same retry message repeats, counting down from 49 to 1 retries left...] failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.35.137:24007/v1/devices/3a016c39-e94a-4ad2-b8fc-44a4b0f2d503"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=115 unreachable=0 failed=1 kube2 : ok=319 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Saturday 12 January 2019 01:16:08 +0000 (0:18:56.657) 0:39:04.956 ****** =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube3 -------------- 1136.66s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.21s kubernetes/master : kubeadm | Initialize first master ------------------ 38.65s kubernetes/master : kubeadm | Init other uninitialized masters --------- 37.76s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 37.75s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.99s download : container_download | download images for kubeadm config images -- 32.96s Install packages ------------------------------------------------------- 30.53s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 29.51s Wait for host to be available ------------------------------------------ 21.04s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.96s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.68s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.09s gather facts from all instances ---------------------------------------- 13.34s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.18s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.74s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.21s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.40s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.23s container-engine/docker : Docker | pause while Docker restarts --------- 10.41s
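For reference: the "Status code was -1 and not [201]" message is Ansible's uri module reporting that its HTTP request to the glusterd2 REST API never received the expected 201 response before the connection timed out, on all 50 attempts. A minimal sketch of what such a device-add task in tasks/add-devices-to-peer.yml could look like follows; only the URL pattern, the expected 201 status, the 50-retry count, and the /dev/vdc../dev/vde items come from the log above, while the peer-id variable, the request body schema, and the delay are assumptions:

    # Sketch of a glusterd2 device-add task (hypothetical reconstruction).
    # gd2_client_endpoint is set by an earlier task in this play; peer_id,
    # the body schema, and the delay below are assumptions, not from the log.
    - name: GCS | GD2 Cluster | Add devices | Add devices for kube3
      uri:
        url: "http://{{ gd2_client_endpoint }}/v1/devices/{{ peer_id }}"
        method: POST
        body_format: json
        body:
          device: "{{ item }}"   # field name assumed
        status_code: 201
      register: result
      until: result.status == 201
      retries: 50
      delay: 10                  # assumed; not visible in the log
      with_items:
        - /dev/vdc
        - /dev/vdd
        - /dev/vde

Once every retry is exhausted the item is marked failed, the play aborts, and Ansible prints the --limit hint seen above so the run can be resumed from /root/gcs/deploy/vagrant-playbook.retry.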
==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0 From ci at centos.org Sun Jan 13 00:54:49 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 13 Jan 2019 00:54:49 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #47 In-Reply-To: <1471625432.5796.1547255768878.JavaMail.jenkins@jenkins.ci.centos.org> References: <1471625432.5796.1547255768878.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <292310180.5836.1547340889584.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Tue Jan 15 01:04:02 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 15 Jan 2019 01:04:02 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #49 Message-ID: <1506398169.5992.1547514242062.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 453.33 KB...] TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Set fact of download url GTX] *** Tuesday 15 January 2019 00:54:04 +0000 (0:00:00.182) 0:17:02.421 ******* TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Create addon dir] *** Tuesday 15 January 2019 00:54:04 +0000 (0:00:00.179) 0:17:02.601 ******* TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Create manifests for nvidia accelerators] *** Tuesday 15 January 2019 00:54:04 +0000 (0:00:00.200) 0:17:02.802 ******* TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Apply manifests for nvidia accelerators] *** Tuesday 15 January 2019 00:54:05 +0000 (0:00:00.213) 0:17:03.016 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_private_key] *** Tuesday 15 January 2019 00:54:05 +0000 (0:00:00.274) 0:17:03.290 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_region_id] *** Tuesday 15 January 2019 00:54:05 +0000 (0:00:00.201) 0:17:03.491 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_tenancy_id] *** Tuesday 15 January 2019 00:54:05 +0000 (0:00:00.253) 0:17:03.745 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_id] *** Tuesday 15 January 2019 00:54:06 +0000 (0:00:00.226) 0:17:03.972 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_fingerprint] *** Tuesday 15 January 2019 00:54:06 +0000 (0:00:00.199) 0:17:04.172 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check |
oci_compartment_id] *** Tuesday 15 January 2019 00:54:06 +0000 (0:00:00.174) 0:17:04.346 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_vnc_id] *** Tuesday 15 January 2019 00:54:06 +0000 (0:00:00.180) 0:17:04.526 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet1_id] *** Tuesday 15 January 2019 00:54:06 +0000 (0:00:00.177) 0:17:04.704 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet2_id] *** Tuesday 15 January 2019 00:54:07 +0000 (0:00:00.196) 0:17:04.900 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_security_list_management] *** Tuesday 15 January 2019 00:54:07 +0000 (0:00:00.272) 0:17:05.172 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Generate Configuration] *** Tuesday 15 January 2019 00:54:07 +0000 (0:00:00.210) 0:17:05.383 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Encode Configuration] *** Tuesday 15 January 2019 00:54:07 +0000 (0:00:00.189) 0:17:05.572 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration To Secret] *** Tuesday 15 January 2019 00:54:07 +0000 (0:00:00.203) 0:17:05.776 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration] *** Tuesday 15 January 2019 00:54:08 +0000 (0:00:00.211) 0:17:05.987 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Download Controller Manifest] *** Tuesday 15 January 2019 00:54:08 +0000 (0:00:00.209) 0:17:06.197 ******* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Controller Manifest] *** Tuesday 15 January 2019 00:54:08 +0000 (0:00:00.185) 0:17:06.382 ******* PLAY [Fetch config] ************************************************************ TASK [Retrieve kubectl config] ************************************************* Tuesday 15 January 2019 00:54:08 +0000 (0:00:00.187) 0:17:06.570 ******* changed: [kube1] PLAY [Deploy GCS] ************************************************************** TASK [GCS Pre | Cluster ID | Generate a UUID] ********************************** Tuesday 15 January 2019 00:54:09 +0000 (0:00:00.921) 0:17:07.492 ******* changed: [kube1] TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] *********************** Tuesday 15 January 2019 00:54:11 +0000 (0:00:01.994) 0:17:09.486 ******* ok: [kube1] TASK [GCS Pre | Manifests directory | Create a temporary directory] ************ Tuesday 15 January 2019 00:54:12 +0000 (0:00:00.456) 0:17:09.942 ******* changed: [kube1] TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ****************** Tuesday 15 January 2019 00:54:13 +0000 (0:00:01.318) 0:17:11.261 ******* ok: [kube1] TASK [GCS Pre | Manifests | Sync GCS manifests] ******************************** Tuesday 15 January 2019 00:54:13 +0000 (0:00:00.356) 0:17:11.618 ******* changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] 
=> (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Tuesday 15 January 2019 00:54:42 +0000 (0:00:28.787) 0:17:40.405 ******* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Tuesday 15 January 2019 00:54:42 +0000 (0:00:00.262) 0:17:40.668 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Tuesday 15 January 2019 00:54:43 +0000 (0:00:00.392) 0:17:41.061 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Tuesday 15 January 2019 00:54:45 +0000 (0:00:01.925) 0:17:42.986 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Tuesday 15 January 2019 00:54:45 +0000 (0:00:00.298) 0:17:43.284 ******* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Tuesday 15 January 2019 00:54:47 +0000 (0:00:02.076) 0:17:45.361 ******* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Tuesday 15 January 2019 00:54:47 +0000 (0:00:00.409) 0:17:45.770 ******* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Tuesday 15 January 2019 00:54:49 +0000 (0:00:01.994) 0:17:47.765 ******* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Tuesday 15 January 2019 00:54:51 +0000 (0:00:01.388) 0:17:49.154 ******* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Tuesday 15 January 2019 00:54:52 +0000 (0:00:01.715) 0:17:50.869 ******* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). [...the same retry message repeats, counting down from 49 to 1 retries left...] fatal: [kube1]: FAILED! => {"attempts": 50, "changed": true, "cmd": ["/usr/local/bin/kubectl", "-ngcs", "-ojsonpath={.status.availableReplicas}", "get", "deployment", "etcd-operator"], "delta": "0:00:00.263837", "end": "2019-01-15 01:04:01.653963", "rc": 0, "start": "2019-01-15 01:04:01.390126", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=395 changed=112 unreachable=0 failed=1 kube2 : ok=319 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Tuesday 15 January 2019 01:04:01 +0000 (0:09:08.706) 0:26:59.575 ******* =============================================================================== GCS | ETCD Operator | Wait for etcd-operator to be available ---------- 548.71s kubernetes/master : kubeadm | Initialize first master ------------------ 40.09s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.47s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.84s Install packages ------------------------------------------------------- 33.72s download : container_download | download images for kubeadm config images -- 31.78s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 28.79s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.12s Wait for host to be available ------------------------------------------ 20.87s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.01s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.46s gather facts from all instances ---------------------------------------- 12.90s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.87s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 12.51s etcd : reload etcd ----------------------------------------------------- 11.76s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.02s container-engine/docker : Docker | pause while Docker restarts --------- 10.40s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.37s etcd : wait for etcd up ------------------------------------------------- 9.96s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 8.88s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task...
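The fatal output above shows how this wait is implemented: a kubectl command run under Ansible's until/retries loop. Note that rc is 0 while stdout is empty, so the etcd-operator deployment existed but never reported an available replica, and all 50 attempts were consumed. A minimal sketch of such a task; the command line is taken verbatim from the failure, while the until expression and delay are assumptions:

    # Sketch of the etcd-operator wait task. The command comes from the fatal
    # output above; the until expression and delay are assumptions.
    - name: GCS | ETCD Operator | Wait for etcd-operator to be available
      command: /usr/local/bin/kubectl -ngcs -ojsonpath={.status.availableReplicas} get deployment etcd-operator
      register: result
      until: result.stdout | int > 0
      retries: 50
      delay: 10   # assumed; the 548.71s total in the recap fits a delay on the order of ten seconds

An empty stdout renders "stdout | int" as 0, so the condition stays false for as long as .status.availableReplicas is unset on the deployment.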
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0 From ci at centos.org Wed Jan 16 01:08:10 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 16 Jan 2019 01:08:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #50 In-Reply-To: <1506398169.5992.1547514242062.JavaMail.jenkins@jenkins.ci.centos.org> References: <1506398169.5992.1547514242062.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2035375309.6107.1547600890366.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 458.45 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 16 January 2019 00:55:58 +0000 (0:00:32.057) 0:17:51.816 ***** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 16 January 2019 00:55:58 +0000 (0:00:00.281) 0:17:52.098 ***** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 16 January 2019 00:55:59 +0000 (0:00:00.544) 0:17:52.642 ***** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 16 January 2019 00:56:01 +0000 (0:00:02.262) 0:17:54.904 ***** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 16 January 2019 00:56:02 +0000 (0:00:00.548) 0:17:55.453 ***** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 16 January 2019 00:56:04 +0000 (0:00:02.190) 0:17:57.643 ***** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 16 January 2019 00:56:05 +0000 (0:00:00.564) 0:17:58.208 ***** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 16 January 2019 00:56:07 +0000 (0:00:02.248) 0:18:00.456 ***** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 16 January 2019 00:56:08 +0000 (0:00:01.695) 0:18:02.152 ***** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be
available] ************ Wednesday 16 January 2019 00:56:10 +0000 (0:00:01.734) 0:18:03.887 ***** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Wednesday 16 January 2019 00:56:22 +0000 (0:00:12.223) 0:18:16.111 ***** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Wednesday 16 January 2019 00:56:24 +0000 (0:00:01.687) 0:18:17.798 ***** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Wednesday 16 January 2019 00:56:25 +0000 (0:00:01.323) 0:18:19.122 ***** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Wednesday 16 January 2019 00:56:27 +0000 (0:00:01.396) 0:18:20.518 ***** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Wednesday 16 January 2019 00:56:29 +0000 (0:00:01.905) 0:18:22.423 ***** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Wednesday 16 January 2019 00:56:31 +0000 (0:00:01.798) 0:18:24.222 ***** changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Wednesday 16 January 2019 00:56:32 +0000 (0:00:01.564) 0:18:25.787 ***** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Wednesday 16 January 2019 00:56:33 +0000 (0:00:00.478) 0:18:26.265 ***** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (42 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Wednesday 16 January 2019 00:58:20 +0000 (0:01:47.152) 0:20:13.418 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Wednesday 16 January 2019 00:58:22 +0000 (0:00:01.751) 0:20:15.170 ***** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 16 January 2019 00:58:22 +0000 (0:00:00.203) 0:20:15.373 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Wednesday 16 January 2019 00:58:22 +0000 (0:00:00.355) 0:20:15.729 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 16 January 2019 00:58:23 +0000 (0:00:01.414) 0:20:17.143 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 16 January 2019 00:58:24 +0000 (0:00:00.331) 0:20:17.475 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 16 January 2019 00:58:26 +0000 (0:00:02.565) 0:20:20.041 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 16 January 2019 00:58:27 +0000 (0:00:00.387) 0:20:20.428 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 16 January 2019 00:58:28 +0000 (0:00:01.668) 0:20:22.096 ***** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 16 January 2019 00:58:30 +0000 (0:00:01.504) 0:20:23.601 ***** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 16 January 2019 00:58:30 +0000 (0:00:00.354) 0:20:23.955 ***** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). [...the same retry message repeats, counting down from 49 to 1 retries left...] fatal: [kube1]: FAILED!
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.35.19:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=416 changed=115 unreachable=0 failed=1 kube2 : ok=319 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Wednesday 16 January 2019 01:08:09 +0000 (0:09:39.095) 0:30:03.051 ***** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 579.10s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 107.15s kubernetes/master : kubeadm | Initialize first master ------------------ 38.84s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.60s download : container_download | download images for kubeadm config images -- 34.02s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.19s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 32.06s Install packages ------------------------------------------------------- 30.59s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.32s Wait for host to be available ------------------------------------------ 20.97s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.85s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.37s gather facts from all instances ---------------------------------------- 13.85s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.38s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.51s download : file_download | Download item ------------------------------- 12.48s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.22s etcd : reload etcd ----------------------------------------------------- 12.04s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.82s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.02s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0 From ci at centos.org Thu Jan 17 01:02:26 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 17 Jan 2019 01:02:26 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #51 In-Reply-To: <2035375309.6107.1547600890366.JavaMail.jenkins@jenkins.ci.centos.org> References: <2035375309.6107.1547600890366.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <789682682.6183.1547686946878.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Sat Jan 19 02:00:51 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 19 Jan 2019 02:00:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #53 In-Reply-To: <1776331421.6244.1547771752930.JavaMail.jenkins@jenkins.ci.centos.org> References: <1776331421.6244.1547771752930.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <64476398.6323.1547863252102.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [nigelb] Add a debugging command [nigelb] Move ssh-keygen to shell [nigelb] Remove need to generate ssh-keys [nigelb] Update pip and setup tools before installing docx ------------------------------------------ [...truncated 464.52 KB...] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 19 January 2019 00:57:33 +0000 (0:00:01.739) 0:20:17.380 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Saturday 19 January 2019 00:57:34 +0000 (0:00:00.482) 0:20:17.862 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 19 January 2019 00:57:35 +0000 (0:00:01.716) 0:20:19.578 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Saturday 19 January 2019 00:57:36 +0000 (0:00:00.324) 0:20:19.903 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Saturday 19 January 2019 00:57:37 +0000 (0:00:01.596) 0:20:21.499 ****** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Saturday 19 January 2019 00:57:39 +0000 (0:00:01.345) 0:20:22.845 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Saturday 19 January 2019 00:57:39 +0000 (0:00:00.334) 0:20:23.180 ****** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Saturday 19 January 2019 00:58:29 +0000 (0:00:49.834) 0:21:13.014 ****** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Saturday 19 January 2019 00:58:29 +0000 (0:00:00.331) 0:21:13.346 ****** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] ***************** Saturday 19 January 2019 00:58:30 +0000 (0:00:00.458) 0:21:13.804 ****** FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). [...the same retry message repeats, counting down from 49 to 1 retries left...] failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.33.230:24007/v1/devices/5c4c37ad-eaa8-4511-b297-f4efeb958905"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). [...the same retry message repeats, counting down from 49 to 1 retries left...] failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.33.230:24007/v1/devices/5c4c37ad-eaa8-4511-b297-f4efeb958905"} FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). [...the same retry message repeats, counting down from 49 to 1 retries left...] failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.33.230:24007/v1/devices/5c4c37ad-eaa8-4511-b297-f4efeb958905"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=115 unreachable=0 failed=1 kube2 : ok=319 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Saturday 19 January 2019 02:00:51 +0000 (1:02:21.408) 1:23:35.213 ****** =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube2 -------------- 3741.41s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 107.53s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 49.83s kubernetes/master : kubeadm | Initialize first master ------------------ 39.85s kubernetes/master : kubeadm | Init other uninitialized masters --------- 37.61s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.67s download : container_download | download images for kubeadm config images -- 32.95s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 31.97s Install packages ------------------------------------------------------- 30.83s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 21.02s Wait for host to be available ------------------------------------------ 20.80s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.17s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.69s etcd : wait for etcd up ------------------------------------------------ 16.28s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.16s etcd : Gen_certs | Gather etcd master certs ----------------------------
12.92s gather facts from all instances ---------------------------------------- 12.61s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.28s etcd : reload etcd ----------------------------------------------------- 11.71s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.36s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Jan 20 02:12:07 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 20 Jan 2019 02:12:07 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #54 In-Reply-To: <64476398.6323.1547863252102.JavaMail.jenkins@jenkins.ci.centos.org> References: <64476398.6323.1547863252102.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1176591182.6347.1547950327343.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 466.49 KB...] FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Sunday 20 January 2019 00:51:25 +0000 (0:04:05.403) 0:15:55.823 ******** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Sunday 20 January 2019 00:51:25 +0000 (0:00:00.104) 0:15:55.928 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube2] ***************** Sunday 20 January 2019 00:51:25 +0000 (0:00:00.210) 0:15:56.139 ******** ok: [kube1] => (item=/dev/vdc) FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube2 (50 retries left). 
ok: [kube1] => (item=/dev/vdd)
ok: [kube1] => (item=/dev/vde)

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Sunday 20 January 2019 00:51:38 +0000 (0:00:12.915) 0:16:09.054 ********
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube3] *****************
Sunday 20 January 2019 00:51:39 +0000 (0:00:00.234) 0:16:09.289 ********
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
ok: [kube1] => (item=/dev/vdc)
ok: [kube1] => (item=/dev/vdd)
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube3 (50 retries left).
ok: [kube1] => (item=/dev/vde)

TASK [GCS | GD2 Cluster | Add devices | Set facts] *****************************
Sunday 20 January 2019 00:52:02 +0000 (0:00:23.350) 0:16:32.640 ********
ok: [kube1]

TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] *****************
Sunday 20 January 2019 00:52:02 +0000 (0:00:00.211) 0:16:32.852 ********
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left).
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.21.138:24007/v1/devices/86bcf921-89a7-403c-8c7f-a6a9556a63c6"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left).
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.21.138:24007/v1/devices/86bcf921-89a7-403c-8c7f-a6a9556a63c6"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left).
failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.21.138:24007/v1/devices/86bcf921-89a7-403c-8c7f-a6a9556a63c6"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=425 changed=115 unreachable=0 failed=1
kube2 : ok=319 changed=91 unreachable=0 failed=0
kube3 : ok=281 changed=77 unreachable=0 failed=0

Sunday 20 January 2019 02:12:07 +0000 (1:20:04.531) 1:36:37.383 ********
===============================================================================
GCS | GD2 Cluster | Add devices | Add devices for kube1 -------------- 4804.53s
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 245.40s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 65.41s
kubernetes/master : kubeadm | Initialize first master ------------------ 32.45s
download : container_download | download images for kubeadm config images -- 31.46s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.61s
Install packages ------------------------------------------------------- 23.74s
GCS | GD2 Cluster | Add devices | Add devices for kube3 ---------------- 23.35s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.24s
Wait for host to be available ------------------------------------------ 16.31s
Extend root VG --------------------------------------------------------- 15.70s
GCS | GD2 Cluster | Add devices | Add devices for kube2 ---------------- 12.92s
etcd : Gen_certs | Write etcd master certs ----------------------------- 12.62s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.62s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.42s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 12.21s
etcd : reload etcd ----------------------------------------------------- 10.89s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.88s
container-engine/docker : Docker | pause while Docker restarts --------- 10.16s
download : file_download | Download item -------------------------------- 9.08s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Mon Jan 21 01:02:39 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 21 Jan 2019 01:02:39 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #55
In-Reply-To: <1176591182.6347.1547950327343.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1176591182.6347.1547950327343.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1357734855.6371.1548032559167.JavaMail.jenkins@jenkins.ci.centos.org>

See

From ci at centos.org Tue Jan 22 03:03:26 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 22 Jan 2019 03:03:26 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #56
Message-ID: <13977523.6421.1548126206410.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 453.22 KB...]
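Every failure in these runs has the same shape: the GCS deploy playbook polls a REST endpoint with Ansible's retries/until loop, each "FAILED - RETRYING: ... (N retries left)." line is one unsuccessful poll, and when the attempts run out the task fails with the registered result ("attempts": 50, "status": -1). The sketch below shows roughly what such a task could look like for the device-add step; it is a reconstruction for illustration only, and the variable names, JSON payload field, and delay are assumptions rather than code taken from the GCS playbook:

    # Hypothetical sketch of the "Add devices" poll loop, not the actual GCS task.
    # gd2_client_endpoint and peer_id are assumed variable names.
    - name: GCS | GD2 Cluster | Add devices | Add devices for kube2
      uri:
        url: "{{ gd2_client_endpoint }}/v1/devices/{{ peer_id }}"
        method: POST
        body_format: json
        body:
          device: "{{ item }}"    # payload field name is an assumption
        status_code: 201          # matches "not [201]" in the failure message
      register: result
      until: result.status == 201
      retries: 50                 # matches "attempts": 50 in the failed result
      delay: 10                   # assumed poll interval
      with_items:
        - /dev/vdc
        - /dev/vdd
        - /dev/vde

A "status": -1 in the failed result means the HTTP connection to glusterd2 on port 24007 never completed (timed out or was refused), not that the server returned an error code. The "Wait for etcd-operator to be available" and "Wait for glusterd2-cluster to become ready" failures in the builds below are the same retries/until pattern wrapped around a kubectl command and a GET of /v1/peers, respectively.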
TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Set fact of download url GTX] ***
Tuesday 22 January 2019 02:53:31 +0000 (0:00:00.187) 0:17:13.397 *******

TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Create addon dir] ***
Tuesday 22 January 2019 02:53:32 +0000 (0:00:00.176) 0:17:13.573 *******

TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Create manifests for nvidia accelerators] ***
Tuesday 22 January 2019 02:53:32 +0000 (0:00:00.190) 0:17:13.764 *******

TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Apply manifests for nvidia accelerators] ***
Tuesday 22 January 2019 02:53:32 +0000 (0:00:00.209) 0:17:13.973 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_private_key] ***
Tuesday 22 January 2019 02:53:32 +0000 (0:00:00.239) 0:17:14.213 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_region_id] ***
Tuesday 22 January 2019 02:53:32 +0000 (0:00:00.212) 0:17:14.425 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_tenancy_id] ***
Tuesday 22 January 2019 02:53:33 +0000 (0:00:00.214) 0:17:14.640 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_id] ***
Tuesday 22 January 2019 02:53:33 +0000 (0:00:00.206) 0:17:14.846 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_fingerprint] ***
Tuesday 22 January 2019 02:53:33 +0000 (0:00:00.232) 0:17:15.078 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_compartment_id] ***
Tuesday 22 January 2019 02:53:33 +0000 (0:00:00.222) 0:17:15.302 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_vnc_id] ***
Tuesday 22 January 2019 02:53:33 +0000 (0:00:00.209) 0:17:15.511 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet1_id] ***
Tuesday 22 January 2019 02:53:34 +0000 (0:00:00.321) 0:17:15.833 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet2_id] ***
Tuesday 22 January 2019 02:53:34 +0000 (0:00:00.239) 0:17:16.072 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_security_list_management] ***
Tuesday 22 January 2019 02:53:34 +0000 (0:00:00.234) 0:17:16.307 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Generate Configuration] ***
Tuesday 22 January 2019 02:53:35 +0000 (0:00:00.256) 0:17:16.564 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Encode Configuration] ***
Tuesday 22 January 2019 02:53:35 +0000 (0:00:00.225) 0:17:16.789 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration To Secret] ***
Tuesday 22 January 2019 02:53:35 +0000 (0:00:00.186) 0:17:16.976 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration] ***
Tuesday 22 January 2019 02:53:35 +0000 (0:00:00.192) 0:17:17.168 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Download Controller Manifest] ***
Tuesday 22 January 2019 02:53:35 +0000 (0:00:00.273) 0:17:17.442 *******

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Controller Manifest] ***
Tuesday 22 January 2019 02:53:36 +0000 (0:00:00.239) 0:17:17.681 *******

PLAY [Fetch config] ************************************************************

TASK [Retrieve kubectl config] *************************************************
Tuesday 22 January 2019 02:53:36 +0000 (0:00:00.246) 0:17:17.928 *******
changed: [kube1]

PLAY [Deploy GCS] **************************************************************

TASK [GCS Pre | Cluster ID | Generate a UUID] **********************************
Tuesday 22 January 2019 02:53:37 +0000 (0:00:00.941) 0:17:18.870 *******
changed: [kube1]

TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] ***********************
Tuesday 22 January 2019 02:53:38 +0000 (0:00:00.876) 0:17:19.746 *******
ok: [kube1]

TASK [GCS Pre | Manifests directory | Create a temporary directory] ************
Tuesday 22 January 2019 02:53:38 +0000 (0:00:00.368) 0:17:20.115 *******
changed: [kube1]

TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ******************
Tuesday 22 January 2019 02:53:39 +0000 (0:00:01.188) 0:17:21.304 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Sync GCS manifests] ********************************
Tuesday 22 January 2019 02:53:40 +0000 (0:00:00.424) 0:17:21.728 *******
changed: [kube1] => (item=gcs-namespace.yml)
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Tuesday 22 January 2019 02:54:08 +0000 (0:00:28.720) 0:17:50.449 *******
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Tuesday 22 January 2019 02:54:09 +0000 (0:00:00.200) 0:17:50.650 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Tuesday 22 January 2019 02:54:09 +0000 (0:00:00.371) 0:17:51.021 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Tuesday 22 January 2019 02:54:11 +0000 (0:00:01.964) 0:17:52.987 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Tuesday 22 January 2019 02:54:11 +0000 (0:00:00.326) 0:17:53.313 *******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Tuesday 22 January 2019 02:54:13 +0000 (0:00:01.790) 0:17:55.104 *******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Tuesday 22 January 2019 02:54:13 +0000 (0:00:00.314) 0:17:55.418 *******
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Tuesday 22 January 2019 02:54:15 +0000 (0:00:01.348) 0:17:57.301 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Tuesday 22 January 2019 02:54:17 +0000 (0:00:01.680) 0:17:58.649 *******
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Tuesday 22 January 2019 02:54:18 +0000 (0:00:01.680) 0:18:00.329 *******
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (49 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (48 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (47 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (46 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (45 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (44 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (43 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (42 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (41 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (40 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (39 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (38 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (37 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (36 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (35 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (34 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (33 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (32 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (31 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (30 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (29 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (28 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (27 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (26 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (25 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (24 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (23 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (22 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (21 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (20 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (19 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (18 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (17 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (16 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (15 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (14 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (13 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (12 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (11 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (10 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (9 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (8 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (7 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (6 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (5 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (4 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (3 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (2 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (1 retries left).
fatal: [kube1]: FAILED!
=> {"attempts": 50, "changed": true, "cmd": ["/usr/local/bin/kubectl", "-ngcs", "-ojsonpath={.status.availableReplicas}", "get", "deployment", "etcd-operator"], "delta": "0:00:00.324363", "end": "2019-01-22 03:03:25.989204", "rc": 0, "start": "2019-01-22 03:03:25.664841", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=395 changed=112 unreachable=0 failed=1
kube2 : ok=319 changed=91 unreachable=0 failed=0
kube3 : ok=281 changed=77 unreachable=0 failed=0

Tuesday 22 January 2019 03:03:26 +0000 (0:09:07.262) 0:27:07.592 *******
===============================================================================
GCS | ETCD Operator | Wait for etcd-operator to be available ---------- 547.26s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.59s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.00s
download : container_download | download images for kubeadm config images -- 38.05s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.65s
Install packages ------------------------------------------------------- 31.06s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 28.72s
Wait for host to be available ------------------------------------------ 21.16s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.82s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 17.53s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.00s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 14.54s
gather facts from all instances ---------------------------------------- 12.72s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.50s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 12.23s
etcd : reload etcd ----------------------------------------------------- 11.51s
kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.08s
container-engine/docker : Docker | pause while Docker restarts --------- 10.44s
etcd : wait for etcd up ------------------------------------------------- 9.86s
download : file_download | Download item -------------------------------- 9.78s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Wed Jan 23 00:58:03 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 23 Jan 2019 00:58:03 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #57
In-Reply-To: <13977523.6421.1548126206410.JavaMail.jenkins@jenkins.ci.centos.org>
References: <13977523.6421.1548126206410.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1697374673.6480.1548205083303.JavaMail.jenkins@jenkins.ci.centos.org>

See

From ci at centos.org Thu Jan 24 00:56:18 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 24 Jan 2019 00:56:18 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #58
Message-ID: <482578306.6528.1548291378254.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 458.12 KB...]
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Thursday 24 January 2019 00:44:55 +0000 (0:00:10.880) 0:10:08.581 ******
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Thursday 24 January 2019 00:44:55 +0000 (0:00:00.092) 0:10:08.673 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Thursday 24 January 2019 00:44:55 +0000 (0:00:00.202) 0:10:08.876 ******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Thursday 24 January 2019 00:44:56 +0000 (0:00:00.789) 0:10:09.665 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Thursday 24 January 2019 00:44:56 +0000 (0:00:00.222) 0:10:09.888 ******
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Thursday 24 January 2019 00:44:57 +0000 (0:00:00.800) 0:10:10.688 ******
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Thursday 24 January 2019 00:44:57 +0000 (0:00:00.205) 0:10:10.893 ******
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Thursday 24 January 2019 00:44:58 +0000 (0:00:00.782) 0:10:11.676 ******
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Thursday 24 January 2019 00:44:58 +0000 (0:00:00.708) 0:10:12.384 ******
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Thursday 24 January 2019 00:44:59 +0000 (0:00:00.770) 0:10:13.155 ******
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (49 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (48 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (47 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (46 retries left).
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (45 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Thursday 24 January 2019 00:46:02 +0000 (0:01:02.670) 0:11:15.825 ******
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Thursday 24 January 2019 00:46:03 +0000 (0:00:00.710) 0:11:16.536 ******
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Thursday 24 January 2019 00:46:03 +0000 (0:00:00.536) 0:11:17.072 ******
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Thursday 24 January 2019 00:46:04 +0000 (0:00:00.483) 0:11:17.555 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Thursday 24 January 2019 00:46:04 +0000 (0:00:00.681) 0:11:18.237 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Thursday 24 January 2019 00:46:05 +0000 (0:00:00.878) 0:11:19.116 ******
FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left).
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Thursday 24 January 2019 00:46:11 +0000 (0:00:05.858) 0:11:24.974 ******
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Thursday 24 January 2019 00:46:11 +0000 (0:00:00.146) 0:11:25.120 ******
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2 services] *********************************
Thursday 24 January 2019 00:47:17 +0000 (0:01:06.275) 0:12:31.395 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy GD2] ******************************************
Thursday 24 January 2019 00:47:18 +0000 (0:00:00.746) 0:12:32.142 ******
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1
included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 24 January 2019 00:47:18 +0000 (0:00:00.109) 0:12:32.252 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] ***************************
Thursday 24 January 2019 00:47:18 +0000 (0:00:00.141) 0:12:32.394 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 24 January 2019 00:47:19 +0000 (0:00:00.655) 0:12:33.049 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] ***************************
Thursday 24 January 2019 00:47:19 +0000 (0:00:00.147) 0:12:33.196 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Set fact kube_hostname] ******************************
Thursday 24 January 2019 00:47:20 +0000 (0:00:00.757) 0:12:33.954 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] ***************************
Thursday 24 January 2019 00:47:20 +0000 (0:00:00.184) 0:12:34.138 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************
Thursday 24 January 2019 00:47:21 +0000 (0:00:00.760) 0:12:34.899 ******
changed: [kube1]

TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] *****************************
Thursday 24 January 2019 00:47:22 +0000 (0:00:00.527) 0:12:35.426 ******
ok: [kube1]

TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] **********
Thursday 24 January 2019 00:47:22 +0000 (0:00:00.214) 0:12:35.641 ******
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left).
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left).
fatal: [kube1]: FAILED!
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.3.172:24007/v1/peers"}
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1 : ok=415 changed=115 unreachable=0 failed=1
kube2 : ok=319 changed=91 unreachable=0 failed=0
kube3 : ok=282 changed=77 unreachable=0 failed=0

Thursday 24 January 2019 00:56:17 +0000 (0:08:55.646) 0:21:31.288 ******
===============================================================================
GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 535.65s
GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 66.28s
GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 62.67s
download : container_download | download images for kubeadm config images -- 31.95s
kubernetes/master : kubeadm | Initialize first master ------------------ 26.05s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.08s
Install packages ------------------------------------------------------- 24.79s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.03s
Wait for host to be available ------------------------------------------ 16.44s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.01s
etcd : Gen_certs | Write etcd master certs ----------------------------- 13.54s
Extend root VG --------------------------------------------------------- 13.39s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.34s
etcd : reload etcd ----------------------------------------------------- 11.06s
GCS Pre | Manifests | Sync GCS manifests ------------------------------- 10.88s
container-engine/docker : Docker | pause while Docker restarts --------- 10.25s
gather facts from all instances ----------------------------------------- 8.49s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 8.03s
download : file_download | Download item -------------------------------- 7.51s
etcd : wait for etcd up ------------------------------------------------- 7.46s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Fri Jan 25 01:05:05 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 25 Jan 2019 01:05:05 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #59
In-Reply-To: <482578306.6528.1548291378254.JavaMail.jenkins@jenkins.ci.centos.org>
References: <482578306.6528.1548291378254.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1750211969.6661.1548378305670.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 453.22 KB...]

TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Set fact of download url GTX] ***
Friday 25 January 2019 00:55:09 +0000 (0:00:00.173) 0:16:56.334 ********

TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Create addon dir] ***
Friday 25 January 2019 00:55:10 +0000 (0:00:00.205) 0:16:56.540 ********

TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Create manifests for nvidia accelerators] ***
Friday 25 January 2019 00:55:10 +0000 (0:00:00.210) 0:16:56.751 ********

TASK [kubernetes-apps/container_engine_accelerator/nvidia_gpu : Container Engine Acceleration Nvidia GPU | Apply manifests for nvidia accelerators] ***
Friday 25 January 2019 00:55:10 +0000 (0:00:00.284) 0:16:57.035 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_private_key] ***
Friday 25 January 2019 00:55:10 +0000 (0:00:00.234) 0:16:57.269 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_region_id] ***
Friday 25 January 2019 00:55:11 +0000 (0:00:00.213) 0:16:57.483 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_tenancy_id] ***
Friday 25 January 2019 00:55:11 +0000 (0:00:00.209) 0:16:57.693 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_id] ***
Friday 25 January 2019 00:55:11 +0000 (0:00:00.274) 0:16:57.967 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_fingerprint] ***
Friday 25 January 2019 00:55:11 +0000 (0:00:00.222) 0:16:58.190 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_compartment_id] ***
Friday 25 January 2019 00:55:11 +0000 (0:00:00.204) 0:16:58.395 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_vnc_id] ***
Friday 25 January 2019 00:55:12 +0000 (0:00:00.209) 0:16:58.604 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet1_id] ***
Friday 25 January 2019 00:55:12 +0000 (0:00:00.201) 0:16:58.805 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet2_id] ***
Friday 25 January 2019 00:55:12 +0000 (0:00:00.204) 0:16:59.010 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_security_list_management] ***
Friday 25 January 2019 00:55:12 +0000 (0:00:00.215) 0:16:59.226 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Generate Configuration] ***
Friday 25 January 2019 00:55:13 +0000 (0:00:00.248) 0:16:59.475 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Encode Configuration] ***
Friday 25 January 2019 00:55:13 +0000 (0:00:00.217) 0:16:59.693 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration To Secret] ***
Friday 25 January 2019 00:55:13 +0000 (0:00:00.219) 0:16:59.912 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration] ***
Friday 25 January 2019 00:55:13 +0000 (0:00:00.209) 0:17:00.122 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Download Controller Manifest] ***
Friday 25 January 2019 00:55:13 +0000 (0:00:00.228) 0:17:00.350 ********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Controller Manifest] ***
Friday 25 January 2019 00:55:14 +0000 (0:00:00.219) 0:17:00.570 ********

PLAY [Fetch config] ************************************************************

TASK [Retrieve kubectl config] *************************************************
Friday 25 January 2019 00:55:14 +0000 (0:00:00.190) 0:17:00.760 ********
changed: [kube1]

PLAY [Deploy GCS] **************************************************************

TASK [GCS Pre | Cluster ID | Generate a UUID] **********************************
Friday 25 January 2019 00:55:15 +0000 (0:00:00.917) 0:17:01.678 ********
changed: [kube1]

TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] ***********************
Friday 25 January 2019 00:55:16 +0000 (0:00:00.984) 0:17:02.662 ********
ok: [kube1]

TASK [GCS Pre | Manifests directory | Create a temporary directory] ************
Friday 25 January 2019 00:55:16 +0000 (0:00:00.389) 0:17:03.051 ********
changed: [kube1]

TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ******************
Friday 25 January 2019 00:55:17 +0000 (0:00:01.356) 0:17:04.408 ********
ok: [kube1]

TASK [GCS Pre | Manifests | Sync GCS manifests] ********************************
Friday 25 January 2019 00:55:18 +0000 (0:00:00.353) 0:17:04.761 ********
changed: [kube1] => (item=gcs-namespace.yml)
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Friday 25 January 2019 00:55:46 +0000 (0:00:28.326) 0:17:33.088 ********
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname]
*** Friday 25 January 2019 00:55:46 +0000 (0:00:00.234) 0:17:33.322 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Friday 25 January 2019 00:55:47 +0000 (0:00:00.405) 0:17:33.728 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Friday 25 January 2019 00:55:49 +0000 (0:00:02.013) 0:17:35.741 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Friday 25 January 2019 00:55:49 +0000 (0:00:00.398) 0:17:36.140 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Friday 25 January 2019 00:55:51 +0000 (0:00:01.878) 0:17:38.019 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Friday 25 January 2019 00:55:51 +0000 (0:00:00.300) 0:17:38.319 ******** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Friday 25 January 2019 00:55:53 +0000 (0:00:01.638) 0:17:39.958 ******** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Friday 25 January 2019 00:55:55 +0000 (0:00:01.589) 0:17:41.547 ******** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Friday 25 January 2019 00:55:56 +0000 (0:00:01.501) 0:17:43.049 ********
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
[... the identical retry message repeats while the counter counts down to 1 ...]
fatal: [kube1]: FAILED!
=> {"attempts": 50, "changed": true, "cmd": ["/usr/local/bin/kubectl", "-ngcs", "-ojsonpath={.status.availableReplicas}", "get", "deployment", "etcd-operator"], "delta": "0:00:00.351373", "end": "2019-01-25 01:05:05.225057", "rc": 0, "start": "2019-01-25 01:05:04.873684", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=395 changed=112 unreachable=0 failed=1 kube2 : ok=319 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Friday 25 January 2019 01:05:05 +0000 (0:09:08.630) 0:26:51.679 ******** =============================================================================== GCS | ETCD Operator | Wait for etcd-operator to be available ---------- 548.63s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.51s kubernetes/master : kubeadm | Initialize first master ------------------ 38.04s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.61s download : container_download | download images for kubeadm config images -- 31.37s Install packages ------------------------------------------------------- 29.46s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 28.33s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.68s Wait for host to be available ------------------------------------------ 20.52s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 17.49s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.78s gather facts from all instances ---------------------------------------- 13.85s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.03s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 12.42s etcd : reload etcd ----------------------------------------------------- 11.86s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.33s container-engine/docker : Docker | pause while Docker restarts --------- 10.38s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.36s download : file_download | Download item ------------------------------- 10.24s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.54s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Sat Jan 26 01:03:42 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 26 Jan 2019 01:03:42 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #60
In-Reply-To: <1750211969.6661.1548378305670.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1750211969.6661.1548378305670.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <277841079.6779.1548464622985.JavaMail.jenkins@jenkins.ci.centos.org>
See
From ci at centos.org Sun Jan 27 01:07:29 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 27 Jan 2019 01:07:29 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #61
Message-ID: <1073028422.6860.1548551249782.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 458.28 KB...]
changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Sunday 27 January 2019 00:55:19 +0000 (0:00:31.544) 0:17:41.871 ******** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Sunday 27 January 2019 00:55:19 +0000 (0:00:00.241) 0:17:42.112 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Sunday 27 January 2019 00:55:20 +0000 (0:00:00.549) 0:17:42.662 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Sunday 27 January 2019 00:55:22 +0000 (0:00:02.303) 0:17:44.965 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Sunday 27 January 2019 00:55:22 +0000 (0:00:00.528) 0:17:45.493 ******** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Sunday 27 January 2019 00:55:25 +0000 (0:00:02.175) 0:17:47.669 ******** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Sunday 27 January 2019 00:55:25 +0000 (0:00:00.539) 0:17:48.209 ******** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Sunday 27
January 2019 00:55:27 +0000 (0:00:01.997) 0:17:50.206 ******** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Sunday 27 January 2019 00:55:29 +0000 (0:00:01.660) 0:17:51.867 ******** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Sunday 27 January 2019 00:55:31 +0000 (0:00:01.748) 0:17:53.616 ******** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Sunday 27 January 2019 00:55:44 +0000 (0:00:13.319) 0:18:06.936 ******** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Sunday 27 January 2019 00:55:46 +0000 (0:00:01.750) 0:18:08.687 ******** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Sunday 27 January 2019 00:55:47 +0000 (0:00:01.412) 0:18:10.100 ******** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Sunday 27 January 2019 00:55:48 +0000 (0:00:01.246) 0:18:11.346 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Sunday 27 January 2019 00:55:50 +0000 (0:00:01.903) 0:18:13.249 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Sunday 27 January 2019 00:55:52 +0000 (0:00:02.104) 0:18:15.354 ******** changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Sunday 27 January 2019 00:55:53 +0000 (0:00:01.191) 0:18:16.545 ******** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Sunday 27 January 2019 00:55:54 +0000 (0:00:00.491) 0:18:17.037 ******** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (42 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Sunday 27 January 2019 00:57:41 +0000 (0:01:47.307) 0:20:04.345 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Sunday 27 January 2019 00:57:43 +0000 (0:00:01.822) 0:20:06.168 ******** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 27 January 2019 00:57:43 +0000 (0:00:00.192) 0:20:06.360 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Sunday 27 January 2019 00:57:44 +0000 (0:00:00.449) 0:20:06.810 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 27 January 2019 00:57:45 +0000 (0:00:01.654) 0:20:08.464 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Sunday 27 January 2019 00:57:46 +0000 (0:00:00.436) 0:20:08.900 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 27 January 2019 00:57:48 +0000 (0:00:01.778) 0:20:10.679 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Sunday 27 January 2019 00:57:48 +0000 (0:00:00.460) 0:20:11.139 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Sunday 27 January 2019 00:57:50 +0000 (0:00:01.668) 0:20:12.808 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Sunday 27 January 2019 00:57:51 +0000 (0:00:01.324) 0:20:14.133 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Sunday 27 January 2019 00:57:51 +0000 (0:00:00.361) 0:20:14.494 ********
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left).
[... the identical retry message repeats while the counter counts down to 1 ...]
fatal: [kube1]: FAILED!
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.62.71:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=416 changed=115 unreachable=0 failed=1 kube2 : ok=319 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Sunday 27 January 2019 01:07:29 +0000 (0:09:37.516) 0:29:52.011 ******** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.52s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 107.31s kubernetes/master : kubeadm | Initialize first master ------------------ 38.64s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.18s download : container_download | download images for kubeadm config images -- 33.53s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.18s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 31.54s Install packages ------------------------------------------------------- 29.15s Wait for host to be available ------------------------------------------ 21.05s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.54s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.95s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.31s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 13.32s gather facts from all instances ---------------------------------------- 13.21s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.04s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.62s etcd : reload etcd ----------------------------------------------------- 11.85s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.19s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.09s container-engine/docker : Docker | pause while Docker restarts --------- 10.39s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Mon Jan 28 02:01:49 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 28 Jan 2019 02:01:49 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #62
In-Reply-To: <1073028422.6860.1548551249782.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1073028422.6860.1548551249782.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1950280522.6928.1548640909402.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 464.51 KB...]
TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 28 January 2019 00:57:55 +0000 (0:00:01.508) 0:20:09.234 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Monday 28 January 2019 00:57:55 +0000 (0:00:00.325) 0:20:09.560 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 28 January 2019 00:57:57 +0000 (0:00:01.556) 0:20:11.116 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Monday 28 January 2019 00:57:57 +0000 (0:00:00.423) 0:20:11.539 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Monday 28 January 2019 00:57:59 +0000 (0:00:01.548) 0:20:13.088 ******** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Monday 28 January 2019 00:58:00 +0000 (0:00:01.188) 0:20:14.276 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Monday 28 January 2019 00:58:00 +0000 (0:00:00.337) 0:20:14.614 ******** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). ok: [kube1] TASK [GCS | GD2 Cluster | Add devices] ***************************************** Monday 28 January 2019 00:58:39 +0000 (0:00:39.019) 0:20:53.633 ******** included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 included: /root/gcs/deploy/tasks/add-devices-to-peer.yml for kube1 TASK [GCS | GD2 Cluster | Add devices | Set facts] ***************************** Monday 28 January 2019 00:58:40 +0000 (0:00:00.325) 0:20:53.958 ******** ok: [kube1] TASK [GCS | GD2 Cluster | Add devices | Add devices for kube1] ***************** Monday 28 January 2019 00:58:40 +0000 (0:00:00.403) 0:20:54.362 ******** FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
[... the identical 'Add devices for kube1' retry message repeats while the counter counts down to 1 ...]
failed: [kube1] (item=/dev/vdc) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdc", "msg": "Status code was -1 and not [201]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://10.233.53.93:24007/v1/devices/2954a4ba-d25c-4177-a5bb-3599a3264d2d"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
[... the retry message repeats while the counter counts down to 1 ...]
failed: [kube1] (item=/dev/vdd) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vdd", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.53.93:24007/v1/devices/2954a4ba-d25c-4177-a5bb-3599a3264d2d"}
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (50 retries left).
[... the retry message repeats while the counter counts down to 4 ...]
FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Add devices | Add devices for kube1 (1 retries left). failed: [kube1] (item=/dev/vde) => {"attempts": 50, "changed": false, "content": "", "disk": "/dev/vde", "msg": "Status code was -1 and not [201]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.53.93:24007/v1/devices/2954a4ba-d25c-4177-a5bb-3599a3264d2d"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=115 unreachable=0 failed=1 kube2 : ok=320 changed=91 unreachable=0 failed=0 kube3 : ok=281 changed=77 unreachable=0 failed=0 Monday 28 January 2019 02:01:48 +0000 (1:03:08.540) 1:24:02.903 ******** =============================================================================== GCS | GD2 Cluster | Add devices | Add devices for kube1 -------------- 3788.54s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------ 106.68s kubernetes/master : kubeadm | Initialize first master ------------------ 39.25s GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready --------- 39.02s kubernetes/master : kubeadm | Init other uninitialized masters --------- 37.57s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.29s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 33.18s download : container_download | download images for kubeadm config images -- 32.21s Install packages ------------------------------------------------------- 29.44s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.82s Wait for host to be available ------------------------------------------ 20.72s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.89s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.60s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.48s gather facts from all instances ---------------------------------------- 13.19s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.62s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.23s etcd : reload etcd ----------------------------------------------------- 11.92s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.04s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.73s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
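[Editor's note: the "Add devices" task POSTs each block device to the glusterd2 REST API and expects a 201; the peer URL is visible in the failures above. A hand-crafted request for one device is sketched below — the JSON body field name is an assumption inferred from the "disk" field in the error, not confirmed against the playbook source.]

    # try registering a single device with the peer directly (peer ID from the errors above);
    # the request body shape is an assumption -- verify against the GD2 API / playbook before relying on it
    curl -sv -X POST -H 'Content-Type: application/json' \
        -d '{"device": "/dev/vdc"}' \
        http://10.233.53.93:24007/v1/devices/2954a4ba-d25c-4177-a5bb-3599a3264d2d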
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Tue Jan 29 01:06:46 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 29 Jan 2019 01:06:46 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #63
In-Reply-To: <1950280522.6928.1548640909402.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1950280522.6928.1548640909402.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <801433504.7014.1548724006475.JavaMail.jenkins@jenkins.ci.centos.org>
See
From ci at centos.org Thu Jan 31 01:03:10 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 31 Jan 2019 01:03:10 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9289 - Failure! (release-3.12 on CentOS-6/x86_64)
Message-ID: <616492557.7255.1548896590652.JavaMail.jenkins@jenkins.ci.centos.org>
gluster_build-rpms - Build # 9289 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/9289/ to view the results.
From ci at centos.org Fri Jan 18 00:35:54 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 18 Jan 2019 00:35:54 -0000
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #52
Message-ID: <1776331421.6244.1547771752930.JavaMail.jenkins@jenkins.ci.centos.org>
See
Changes:
[nigelb] Add some debugging to the runs
------------------------------------------
[...truncated 242.13 KB...]
TASK [Extend the root LV and FS to occupy remaining space] ********************* Friday 18 January 2019 00:34:57 +0000 (0:00:17.691) 0:00:17.812 ******** kube1: kube1: Vagrant insecure key detected. Vagrant will automatically replace kube1: this with a newly generated keypair for better security. changed: [kube2] kube1: kube1: Inserting generated public key within guest... changed: [kube3] TASK [Load required kernel modules] ******************************************** Friday 18 January 2019 00:34:59 +0000 (0:00:01.111) 0:00:18.924 ******** kube1: Removing insecure key from the guest if it's present... ok: [kube2] => (item=dm_mirror) ok: [kube3] => (item=dm_mirror) changed: [kube2] => (item=dm_snapshot) changed: [kube3] => (item=dm_snapshot) kube1: Key inserted! Disconnecting and reconnecting using new SSH key... changed: [kube2] => (item=dm_thin_pool) changed: [kube3] => (item=dm_thin_pool) TASK [Persist loaded modules] ************************************************** Friday 18 January 2019 00:35:00 +0000 (0:00:01.068) 0:00:19.992 ******** ==> kube1: Setting hostname... ==> kube1: Forwarding ports... ==> kube1: 30600 (guest) => 9090 (host) (adapter eth0) ==> kube1: 30800 (guest) => 9000 (host) (adapter eth0) changed: [kube3] => (item=dm_mirror) ==> kube1: Configuring and enabling network interfaces...
changed: [kube2] => (item=dm_mirror) changed: [kube2] => (item=dm_snapshot) changed: [kube3] => (item=dm_snapshot) changed: [kube2] => (item=dm_thin_pool) changed: [kube3] => (item=dm_thin_pool) TASK [Install packages] ******************************************************** Friday 18 January 2019 00:35:02 +0000 (0:00:02.546) 0:00:22.539 ******** kube1: SSH address: 192.168.121.161:22 kube1: SSH username: vagrant kube1: SSH auth method: private key changed: [kube2] => (item=socat) changed: [kube3] => (item=socat) TASK [Reboot to make layered packages available] ******************************* Friday 18 January 2019 00:35:22 +0000 (0:00:19.634) 0:00:42.173 ******** changed: [kube2] changed: [kube3] TASK [Wait for host to be available] ******************************************* Friday 18 January 2019 00:35:23 +0000 (0:00:01.477) 0:00:43.650 ******** ok: [kube3] ok: [kube2] PLAY [localhost] *************************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: bastion PLAY [bastion[0]] ************************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: calico-rr PLAY [k8s-cluster:etcd:calico-rr] ********************************************** TASK [download : include_tasks] ************************************************ Friday 18 January 2019 00:35:39 +0000 (0:00:16.143) 0:00:59.794 ******** TASK [download : Download items] *********************************************** Friday 18 January 2019 00:35:39 +0000 (0:00:00.051) 0:00:59.845 ******** TASK [download : Sync container] *********************************************** Friday 18 January 2019 00:35:40 +0000 (0:00:00.173) 0:01:00.019 ******** TASK [download : include_tasks] ************************************************ Friday 18 January 2019 00:35:40 +0000 (0:00:00.158) 0:01:00.177 ******** TASK [kubespray-defaults : Configure defaults] ********************************* Friday 18 January 2019 00:35:40 +0000 (0:00:00.043) 0:01:00.220 ******** ok: [kube2] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } ok: [kube3] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [bootstrap-os : Fetch /etc/os-release] ************************************ Friday 18 January 2019 00:35:40 +0000 (0:00:00.288) 0:01:00.509 ******** ok: [kube2] ok: [kube3] TASK [bootstrap-os : include_tasks] ******************************************** Friday 18 January 2019 00:35:40 +0000 (0:00:00.364) 0:01:00.874 ******** TASK [bootstrap-os : include_tasks] ******************************************** Friday 18 January 2019 00:35:41 +0000 (0:00:00.047) 0:01:00.922 ******** TASK [bootstrap-os : include_tasks] ******************************************** Friday 18 January 2019 00:35:41 +0000 (0:00:00.047) 0:01:00.969 ******** TASK [bootstrap-os : include_tasks] ******************************************** Friday 18 January 2019 00:35:41 +0000 (0:00:00.044) 0:01:01.014 ******** TASK [bootstrap-os : include_tasks] ******************************************** Friday 18 January 2019 00:35:41 +0000 (0:00:00.044) 0:01:01.058 ******** included: /root/gcs/deploy/kubespray/roles/bootstrap-os/tasks/bootstrap-centos.yml for kube2, kube3 TASK [bootstrap-os : check if atomic host] ************************************* Friday 18 January 2019 00:35:41 +0000 (0:00:00.082) 0:01:01.140 ******** ok: [kube2] ok: [kube3] TASK [bootstrap-os : set_fact] ************************************************* 
Friday 18 January 2019 00:35:41 +0000 (0:00:00.720) 0:01:01.861 ******** ok: [kube2] ok: [kube3] TASK [bootstrap-os : Check presence of fastestmirror.conf] ********************* Friday 18 January 2019 00:35:42 +0000 (0:00:00.297) 0:01:02.159 ******** ok: [kube3] ok: [kube2] TASK [bootstrap-os : Disable fastestmirror plugin] ***************************** Friday 18 January 2019 00:35:43 +0000 (0:00:00.933) 0:01:03.092 ******** changed: [kube2] changed: [kube3] TASK [bootstrap-os : Add proxy to /etc/yum.conf if http_proxy is defined] ****** Friday 18 January 2019 00:35:43 +0000 (0:00:00.720) 0:01:03.812 ******** TASK [bootstrap-os : Install libselinux-python and yum-utils for bootstrap] **** Friday 18 January 2019 00:35:43 +0000 (0:00:00.041) 0:01:03.854 ******** TASK [bootstrap-os : Check python-pip package] ********************************* Friday 18 January 2019 00:35:44 +0000 (0:00:00.043) 0:01:03.898 ******** TASK [bootstrap-os : Install epel-release for bootstrap] *********************** Friday 18 January 2019 00:35:44 +0000 (0:00:00.040) 0:01:03.938 ******** TASK [bootstrap-os : Install pip for bootstrap] ******************************** Friday 18 January 2019 00:35:44 +0000 (0:00:00.043) 0:01:03.982 ******** TASK [bootstrap-os : include_tasks] ******************************************** Friday 18 January 2019 00:35:44 +0000 (0:00:00.041) 0:01:04.024 ******** TASK [bootstrap-os : include_tasks] ******************************************** Friday 18 January 2019 00:35:44 +0000 (0:00:00.046) 0:01:04.070 ******** TASK [bootstrap-os : Remove require tty] *************************************** Friday 18 January 2019 00:35:44 +0000 (0:00:00.044) 0:01:04.115 ******** ok: [kube2] ok: [kube3] TASK [bootstrap-os : Create remote_tmp for it is used by another module] ******* Friday 18 January 2019 00:35:44 +0000 (0:00:00.661) 0:01:04.777 ******** changed: [kube2] changed: [kube3] TASK [bootstrap-os : Gather nodes hostnames] *********************************** Friday 18 January 2019 00:35:45 +0000 (0:00:00.838) 0:01:05.616 ******** ok: [kube3] ok: [kube2] TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed)] *** Friday 18 January 2019 00:35:46 +0000 (0:00:01.178) 0:01:06.795 ******** ok: [kube3] ok: [kube2] TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (CoreOS and Tumbleweed only)] *** Friday 18 January 2019 00:35:47 +0000 (0:00:00.995) 0:01:07.791 ******** TASK [bootstrap-os : Update hostname fact (CoreOS and Tumbleweed only)] ******** Friday 18 January 2019 00:35:47 +0000 (0:00:00.048) 0:01:07.839 ******** PLAY [k8s-cluster:etcd:calico-rr] ********************************************** TASK [Gathering Facts] ********************************************************* Friday 18 January 2019 00:35:48 +0000 (0:00:00.051) 0:01:07.890 ******** ok: [kube2] ok: [kube3] TASK [gather facts from all instances] ***************************************** Friday 18 January 2019 00:35:48 +0000 (0:00:00.783) 0:01:08.674 ******** failed: [kube2] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". Make sure this host can be reached over ssh", "unreachable": true} failed: [kube3] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". 
Make sure this host can be reached over ssh", "unreachable": true} ok: [kube3 -> 192.168.121.160] => (item=kube2) ok: [kube2 -> 192.168.121.160] => (item=kube2) ok: [kube2 -> 192.168.121.203] => (item=kube3) ok: [kube3 -> 192.168.121.203] => (item=kube3) failed: [kube2] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". Make sure this host can be reached over ssh", "unreachable": true} failed: [kube3] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". Make sure this host can be reached over ssh", "unreachable": true} ok: [kube3 -> 192.168.121.160] => (item=kube2) ok: [kube2 -> 192.168.121.160] => (item=kube2) ok: [kube2 -> 192.168.121.203] => (item=kube3) fatal: [kube2]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.160"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.160", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fef6:968b"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771749", "hour": "00", "iso8601": "2019-01-18T00:35:49Z", "iso8601_basic": "20190118T003549539925", "iso8601_basic_short": "20190118T003549", "iso8601_micro": "2019-01-18T00:35:49.540015Z", "minute": "35", "month": "01", "second": "49", "time": "00:35:49", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.160", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:f6:96:8b", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, 
"model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": 
"off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.024229599257", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:29:59:92:57", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-kaoqdwtgoubawscqsectwiultstfuynp; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", 
"tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.160", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fef6:968b", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:f6:96:8b", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": 
"680eec9976e64bfe94f31df5ef510baf", "ansible_memfree_mb": 1460, "ansible_memory_mb": {"nocache": {"free": 1656, "used": 182}, "real": {"free": 1460, "total": 1838, "used": 378}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, 
"ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "680EEC99-76E6-4BFE-94F3-1DF5EF510BAF", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI52Tisx6DVjvFDr0Lt7aLUrkuEzWChdA5W8HW6r7ZusAPHyFZRmrpYUV0HfpW7bLEEjhs0WTqiUf4fn+6QZXb8=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIK7K8QQkTuId2LDF7xoKmEFCpWNZvOSRMDJUx0Jct86U", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDNK9pmjdnaTMV9VMvX+FrIxVsIagXla3+DomenPZvp3fF/ie1Q4JokIRe3fwrnOkS+PlsqtVnLH2hd7rA4/sIglOsBl6+TuYGTgnypn/maKXflcK4yjCVSKk3xqDAOYkBzUt89gewV+ndBrYBNmks1YEK2lg8gpe/T5Jeemf2M8IFpWKug3pN+lcxdG7Pg9fINjspCSnf+XwTmHpFBHmw1/MaJXJHS0oZDLce3UwOS5u0+Czey1SjqIqPKAy8fyrpihP0wN/OlmoE+AK0jxqNowY5mJ+GWVHaHY/c3xZSMVfdzq0fzOqp1z2wHDSmrgXHVxM2OqUNqccl3M1T/gvk/", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 21, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": "192.168.121.203"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.203", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe74:365e"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": 
"/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771750", "hour": "00", "iso8601": "2019-01-18T00:35:50Z", "iso8601_basic": "20190118T003550443362", "iso8601_basic_short": "20190118T003550", "iso8601_micro": "2019-01-18T00:35:50.443448Z", "minute": "35", "month": "01", "second": "50", "time": "00:35:50", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.203", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:74:36:5e", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, 
"partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242e93c6b0a", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:e9:3c:6b:0a", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": 
"/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-wpeijkwnhnfpjxaawpaythlykkbmogge; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.203", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe74:365e", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:74:36:5e", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off 
[fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "f528bcd4e97c45ad93000dd72e5496bd", "ansible_memfree_mb": 1453, "ansible_memory_mb": {"nocache": {"free": 1652, "used": 186}, "real": {"free": 1453, "total": 1838, "used": 385}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 
14034944, "inode_used": 30888, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "F528BCD4-E97C-45AD-9300-0DD72E5496BD", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFV/sZHod8yZ/kpe6nv4pnSe5AauqF7O+Jsf09LTc2le2wEcWLNKTrJiWGnPSomDpfkkvdO+h9qUueP6RymZC0E=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIFANf9gpQRhCnNOn0hWPcoAe8I6sofXjfYIiqjKV0J0X", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC7g2A+X3s8ChZvQg+/+2gATNFvrFp57odVMHyMdJhkrTjZrZJnv43NpQPZOHt04T/KjvSzSvSwSBhX9ypTV6ARU7p0GnXkT3anH5HUvDJOW3eerN/CF2YBme+8MDA02F5gKaIy+CguSn+37Fc0rXyJpAGssg7YlndwzBfqkphvWHta/liIpH4jNo0+wyldZvBqP/CZra8GpWL9a7evSBeNe3DrQIggZfObxbzDcd3gXpTxjrZLNc/Sgwe93Be7BoJp3p8iCBKEAY0/UhdShAyjzSSciul1LHTUAwpUOk22zKeHttXmb5x+50vyz1wL+KiKOsq/Ycp/5sVckAZ8eXAJ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", 
"cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 21, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}, {"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.160"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.160", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fef6:968b"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771751", "hour": "00", "iso8601": "2019-01-18T00:35:51Z", "iso8601_basic": "20190118T003551232018", "iso8601_basic_short": "20190118T003551", "iso8601_micro": "2019-01-18T00:35:51.232114Z", "minute": "35", "month": "01", "second": "51", "time": "00:35:51", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.160", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:f6:96:8b", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": 
["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", 
"generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.024229599257", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:29:59:92:57", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-dxzzbmllimkxpxrwcedjonhrtdizbmvz; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off 
[fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.160", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fef6:968b", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:f6:96:8b", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": 
"atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "680eec9976e64bfe94f31df5ef510baf", "ansible_memfree_mb": 1448, "ansible_memory_mb": {"nocache": {"free": 1648, "used": 190}, "real": {"free": 1448, "total": 1838, "used": 390}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": 
"atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "680EEC99-76E6-4BFE-94F3-1DF5EF510BAF", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI52Tisx6DVjvFDr0Lt7aLUrkuEzWChdA5W8HW6r7ZusAPHyFZRmrpYUV0HfpW7bLEEjhs0WTqiUf4fn+6QZXb8=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIK7K8QQkTuId2LDF7xoKmEFCpWNZvOSRMDJUx0Jct86U", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDNK9pmjdnaTMV9VMvX+FrIxVsIagXla3+DomenPZvp3fF/ie1Q4JokIRe3fwrnOkS+PlsqtVnLH2hd7rA4/sIglOsBl6+TuYGTgnypn/maKXflcK4yjCVSKk3xqDAOYkBzUt89gewV+ndBrYBNmks1YEK2lg8gpe/T5Jeemf2M8IFpWKug3pN+lcxdG7Pg9fINjspCSnf+XwTmHpFBHmw1/MaJXJHS0oZDLce3UwOS5u0+Czey1SjqIqPKAy8fyrpihP0wN/OlmoE+AK0jxqNowY5mJ+GWVHaHY/c3xZSMVfdzq0fzOqp1z2wHDSmrgXHVxM2OqUNqccl3M1T/gvk/", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 22, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": "192.168.121.203"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.203", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe74:365e"], 
"ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771752", "hour": "00", "iso8601": "2019-01-18T00:35:52Z", "iso8601_basic": "20190118T003552030568", "iso8601_basic_short": "20190118T003552", "iso8601_micro": "2019-01-18T00:35:52.030651Z", "minute": "35", "month": "01", "second": "52", "time": "00:35:52", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.203", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:74:36:5e", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", 
"support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242e93c6b0a", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:e9:3c:6b:0a", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, 
"ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-qtxbtbjdzkrrennhrolcxktrviouzqko; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.203", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe74:365e", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:74:36:5e", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on 
[fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "f528bcd4e97c45ad93000dd72e5496bd", "ansible_memfree_mb": 1452, "ansible_memory_mb": {"nocache": {"free": 1654, "used": 184}, "real": {"free": 1452, "total": 1838, "used": 386}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, 
"block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "F528BCD4-E97C-45AD-9300-0DD72E5496BD", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFV/sZHod8yZ/kpe6nv4pnSe5AauqF7O+Jsf09LTc2le2wEcWLNKTrJiWGnPSomDpfkkvdO+h9qUueP6RymZC0E=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIFANf9gpQRhCnNOn0hWPcoAe8I6sofXjfYIiqjKV0J0X", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC7g2A+X3s8ChZvQg+/+2gATNFvrFp57odVMHyMdJhkrTjZrZJnv43NpQPZOHt04T/KjvSzSvSwSBhX9ypTV6ARU7p0GnXkT3anH5HUvDJOW3eerN/CF2YBme+8MDA02F5gKaIy+CguSn+37Fc0rXyJpAGssg7YlndwzBfqkphvWHta/liIpH4jNo0+wyldZvBqP/CZra8GpWL9a7evSBeNe3DrQIggZfObxbzDcd3gXpTxjrZLNc/Sgwe93Be7BoJp3p8iCBKEAY0/UhdShAyjzSSciul1LHTUAwpUOk22zKeHttXmb5x+50vyz1wL+KiKOsq/Ycp/5sVckAZ8eXAJ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", 
"cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 23, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}]} ok: [kube3 -> 192.168.121.203] => (item=kube3) fatal: [kube3]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.160"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.160", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fef6:968b"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771749", "hour": "00", "iso8601": "2019-01-18T00:35:49Z", "iso8601_basic": "20190118T003549531704", "iso8601_basic_short": "20190118T003549", "iso8601_micro": "2019-01-18T00:35:49.531794Z", "minute": "35", "month": "01", "second": "49", "time": "00:35:49", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.160", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:f6:96:8b", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": 
["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", 
"ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.024229599257", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:29:59:92:57", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-gtnynsjualaeshlojukuitnbffrgzwcd; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": 
"off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.160", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fef6:968b", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:f6:96:8b", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": 
[{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "680eec9976e64bfe94f31df5ef510baf", "ansible_memfree_mb": 1460, "ansible_memory_mb": {"nocache": {"free": 1656, "used": 182}, "real": {"free": 1460, "total": 1838, "used": 378}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616123, "block_size": 4096, "block_total": 7014912, "block_used": 398789, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": 
"/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27099639808, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "680EEC99-76E6-4BFE-94F3-1DF5EF510BAF", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI52Tisx6DVjvFDr0Lt7aLUrkuEzWChdA5W8HW6r7ZusAPHyFZRmrpYUV0HfpW7bLEEjhs0WTqiUf4fn+6QZXb8=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIK7K8QQkTuId2LDF7xoKmEFCpWNZvOSRMDJUx0Jct86U", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDNK9pmjdnaTMV9VMvX+FrIxVsIagXla3+DomenPZvp3fF/ie1Q4JokIRe3fwrnOkS+PlsqtVnLH2hd7rA4/sIglOsBl6+TuYGTgnypn/maKXflcK4yjCVSKk3xqDAOYkBzUt89gewV+ndBrYBNmks1YEK2lg8gpe/T5Jeemf2M8IFpWKug3pN+lcxdG7Pg9fINjspCSnf+XwTmHpFBHmw1/MaJXJHS0oZDLce3UwOS5u0+Czey1SjqIqPKAy8fyrpihP0wN/OlmoE+AK0jxqNowY5mJ+GWVHaHY/c3xZSMVfdzq0fzOqp1z2wHDSmrgXHVxM2OqUNqccl3M1T/gvk/", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 21, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": "192.168.121.203"}, "_ansible_ignore_errors": null, "_ansible_item_label": 
"kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.203", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe74:365e"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771750", "hour": "00", "iso8601": "2019-01-18T00:35:50Z", "iso8601_basic": "20190118T003550243409", "iso8601_basic_short": "20190118T003550", "iso8601_micro": "2019-01-18T00:35:50.243512Z", "minute": "35", "month": "01", "second": "50", "time": "00:35:50", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.203", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:74:36:5e", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"], 
"labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242e93c6b0a", "interfaces": [], 
"ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:e9:3c:6b:0a", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-hkkfzoffzdbfpxmmbbcifyqtxzyfjcpg; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.203", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe74:365e", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:74:36:5e", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off 
[fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "f528bcd4e97c45ad93000dd72e5496bd", "ansible_memfree_mb": 1463, "ansible_memory_mb": {"nocache": {"free": 1661, "used": 177}, "real": {"free": 1463, "total": 1838, "used": 375}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, 
"inode_total": 14034944, "inode_used": 30888, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "F528BCD4-E97C-45AD-9300-0DD72E5496BD", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFV/sZHod8yZ/kpe6nv4pnSe5AauqF7O+Jsf09LTc2le2wEcWLNKTrJiWGnPSomDpfkkvdO+h9qUueP6RymZC0E=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIFANf9gpQRhCnNOn0hWPcoAe8I6sofXjfYIiqjKV0J0X", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQC7g2A+X3s8ChZvQg+/+2gATNFvrFp57odVMHyMdJhkrTjZrZJnv43NpQPZOHt04T/KjvSzSvSwSBhX9ypTV6ARU7p0GnXkT3anH5HUvDJOW3eerN/CF2YBme+8MDA02F5gKaIy+CguSn+37Fc0rXyJpAGssg7YlndwzBfqkphvWHta/liIpH4jNo0+wyldZvBqP/CZra8GpWL9a7evSBeNe3DrQIggZfObxbzDcd3gXpTxjrZLNc/Sgwe93Be7BoJp3p8iCBKEAY0/UhdShAyjzSSciul1LHTUAwpUOk22zKeHttXmb5x+50vyz1wL+KiKOsq/Ycp/5sVckAZ8eXAJ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 21, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}, {"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"kube1\". 
Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.160"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.160", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fef6:968b"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771751", "hour": "00", "iso8601": "2019-01-18T00:35:51Z", "iso8601_basic": "20190118T003551171784", "iso8601_basic_short": "20190118T003551", "iso8601_micro": "2019-01-18T00:35:51.171880Z", "minute": "35", "month": "01", "second": "51", "time": "00:35:51", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.160", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:f6:96:8b", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", 
"sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-S9cnqm-3dnp-3SRH-sFDV-t1tP-VANa-aAUUn7"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", 
"tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.024229599257", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:29:59:92:57", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-jndkpugpfvlrpugokmvfyursypqeieir; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.160", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fef6:968b", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:f6:96:8b", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", 
"ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "680eec9976e64bfe94f31df5ef510baf", "ansible_memfree_mb": 1454, "ansible_memory_mb": {"nocache": {"free": 1654, "used": 184}, "real": {"free": 1454, "total": 1838, "used": 384}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, 
"uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614067, "block_size": 4096, "block_total": 7014912, "block_used": 400845, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004055, "inode_total": 14034944, "inode_used": 30889, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091218432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "680EEC99-76E6-4BFE-94F3-1DF5EF510BAF", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI52Tisx6DVjvFDr0Lt7aLUrkuEzWChdA5W8HW6r7ZusAPHyFZRmrpYUV0HfpW7bLEEjhs0WTqiUf4fn+6QZXb8=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIK7K8QQkTuId2LDF7xoKmEFCpWNZvOSRMDJUx0Jct86U", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQDNK9pmjdnaTMV9VMvX+FrIxVsIagXla3+DomenPZvp3fF/ie1Q4JokIRe3fwrnOkS+PlsqtVnLH2hd7rA4/sIglOsBl6+TuYGTgnypn/maKXflcK4yjCVSKk3xqDAOYkBzUt89gewV+ndBrYBNmks1YEK2lg8gpe/T5Jeemf2M8IFpWKug3pN+lcxdG7Pg9fINjspCSnf+XwTmHpFBHmw1/MaJXJHS0oZDLce3UwOS5u0+Czey1SjqIqPKAy8fyrpihP0wN/OlmoE+AK0jxqNowY5mJ+GWVHaHY/c3xZSMVfdzq0fzOqp1z2wHDSmrgXHVxM2OqUNqccl3M1T/gvk/", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 22, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": "192.168.121.203"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.203", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe74:365e"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-01-18", "day": "18", "epoch": "1547771751", "hour": "00", "iso8601": "2019-01-18T00:35:51Z", "iso8601_basic": "20190118T003551823735", "iso8601_basic_short": "20190118T003551", "iso8601_micro": "2019-01-18T00:35:51.823822Z", "minute": "35", "month": "01", "second": "51", "time": "00:35:51", "tz": "UTC", "tz_offset": "+0000", "weekday": "Friday", "weekday_number": "5", "weeknumber": "02", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.203", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:74:36:5e", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": 
["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Vf1FGz-dqpf-mlQg-b787-j9XC-RadL-8diWKj"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": 
"RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242e93c6b0a", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:e9:3c:6b:0a", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-cggkkuakldgcybfrccakxqkssffbpdyu; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": 
"on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.203", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe74:365e", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:74:36:5e", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": 
{"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "f528bcd4e97c45ad93000dd72e5496bd", "ansible_memfree_mb": 1461, "ansible_memory_mb": {"nocache": {"free": 1663, "used": 175}, "real": {"free": 1461, "total": 1838, "used": 377}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614065, "block_size": 4096, "block_total": 7014912, "block_used": 400847, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", 
"inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27091210240, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "F528BCD4-E97C-45AD-9300-0DD72E5496BD", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFV/sZHod8yZ/kpe6nv4pnSe5AauqF7O+Jsf09LTc2le2wEcWLNKTrJiWGnPSomDpfkkvdO+h9qUueP6RymZC0E=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIFANf9gpQRhCnNOn0hWPcoAe8I6sofXjfYIiqjKV0J0X", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC7g2A+X3s8ChZvQg+/+2gATNFvrFp57odVMHyMdJhkrTjZrZJnv43NpQPZOHt04T/KjvSzSvSwSBhX9ypTV6ARU7p0GnXkT3anH5HUvDJOW3eerN/CF2YBme+8MDA02F5gKaIy+CguSn+37Fc0rXyJpAGssg7YlndwzBfqkphvWHta/liIpH4jNo0+wyldZvBqP/CZra8GpWL9a7evSBeNe3DrQIggZfObxbzDcd3gXpTxjrZLNc/Sgwe93Be7BoJp3p8iCBKEAY0/UhdShAyjzSSciul1LHTUAwpUOk22zKeHttXmb5x+50vyz1wL+KiKOsq/Ycp/5sVckAZ8eXAJ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 23, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}]} NO MORE HOSTS LEFT 
*************************************************************
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry

PLAY RECAP *********************************************************************
kube1                      : ok=0    changed=0    unreachable=1    failed=0
kube2                      : ok=19   changed=8    unreachable=1    failed=0
kube3                      : ok=19   changed=8    unreachable=1    failed=0

Friday 18 January 2019  00:35:52 +0000 (0:00:03.627)       0:01:12.301 ********
===============================================================================
Install packages ------------------------------------------------------- 19.63s
Extend root VG --------------------------------------------------------- 17.69s
Wait for host to be available ------------------------------------------ 16.14s
gather facts from all instances ----------------------------------------- 3.63s
Persist loaded modules -------------------------------------------------- 2.55s
Reboot to make layered packages available ------------------------------- 1.48s
bootstrap-os : Gather nodes hostnames ----------------------------------- 1.18s
Extend the root LV and FS to occupy remaining space --------------------- 1.11s
Load required kernel modules -------------------------------------------- 1.07s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.00s
bootstrap-os : Check presence of fastestmirror.conf --------------------- 0.93s
bootstrap-os : Create remote_tmp for it is used by another module ------- 0.84s
Gathering Facts --------------------------------------------------------- 0.78s
bootstrap-os : check if atomic host ------------------------------------- 0.72s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 0.72s
bootstrap-os : Remove require tty --------------------------------------- 0.66s
bootstrap-os : Fetch /etc/os-release ------------------------------------ 0.36s
bootstrap-os : set_fact ------------------------------------------------- 0.30s
kubespray-defaults : Configure defaults --------------------------------- 0.29s
download : Download items ----------------------------------------------- 0.17s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel. Any errors
that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please
handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible
above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
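For anyone triaging this failure: the recap shows every node ended the play
unreachable (kube1 without completing a single task), and the one explicit
error above is "SSH Error: data could not be sent to remote host", so the
proximate problem is SSH connectivity to the Vagrant guests rather than the
GCS manifests themselves. A minimal sketch of how the hosts could be probed
and the run resumed by hand, assuming the playbook is
/root/gcs/deploy/vagrant-playbook.yml with an inventory file alongside it
(both names are guesses inferred from the retry path above):

    # From the Jenkins workspace, confirm the guests are still up and SSH-able
    vagrant status
    vagrant ssh-config kube1

    # Ad-hoc Ansible ping of the failed host; -vvv surfaces the raw SSH error
    ansible -i inventory kube1 -m ping -vvv

    # Once SSH works again, re-run only the hosts recorded in the retry file
    ansible-playbook -i inventory vagrant-playbook.yml \
        --limit @/root/gcs/deploy/vagrant-playbook.retry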
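Note also that Jenkins skipped the post-build cleanup script (the "Logical
operation result is FALSE" branch above), so the CI nodes reserved for this
run may not have been returned to the pool. A hedged sketch of running the
same cleanup by hand, assuming the cico client (python-cicoclient) is
installed and $WORKSPACE/cico-ssid still contains the session IDs recorded
when the nodes were requested:

    # Release every node recorded in the SSID file; "cico node done" returns
    # the machines for the given session ID to the CentOS CI (Duffy) pool
    SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
    for ssid in $(cat "${SSID_FILE}")
    do
        cico -q node done "$ssid"
    done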