[CI-results] Build failed in Jenkins: gluster_anteater_gcs #120

ci at centos.org
Thu Mar 28 01:04:43 UTC 2019


See <https://ci.centos.org/job/gluster_anteater_gcs/120/display/redirect>

------------------------------------------
[...truncated 398.78 KB...]

TASK [network_plugin/contiv : Contiv | Copy the generated certificate on nodes] ***
Thursday 28 March 2019  00:54:03 +0000 (0:00:00.223)       0:14:22.920 ******** 

TASK [network_plugin/contiv : Contiv | Set cni directory permissions] **********
Thursday 28 March 2019  00:54:03 +0000 (0:00:00.353)       0:14:23.273 ******** 

TASK [network_plugin/contiv : Contiv | Copy cni plugins] ***********************
Thursday 28 March 2019  00:54:04 +0000 (0:00:00.339)       0:14:23.612 ******** 

TASK [network_plugin/contiv : Contiv | Copy netctl binary from docker container] ***
Thursday 28 March 2019  00:54:04 +0000 (0:00:00.322)       0:14:23.935 ******** 

TASK [network_plugin/kube-router : kube-router | Add annotations on kube-master] ***
Thursday 28 March 2019  00:54:04 +0000 (0:00:00.346)       0:14:24.281 ******** 

TASK [network_plugin/kube-router : kube-router | Add annotations on kube-node] ***
Thursday 28 March 2019  00:54:05 +0000 (0:00:00.330)       0:14:24.611 ******** 

TASK [network_plugin/kube-router : kube-router | Add common annotations on all servers] ***
Thursday 28 March 2019  00:54:05 +0000 (0:00:00.275)       0:14:24.887 ******** 

TASK [network_plugin/kube-router : kube-router | Set cni directory permissions] ***
Thursday 28 March 2019  00:54:05 +0000 (0:00:00.260)       0:14:25.148 ******** 

TASK [network_plugin/kube-router : kube-router | Copy cni plugins] *************
Thursday 28 March 2019  00:54:05 +0000 (0:00:00.273)       0:14:25.422 ******** 

TASK [network_plugin/kube-router : kube-router | Create manifest] **************
Thursday 28 March 2019  00:54:06 +0000 (0:00:00.324)       0:14:25.747 ******** 

TASK [network_plugin/cloud : Cloud | Set cni directory permissions] ************
Thursday 28 March 2019  00:54:06 +0000 (0:00:00.285)       0:14:26.032 ******** 

TASK [network_plugin/cloud : Cloud | Copy cni plugins] *************************
Thursday 28 March 2019  00:54:06 +0000 (0:00:00.248)       0:14:26.282 ******** 

TASK [network_plugin/multus : Multus | Copy manifest files] ********************
Thursday 28 March 2019  00:54:06 +0000 (0:00:00.261)       0:14:26.543 ******** 

TASK [network_plugin/multus : Multus | Copy manifest templates] ****************
Thursday 28 March 2019  00:54:07 +0000 (0:00:00.442)       0:14:26.985 ******** 

RUNNING HANDLER [kubernetes/kubeadm : restart kubelet] *************************
Thursday 28 March 2019  00:54:07 +0000 (0:00:00.213)       0:14:27.199 ******** 
changed: [kube3]

PLAY [kube-master[0]] **********************************************************

TASK [download : include_tasks] ************************************************
Thursday 28 March 2019  00:54:08 +0000 (0:00:01.318)       0:14:28.517 ******** 

TASK [download : Download items] ***********************************************
Thursday 28 March 2019  00:54:09 +0000 (0:00:00.164)       0:14:28.682 ******** 

TASK [download : Sync container] ***********************************************
Thursday 28 March 2019  00:54:10 +0000 (0:00:01.637)       0:14:30.320 ******** 

TASK [download : include_tasks] ************************************************
Thursday 28 March 2019  00:54:12 +0000 (0:00:01.600)       0:14:31.920 ******** 

TASK [kubespray-defaults : Configure defaults] *********************************
Thursday 28 March 2019  00:54:12 +0000 (0:00:00.161)       0:14:32.082 ******** 
ok: [kube1] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token name] ***
Thursday 28 March 2019  00:54:12 +0000 (0:00:00.484)       0:14:32.567 ******** 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token data] ***
Thursday 28 March 2019  00:54:14 +0000 (0:00:01.573)       0:14:34.140 ******** 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Test if default certificate is expired] ***
Thursday 28 March 2019  00:54:15 +0000 (0:00:01.288)       0:14:35.429 ******** 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Determine if certificate is expired] ***
Thursday 28 March 2019  00:54:17 +0000 (0:00:01.910)       0:14:37.340 ******** 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get all serviceaccount tokens to expire] ***
Thursday 28 March 2019  00:54:18 +0000 (0:00:00.515)       0:14:37.855 ******** 

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete expired tokens] ***
Thursday 28 March 2019  00:54:18 +0000 (0:00:00.155)       0:14:38.011 ******** 

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete pods in system namespace] ***
Thursday 28 March 2019  00:54:18 +0000 (0:00:00.136)       0:14:38.147 ******** 

TASK [win_nodes/kubernetes_patch : Ensure that user manifests directory exists] ***
Thursday 28 March 2019  00:54:18 +0000 (0:00:00.166)       0:14:38.314 ******** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset hostnameOverride patch] ***
Thursday 28 March 2019  00:54:19 +0000 (0:00:01.009)       0:14:39.324 ******** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Check current command for kube-proxy daemonset] ***
Thursday 28 March 2019  00:54:21 +0000 (0:00:02.191)       0:14:41.515 ******** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Apply hostnameOverride patch for kube-proxy daemonset] ***
Thursday 28 March 2019  00:54:23 +0000 (0:00:01.424)       0:14:42.940 ******** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : debug] **************************************
Thursday 28 March 2019  00:54:24 +0000 (0:00:01.506)       0:14:44.446 ******** 
ok: [kube1] => {
    "msg": [
        "daemonset.extensions/kube-proxy patched"
    ]
}

TASK [win_nodes/kubernetes_patch : debug] **************************************
Thursday 28 March 2019  00:54:25 +0000 (0:00:00.469)       0:14:44.916 ******** 
ok: [kube1] => {
    "msg": []
}

TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset nodeselector patch] ***
Thursday 28 March 2019  00:54:25 +0000 (0:00:00.539)       0:14:45.455 ******** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] ***
Thursday 28 March 2019  00:54:28 +0000 (0:00:02.330)       0:14:47.786 ******** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Apply nodeselector patch for kube-proxy daemonset] ***
Thursday 28 March 2019  00:54:29 +0000 (0:00:01.352)       0:14:49.138 ******** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : debug] **************************************
Thursday 28 March 2019  00:54:31 +0000 (0:00:01.484)       0:14:50.623 ******** 
ok: [kube1] => {
    "msg": [
        "daemonset.extensions/kube-proxy patched"
    ]
}

TASK [win_nodes/kubernetes_patch : debug] **************************************
Thursday 28 March 2019  00:54:31 +0000 (0:00:00.507)       0:14:51.131 ******** 
ok: [kube1] => {
    "msg": []
}

PLAY [kube-master] *************************************************************

TASK [download : include_tasks] ************************************************
Thursday 28 March 2019  00:54:32 +0000 (0:00:00.668)       0:14:51.799 ******** 

TASK [download : Download items] ***********************************************
Thursday 28 March 2019  00:54:32 +0000 (0:00:00.198)       0:14:51.997 ******** 

TASK [download : Sync container] ***********************************************
Thursday 28 March 2019  00:54:34 +0000 (0:00:01.779)       0:14:53.777 ******** 

TASK [download : include_tasks] ************************************************
Thursday 28 March 2019  00:54:36 +0000 (0:00:01.843)       0:14:55.620 ******** 

TASK [kubespray-defaults : Configure defaults] *********************************
Thursday 28 March 2019  00:54:36 +0000 (0:00:00.226)       0:14:55.847 ******** 
ok: [kube1] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
ok: [kube2] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}

TASK [kubernetes-apps/network_plugin/cilium : Cilium | Start Resources] ********
Thursday 28 March 2019  00:54:36 +0000 (0:00:00.455)       0:14:56.303 ******** 

TASK [kubernetes-apps/network_plugin/cilium : Cilium | Wait for pods to run] ***
Thursday 28 March 2019  00:54:37 +0000 (0:00:00.381)       0:14:56.684 ******** 

TASK [kubernetes-apps/network_plugin/calico : Start Calico resources] **********
Thursday 28 March 2019  00:54:37 +0000 (0:00:00.210)       0:14:56.894 ******** 

TASK [kubernetes-apps/network_plugin/calico : calico upgrade complete] *********
Thursday 28 March 2019  00:54:37 +0000 (0:00:00.257)       0:14:57.151 ******** 

TASK [kubernetes-apps/network_plugin/canal : Canal | Start Resources] **********
Thursday 28 March 2019  00:54:37 +0000 (0:00:00.283)       0:14:57.434 ******** 

TASK [kubernetes-apps/network_plugin/flannel : Flannel | Start Resources] ******
Thursday 28 March 2019  00:54:38 +0000 (0:00:00.403)       0:14:57.838 ******** 
ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'973704ff91b4c9341dccaf1da6003177', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 836, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734431.65-78430218277608/source', u'group': u'root', '_ansible_item_label': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, 'item': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', u'selevel': None, u'_original_basename': u'cni-flannel-rbac.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734431.65-78430218277608/source', u'group': None, u'unsafe_writes': None, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None})
ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'51829ca2a2d540389c94291f63118112', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 3198, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734433.4-157087191596506/source', u'group': u'root', '_ansible_item_label': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, 'item': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel.yml', u'selevel': None, u'_original_basename': u'cni-flannel.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1553734433.4-157087191596506/source', u'group': None, u'unsafe_writes': None, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None})

TASK [kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence] ***
Thursday 28 March 2019  00:54:41 +0000 (0:00:03.164)       0:15:01.003 ******** 
ok: [kube1]
fatal: [kube2]: FAILED! => {"changed": false, "elapsed": 600, "msg": "Timeout when waiting for file /run/flannel/subnet.env"}
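
This is the actual failure: flannel never wrote /run/flannel/subnet.env on kube2, and the playbook gave up after its 600-second wait (the "elapsed": 600 here matches the 601.83s entry in the timing summary below). A minimal triage sketch, assuming SSH access to the nodes and a configured kubectl on kube1; the flannel pod name is illustrative, not taken from this log:

    # on kube2: is the subnet file there at all?
    ls -l /run/flannel/subnet.env

    # on kube1: did the kube-flannel pod land on kube2, and is it Running?
    kubectl -n kube-system get pods -o wide | grep flannel
    kubectl -n kube-system describe pod kube-flannel-xxxxx   # pod name is illustrative
    kubectl -n kube-system logs kube-flannel-xxxxx

    # on kube2: kubelet logs often show why the CNI pod never started
    journalctl -u kubelet --no-pager | tail -50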

NO MORE HOSTS LEFT *************************************************************
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry
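
The retry hint above can be fed straight back to ansible-playbook to resume from the failed hosts rather than redeploying from scratch. A sketch, assuming the playbook sits next to its retry file; the playbook and inventory paths are assumptions inferred from the retry path, not taken from this log:

    # only the retry-file path comes from the log above
    ansible-playbook -i <inventory> /root/gcs/deploy/vagrant-playbook.yml \
        --limit @/root/gcs/deploy/vagrant-playbook.retry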

PLAY RECAP *********************************************************************
kube1                      : ok=364  changed=103  unreachable=0    failed=0   
kube2                      : ok=315  changed=91   unreachable=0    failed=1   
kube3                      : ok=282  changed=78   unreachable=0    failed=0   

Thursday 28 March 2019  01:04:43 +0000 (0:10:01.825)       0:25:02.829 ******** 
=============================================================================== 
kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence - 601.83s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.63s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.78s
download : container_download | download images for kubeadm config images -- 35.03s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.62s
Install packages ------------------------------------------------------- 31.93s
Wait for host to be available ------------------------------------------ 20.97s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.25s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.71s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.16s
gather facts from all instances ---------------------------------------- 13.11s
etcd : reload etcd ----------------------------------------------------- 11.90s
container-engine/docker : Docker | pause while Docker restarts --------- 10.39s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.96s
download : file_download | Download item -------------------------------- 9.40s
kubernetes/master : slurp kubeadm certs --------------------------------- 8.18s
etcd : wait for etcd up ------------------------------------------------- 8.12s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 8.01s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 6.80s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 6.01s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK 	: 0

