[CI-results] Build failed in Jenkins: gluster_anteater_gcs #166

ci at centos.org
Mon May 13 01:02:29 UTC 2019


See <https://ci.centos.org/job/gluster_anteater_gcs/166/display/redirect>

------------------------------------------
[...truncated 398.67 KB...]

TASK [network_plugin/contiv : Contiv | Copy the generated certificate on nodes] ***
Monday 13 May 2019  01:51:50 +0100 (0:00:00.174)       0:14:04.978 ************ 

TASK [network_plugin/contiv : Contiv | Set cni directory permissions] **********
Monday 13 May 2019  01:51:50 +0100 (0:00:00.388)       0:14:05.367 ************ 

TASK [network_plugin/contiv : Contiv | Copy cni plugins] ***********************
Monday 13 May 2019  01:51:51 +0100 (0:00:00.306)       0:14:05.673 ************ 

TASK [network_plugin/contiv : Contiv | Copy netctl binary from docker container] ***
Monday 13 May 2019  01:51:51 +0100 (0:00:00.264)       0:14:05.937 ************ 

TASK [network_plugin/kube-router : kube-router | Add annotations on kube-master] ***
Monday 13 May 2019  01:51:51 +0100 (0:00:00.265)       0:14:06.202 ************ 

TASK [network_plugin/kube-router : kube-router | Add annotations on kube-node] ***
Monday 13 May 2019  01:51:52 +0100 (0:00:00.293)       0:14:06.496 ************ 

TASK [network_plugin/kube-router : kube-router | Add common annotations on all servers] ***
Monday 13 May 2019  01:51:52 +0100 (0:00:00.260)       0:14:06.757 ************ 

TASK [network_plugin/kube-router : kube-router | Set cni directory permissions] ***
Monday 13 May 2019  01:51:52 +0100 (0:00:00.376)       0:14:07.133 ************ 

TASK [network_plugin/kube-router : kube-router | Copy cni plugins] *************
Monday 13 May 2019  01:51:52 +0100 (0:00:00.301)       0:14:07.435 ************ 

TASK [network_plugin/kube-router : kube-router | Create manifest] **************
Monday 13 May 2019  01:51:53 +0100 (0:00:00.292)       0:14:07.728 ************ 

TASK [network_plugin/cloud : Cloud | Set cni directory permissions] ************
Monday 13 May 2019  01:51:53 +0100 (0:00:00.310)       0:14:08.039 ************ 

TASK [network_plugin/cloud : Canal | Copy cni plugins] *************************
Monday 13 May 2019  01:51:53 +0100 (0:00:00.287)       0:14:08.326 ************ 

TASK [network_plugin/multus : Multus | Copy manifest files] ********************
Monday 13 May 2019  01:51:54 +0100 (0:00:00.267)       0:14:08.594 ************ 

TASK [network_plugin/multus : Multus | Copy manifest templates] ****************
Monday 13 May 2019  01:51:54 +0100 (0:00:00.357)       0:14:08.952 ************ 

RUNNING HANDLER [kubernetes/kubeadm : restart kubelet] *************************
Monday 13 May 2019  01:51:54 +0100 (0:00:00.234)       0:14:09.186 ************ 
changed: [kube3]

PLAY [kube-master[0]] **********************************************************

TASK [download : include_tasks] ************************************************
Monday 13 May 2019  01:51:56 +0100 (0:00:01.505)       0:14:10.692 ************ 

TASK [download : Download items] ***********************************************
Monday 13 May 2019  01:51:56 +0100 (0:00:00.154)       0:14:10.847 ************ 

TASK [download : Sync container] ***********************************************
Monday 13 May 2019  01:51:58 +0100 (0:00:01.686)       0:14:12.533 ************ 

TASK [download : include_tasks] ************************************************
Monday 13 May 2019  01:51:59 +0100 (0:00:01.543)       0:14:14.077 ************ 

TASK [kubespray-defaults : Configure defaults] *********************************
Monday 13 May 2019  01:51:59 +0100 (0:00:00.185)       0:14:14.262 ************ 
ok: [kube1] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token name] ***
Monday 13 May 2019  01:52:00 +0100 (0:00:00.550)       0:14:14.812 ************ 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token data] ***
Monday 13 May 2019  01:52:01 +0100 (0:00:01.200)       0:14:16.012 ************ 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Test if default certificate is expired] ***
Monday 13 May 2019  01:52:02 +0100 (0:00:01.269)       0:14:17.282 ************ 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Determine if certificate is expired] ***
Monday 13 May 2019  01:52:04 +0100 (0:00:01.804)       0:14:19.087 ************ 
ok: [kube1]

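The expiry test above can be reproduced by hand with openssl; a minimal
sketch, assuming the kubeadm default certificate path (the role may inspect
a different file):

    # Print the expiry date; -checkend 0 exits non-zero once the certificate has expired
    openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
    openssl x509 -noout -checkend 0 -in /etc/kubernetes/pki/apiserver.crt \
        && echo "certificate still valid" || echo "certificate expired"
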
TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get all serviceaccount tokens to expire] ***
Monday 13 May 2019  01:52:05 +0100 (0:00:00.464)       0:14:19.551 ************ 

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete expired tokens] ***
Monday 13 May 2019  01:52:05 +0100 (0:00:00.131)       0:14:19.682 ************ 

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete pods in system namespace] ***
Monday 13 May 2019  01:52:05 +0100 (0:00:00.143)       0:14:19.826 ************ 

TASK [win_nodes/kubernetes_patch : Ensure that user manifests directory exists] ***
Monday 13 May 2019  01:52:05 +0100 (0:00:00.150)       0:14:19.977 ************ 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset hostnameOverride patch] ***
Monday 13 May 2019  01:52:06 +0100 (0:00:00.982)       0:14:20.960 ************ 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Check current command for kube-proxy daemonset] ***
Monday 13 May 2019  01:52:08 +0100 (0:00:02.258)       0:14:23.219 ************ 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Apply hostnameOverride patch for kube-proxy daemonset] ***
Monday 13 May 2019  01:52:10 +0100 (0:00:01.535)       0:14:24.754 ************ 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : debug] **************************************
Monday 13 May 2019  01:52:11 +0100 (0:00:01.381)       0:14:26.136 ************ 
ok: [kube1] => {
    "msg": [
        "daemonset.extensions/kube-proxy patched"
    ]
}

TASK [win_nodes/kubernetes_patch : debug] **************************************
Monday 13 May 2019  01:52:12 +0100 (0:00:00.523)       0:14:26.659 ************ 
ok: [kube1] => {
    "msg": []
}

TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset nodeselector patch] ***
Monday 13 May 2019  01:52:12 +0100 (0:00:00.359)       0:14:27.019 ************ 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] ***
Monday 13 May 2019  01:52:14 +0100 (0:00:02.099)       0:14:29.119 ************ 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Apply nodeselector patch for kube-proxy daemonset] ***
Monday 13 May 2019  01:52:15 +0100 (0:00:01.272)       0:14:30.391 ************ 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : debug] **************************************
Monday 13 May 2019  01:52:17 +0100 (0:00:01.427)       0:14:31.819 ************ 
ok: [kube1] => {
    "msg": [
        "daemonset.extensions/kube-proxy patched"
    ]
}

TASK [win_nodes/kubernetes_patch : debug] **************************************
Monday 13 May 2019  01:52:17 +0100 (0:00:00.386)       0:14:32.206 ************ 
ok: [kube1] => {
    "msg": []
}
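
The two "daemonset.extensions/kube-proxy patched" messages above come from
patching the kube-proxy DaemonSet; roughly the equivalent manual commands
would look like the sketch below (the patch file names are placeholders,
not the paths used by this job):

    # Apply the hostnameOverride and nodeSelector patches to kube-proxy
    kubectl -n kube-system patch daemonset kube-proxy \
        --patch "$(cat kube-proxy-hostname-override.json)"
    kubectl -n kube-system patch daemonset kube-proxy \
        --patch "$(cat kube-proxy-nodeselector.json)"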

PLAY [kube-master] *************************************************************

TASK [download : include_tasks] ************************************************
Monday 13 May 2019  01:52:18 +0100 (0:00:00.543)       0:14:32.749 ************ 

TASK [download : Download items] ***********************************************
Monday 13 May 2019  01:52:18 +0100 (0:00:00.189)       0:14:32.938 ************ 

TASK [download : Sync container] ***********************************************
Monday 13 May 2019  01:52:20 +0100 (0:00:01.669)       0:14:34.608 ************ 

TASK [download : include_tasks] ************************************************
Monday 13 May 2019  01:52:21 +0100 (0:00:01.768)       0:14:36.377 ************ 

TASK [kubespray-defaults : Configure defaults] *********************************
Monday 13 May 2019  01:52:22 +0100 (0:00:00.223)       0:14:36.600 ************ 
ok: [kube1] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
ok: [kube2] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}

TASK [kubernetes-apps/network_plugin/cilium : Cilium | Start Resources] ********
Monday 13 May 2019  01:52:22 +0100 (0:00:00.568)       0:14:37.169 ************ 

TASK [kubernetes-apps/network_plugin/cilium : Cilium | Wait for pods to run] ***
Monday 13 May 2019  01:52:23 +0100 (0:00:00.449)       0:14:37.618 ************ 

TASK [kubernetes-apps/network_plugin/calico : Start Calico resources] **********
Monday 13 May 2019  01:52:23 +0100 (0:00:00.209)       0:14:37.828 ************ 

TASK [kubernetes-apps/network_plugin/calico : calico upgrade complete] *********
Monday 13 May 2019  01:52:23 +0100 (0:00:00.174)       0:14:38.003 ************ 

TASK [kubernetes-apps/network_plugin/canal : Canal | Start Resources] **********
Monday 13 May 2019  01:52:23 +0100 (0:00:00.238)       0:14:38.241 ************ 

TASK [kubernetes-apps/network_plugin/flannel : Flannel | Start Resources] ******
Monday 13 May 2019  01:52:24 +0100 (0:00:00.439)       0:14:38.680 ************ 
ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'973704ff91b4c9341dccaf1da6003177', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 836, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708698.88-87612009870434/source', u'group': u'root', '_ansible_item_label': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, 'item': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', u'selevel': None, u'_original_basename': u'cni-flannel-rbac.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708698.88-87612009870434/source', u'group': None, u'unsafe_writes': None, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None})
ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'51829ca2a2d540389c94291f63118112', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 3198, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708700.48-112660941555190/source', u'group': u'root', '_ansible_item_label': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, 'item': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel.yml', u'selevel': None, u'_original_basename': u'cni-flannel.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708700.48-112660941555190/source', u'group': None, u'unsafe_writes': None, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None})

TASK [kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence] ***
Monday 13 May 2019  01:52:27 +0100 (0:00:03.054)       0:14:41.735 ************ 
ok: [kube1]
fatal: [kube2]: FAILED! => {"changed": false, "elapsed": 600, "msg": "Timeout when waiting for file /run/flannel/subnet.env"}

NO MORE HOSTS LEFT *************************************************************
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry
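
The failure itself is the flannel task above: /run/flannel/subnet.env never
appeared on kube2 within the 600-second wait, i.e. the flannel pod on that
node never wrote its subnet lease. A minimal triage sketch (generic
kubectl/journalctl usage, not commands taken from this job):

    # From the master: is the kube-flannel DaemonSet pod for kube2 running?
    kubectl -n kube-system get pods -o wide | grep -i flannel
    # Logs from one flannel pod (may not be the kube2 instance)
    kubectl -n kube-system logs daemonset/kube-flannel --tail=50

    # On kube2 itself: check the subnet lease and the kubelet journal
    cat /run/flannel/subnet.env
    journalctl -u kubelet --no-pager | tail -n 50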

PLAY RECAP *********************************************************************
kube1                      : ok=364  changed=103  unreachable=0    failed=0   
kube2                      : ok=315  changed=91   unreachable=0    failed=1   
kube3                      : ok=282  changed=78   unreachable=0    failed=0   

Monday 13 May 2019  02:02:29 +0100 (0:10:01.810)       0:24:43.545 ************ 
=============================================================================== 
kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence - 601.81s
kubernetes/master : kubeadm | Initialize first master ------------------ 40.76s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.34s
download : container_download | download images for kubeadm config images -- 33.44s
etcd : Gen_certs | Write etcd master certs ----------------------------- 32.77s
Install packages ------------------------------------------------------- 32.24s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.91s
Wait for host to be available ------------------------------------------ 20.78s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.53s
gather facts from all instances ---------------------------------------- 14.35s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.23s
container-engine/docker : Docker | pause while Docker restarts --------- 10.41s
download : file_download | Download item ------------------------------- 10.05s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.38s
kubernetes/master : slurp kubeadm certs --------------------------------- 8.14s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 7.44s
etcd : Configure | Check if etcd cluster is healthy --------------------- 6.01s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 5.99s
Persist loaded modules -------------------------------------------------- 5.07s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 5.04s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

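# Release each node session listed in the SSID file back to the CentOS CI (Duffy) pool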
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK 	: 0

