[CI-results] Build failed in Jenkins: gluster_anteater_gcs #74

ci at centos.org
Sat Feb 9 01:03:11 UTC 2019


See <https://ci.centos.org/job/gluster_anteater_gcs/74/display/redirect?page=changes>

Changes:

[ndevos] Add GlusterFS release-5 branch to tests

[ndevos] Add GlusterFS release-6 branch to tests

------------------------------------------
[...truncated 398.58 KB...]

TASK [network_plugin/contiv : Contiv | Copy the generated certificate on nodes] ***
Saturday 09 February 2019  00:52:30 +0000 (0:00:00.187)       0:14:29.291 ***** 

TASK [network_plugin/contiv : Contiv | Set cni directory permissions] **********
Saturday 09 February 2019  00:52:31 +0000 (0:00:00.367)       0:14:29.658 ***** 

TASK [network_plugin/contiv : Contiv | Copy cni plugins] ***********************
Saturday 09 February 2019  00:52:31 +0000 (0:00:00.306)       0:14:29.965 ***** 

TASK [network_plugin/contiv : Contiv | Copy netctl binary from docker container] ***
Saturday 09 February 2019  00:52:31 +0000 (0:00:00.333)       0:14:30.299 ***** 

TASK [network_plugin/kube-router : kube-router | Add annotations on kube-master] ***
Saturday 09 February 2019  00:52:32 +0000 (0:00:00.362)       0:14:30.661 ***** 

TASK [network_plugin/kube-router : kube-router | Add annotations on kube-node] ***
Saturday 09 February 2019  00:52:32 +0000 (0:00:00.284)       0:14:30.946 ***** 

TASK [network_plugin/kube-router : kube-router | Add common annotations on all servers] ***
Saturday 09 February 2019  00:52:32 +0000 (0:00:00.339)       0:14:31.285 ***** 

TASK [network_plugin/kube-router : kube-router | Set cni directory permissions] ***
Saturday 09 February 2019  00:52:32 +0000 (0:00:00.297)       0:14:31.583 ***** 

TASK [network_plugin/kube-router : kube-router | Copy cni plugins] *************
Saturday 09 February 2019  00:52:33 +0000 (0:00:00.302)       0:14:31.885 ***** 

TASK [network_plugin/kube-router : kube-router | Create manifest] **************
Saturday 09 February 2019  00:52:33 +0000 (0:00:00.281)       0:14:32.167 ***** 

TASK [network_plugin/cloud : Cloud | Set cni directory permissions] ************
Saturday 09 February 2019  00:52:33 +0000 (0:00:00.274)       0:14:32.441 ***** 

TASK [network_plugin/cloud : Canal | Copy cni plugins] *************************
Saturday 09 February 2019  00:52:34 +0000 (0:00:00.253)       0:14:32.695 ***** 

TASK [network_plugin/multus : Multus | Copy manifest files] ********************
Saturday 09 February 2019  00:52:34 +0000 (0:00:00.254)       0:14:32.950 ***** 

TASK [network_plugin/multus : Multus | Copy manifest templates] ****************
Saturday 09 February 2019  00:52:34 +0000 (0:00:00.408)       0:14:33.359 ***** 

RUNNING HANDLER [kubernetes/kubeadm : restart kubelet] *************************
Saturday 09 February 2019  00:52:34 +0000 (0:00:00.232)       0:14:33.592 ***** 
changed: [kube3]

PLAY [kube-master[0]] **********************************************************

TASK [download : include_tasks] ************************************************
Saturday 09 February 2019  00:52:36 +0000 (0:00:01.411)       0:14:35.003 ***** 

TASK [download : Download items] ***********************************************
Saturday 09 February 2019  00:52:36 +0000 (0:00:00.172)       0:14:35.175 ***** 

TASK [download : Sync container] ***********************************************
Saturday 09 February 2019  00:52:38 +0000 (0:00:01.761)       0:14:36.936 ***** 

TASK [download : include_tasks] ************************************************
Saturday 09 February 2019  00:52:40 +0000 (0:00:01.803)       0:14:38.740 ***** 

TASK [kubespray-defaults : Configure defaults] *********************************
Saturday 09 February 2019  00:52:40 +0000 (0:00:00.164)       0:14:38.905 ***** 
ok: [kube1] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token name] ***
Saturday 09 February 2019  00:52:40 +0000 (0:00:00.489)       0:14:39.394 ***** 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token data] ***
Saturday 09 February 2019  00:52:42 +0000 (0:00:01.382)       0:14:40.776 ***** 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Test if default certificate is expired] ***
Saturday 09 February 2019  00:52:43 +0000 (0:00:01.440)       0:14:42.217 ***** 
ok: [kube1]

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Determine if certificate is expired] ***
Saturday 09 February 2019  00:52:45 +0000 (0:00:01.876)       0:14:44.094 ***** 
ok: [kube1]
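
The rotate_tokens checks above decide whether the default service-account certificate has expired and the tokens need re-issuing. A minimal sketch of such an expiry test, assuming the certificate has already been extracted from the token secret to a local file (the path is a placeholder, not necessarily what the role uses):

    # openssl exits non-zero if the cert expires within the next 0 seconds,
    # i.e. if it is already expired (cert path is illustrative only):
    openssl x509 -checkend 0 -noout -in /tmp/default-token-ca.crt \
        && echo "certificate still valid" \
        || echo "certificate expired, rotating tokens"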

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get all serviceaccount tokens to expire] ***
Saturday 09 February 2019  00:52:45 +0000 (0:00:00.436)       0:14:44.531 ***** 

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete expired tokens] ***
Saturday 09 February 2019  00:52:45 +0000 (0:00:00.120)       0:14:44.652 ***** 

TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete pods in system namespace] ***
Saturday 09 February 2019  00:52:46 +0000 (0:00:00.158)       0:14:44.810 ***** 

TASK [win_nodes/kubernetes_patch : Ensure that user manifests directory exists] ***
Saturday 09 February 2019  00:52:46 +0000 (0:00:00.129)       0:14:44.940 ***** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset hostnameOverride patch] ***
Saturday 09 February 2019  00:52:47 +0000 (0:00:01.178)       0:14:46.119 ***** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Check current command for kube-proxy daemonset] ***
Saturday 09 February 2019  00:52:49 +0000 (0:00:02.277)       0:14:48.396 ***** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Apply hostnameOverride patch for kube-proxy daemonset] ***
Saturday 09 February 2019  00:52:51 +0000 (0:00:01.430)       0:14:49.827 ***** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : debug] **************************************
Saturday 09 February 2019  00:52:52 +0000 (0:00:01.475)       0:14:51.302 ***** 
ok: [kube1] => {
    "msg": [
        "daemonset.extensions/kube-proxy patched"
    ]
}
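
The "daemonset.extensions/kube-proxy patched" message above is the output of kubectl patch. A hedged sketch of the kind of command the win_nodes/kubernetes_patch tasks run, using the patch file copied two tasks earlier (the file path is assumed for illustration):

    # Apply a strategic-merge patch to the kube-proxy daemonset
    # (patch file name is a placeholder):
    kubectl -n kube-system patch daemonset kube-proxy \
        --patch "$(cat /etc/kubernetes/kube-proxy-hostname-override.yml)"

The nodeselector patch a few tasks below reports the identical message because it is applied the same way.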

TASK [win_nodes/kubernetes_patch : debug] **************************************
Saturday 09 February 2019  00:52:53 +0000 (0:00:00.404)       0:14:51.707 ***** 
ok: [kube1] => {
    "msg": []
}

TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset nodeselector patch] ***
Saturday 09 February 2019  00:52:53 +0000 (0:00:00.443)       0:14:52.150 ***** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] ***
Saturday 09 February 2019  00:52:55 +0000 (0:00:02.323)       0:14:54.474 ***** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : Apply nodeselector patch for kube-proxy daemonset] ***
Saturday 09 February 2019  00:52:57 +0000 (0:00:01.404)       0:14:55.878 ***** 
changed: [kube1]

TASK [win_nodes/kubernetes_patch : debug] **************************************
Saturday 09 February 2019  00:52:58 +0000 (0:00:01.452)       0:14:57.331 ***** 
ok: [kube1] => {
    "msg": [
        "daemonset.extensions/kube-proxy patched"
    ]
}

TASK [win_nodes/kubernetes_patch : debug] **************************************
Saturday 09 February 2019  00:52:59 +0000 (0:00:00.445)       0:14:57.776 ***** 
ok: [kube1] => {
    "msg": []
}

PLAY [kube-master] *************************************************************

TASK [download : include_tasks] ************************************************
Saturday 09 February 2019  00:52:59 +0000 (0:00:00.626)       0:14:58.403 ***** 

TASK [download : Download items] ***********************************************
Saturday 09 February 2019  00:52:59 +0000 (0:00:00.205)       0:14:58.608 ***** 

TASK [download : Sync container] ***********************************************
Saturday 09 February 2019  00:53:01 +0000 (0:00:01.681)       0:15:00.290 ***** 

TASK [download : include_tasks] ************************************************
Saturday 09 February 2019  00:53:03 +0000 (0:00:01.824)       0:15:02.115 ***** 

TASK [kubespray-defaults : Configure defaults] *********************************
Saturday 09 February 2019  00:53:03 +0000 (0:00:00.221)       0:15:02.336 ***** 
ok: [kube1] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
ok: [kube2] => {
    "msg": "Check roles/kubespray-defaults/defaults/main.yml"
}

TASK [kubernetes-apps/network_plugin/cilium : Cilium | Start Resources] ********
Saturday 09 February 2019  00:53:04 +0000 (0:00:00.620)       0:15:02.957 ***** 

TASK [kubernetes-apps/network_plugin/cilium : Cilium | Wait for pods to run] ***
Saturday 09 February 2019  00:53:04 +0000 (0:00:00.387)       0:15:03.345 ***** 

TASK [kubernetes-apps/network_plugin/calico : Start Calico resources] **********
Saturday 09 February 2019  00:53:04 +0000 (0:00:00.219)       0:15:03.564 ***** 

TASK [kubernetes-apps/network_plugin/calico : calico upgrade complete] *********
Saturday 09 February 2019  00:53:05 +0000 (0:00:00.241)       0:15:03.806 ***** 

TASK [kubernetes-apps/network_plugin/canal : Canal | Start Resources] **********
Saturday 09 February 2019  00:53:05 +0000 (0:00:00.232)       0:15:04.038 ***** 

TASK [kubernetes-apps/network_plugin/flannel : Flannel | Start Resources] ******
Saturday 09 February 2019  00:53:05 +0000 (0:00:00.364)       0:15:04.403 ***** 
ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'973704ff91b4c9341dccaf1da6003177', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 836, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673538.54-185605657055457/source', u'group': u'root', '_ansible_item_label': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, 'item': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', u'selevel': None, u'_original_basename': u'cni-flannel-rbac.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673538.54-185605657055457/source', u'group': None, u'unsafe_writes': None, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None})
ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'51829ca2a2d540389c94291f63118112', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 3198, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673540.09-200080475438809/source', u'group': u'root', '_ansible_item_label': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, 'item': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel.yml', u'selevel': None, u'_original_basename': u'cni-flannel.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1549673540.09-200080475438809/source', u'group': None, u'unsafe_writes': None, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None})
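
The two "ok:" items above show the rendered flannel manifests (cni-flannel-rbac.yml and cni-flannel.yml) landing in /etc/kubernetes on kube1. Creating those resources is roughly equivalent to the following (Kubespray drives this through its own modules; kubectl is shown here as an assumed equivalent):

    # Create the flannel RBAC objects and the kube-flannel daemonset:
    kubectl apply -f /etc/kubernetes/cni-flannel-rbac.yml
    kubectl apply -f /etc/kubernetes/cni-flannel.yml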

TASK [kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence] ***
Saturday 09 February 2019  00:53:08 +0000 (0:00:03.135)       0:15:07.539 ***** 
ok: [kube1]
fatal: [kube2]: FAILED! => {"changed": false, "elapsed": 600, "msg": "Timeout when waiting for file /run/flannel/subnet.env"}

NO MORE HOSTS LEFT *************************************************************
	to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry
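
The actual failure is kube2 timing out after 600 seconds waiting for /run/flannel/subnet.env, the file flanneld writes once it has obtained a subnet lease; its absence usually means the kube-flannel pod on that node never became ready. A few hedged first-pass checks (node and namespace names come from this log; the pod name is a placeholder to copy from the first command's output, and the inventory path is assumed):

    # Is the flannel pod scheduled on kube2 actually running?
    kubectl -n kube-system get pods -o wide | grep -i flannel

    # If not, inspect it (replace the placeholder pod name):
    kubectl -n kube-system logs kube-flannel-xxxxx
    kubectl -n kube-system describe pod kube-flannel-xxxxx

    # On kube2 itself: did flanneld ever write its lease file?
    ls -l /run/flannel/subnet.env

    # After fixing the cause, re-run only the failed hosts with the
    # retry file named above:
    ansible-playbook -i inventory vagrant-playbook.yml \
        --limit @/root/gcs/deploy/vagrant-playbook.retry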

PLAY RECAP *********************************************************************
kube1                      : ok=364  changed=103  unreachable=0    failed=0   
kube2                      : ok=315  changed=91   unreachable=0    failed=1   
kube3                      : ok=282  changed=78   unreachable=0    failed=0   

Saturday 09 February 2019  01:03:10 +0000 (0:10:01.972)       0:25:09.511 ***** 
=============================================================================== 
kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence - 601.97s
kubernetes/master : kubeadm | Initialize first master ------------------ 39.72s
download : container_download | download images for kubeadm config images -- 39.37s
kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.84s
etcd : Gen_certs | Write etcd master certs ----------------------------- 33.98s
Wait for host to be available ------------------------------------------ 32.06s
Install packages ------------------------------------------------------- 30.08s
kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.47s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.57s
etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.95s
gather facts from all instances ---------------------------------------- 12.18s
download : file_download | Download item ------------------------------- 10.50s
container-engine/docker : Docker | pause while Docker restarts --------- 10.39s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.60s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.25s
kubernetes/master : slurp kubeadm certs --------------------------------- 8.42s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 7.74s
Persist loaded modules -------------------------------------------------- 5.59s
etcd : Configure | Check if etcd cluster is healthy --------------------- 5.33s
download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 4.89s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from an SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
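
Each ssid in that file identifies a Duffy session leased earlier in the job with cico node get; "cico -q node done" returns the machines to the pool. Because the post-build matcher did not fire, the script was skipped here, so a manual cleanup would reuse the same loop (paths as in the script above):

    # Release any nodes still held by this job:
    for ssid in $(cat "$WORKSPACE/cico-ssid")
    do
        cico -q node done "$ssid"
    done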

END OF POST BUILD TASK 	: 0

