kubeadm master e2e job failing (BeforeSuite, KubeletNotReady) #412

Closed
@pipejakob

Description

The kubeadm master branch CI job has a new failure during BeforeSuite.

https://k8s-testgrid.appspot.com/sig-cluster-lifecycle#kubeadm-gce

Here is an example of a failed job:

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-gce/5219

The cluster comes up and has weave-net deployed, but this message is spamming the client-side logs:

I0824 14:32:32.721] Aug 24 14:32:32.721: INFO: Condition Ready of node e2e-5219-node-3k71 is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
I0824 14:32:32.721] Aug 24 14:32:32.721: INFO: Condition Ready of node e2e-5219-node-zf8h is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
I0824 14:32:32.721] Aug 24 14:32:32.721: INFO: Condition Ready of node e2e-5219-node-8mfj is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
I0824 14:32:32.722] Aug 24 14:32:32.721: INFO: Condition Ready of node e2e-5219-master is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
I0824 14:32:32.722] Aug 24 14:32:32.721: INFO: Condition Ready of node e2e-5219-node-ds17 is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

We'll probably need to dig into the server-side logs to figure out what's going wrong. Since kubernetes-anywhere took a snapshot of the weave-net manifest, it's possible that enough skew has occurred that we need to update our manifests.
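As a starting point for triage, here is a minimal sketch (not part of the job itself; names and sample data are hypothetical) that takes the JSON from `kubectl get nodes -o json` and lists every node whose Ready condition is not True, along with the reason and message, which should match the KubeletNotReady spam above:

```python
import json

def not_ready_nodes(nodes):
    """Return (name, reason, message) for each node whose Ready condition != True."""
    out = []
    for node in nodes["items"]:
        for cond in node["status"]["conditions"]:
            if cond["type"] == "Ready" and cond["status"] != "True":
                out.append((node["metadata"]["name"], cond["reason"], cond["message"]))
    return out

# Hypothetical sample mirroring one failing node's condition from the logs above.
sample = {
    "items": [
        {
            "metadata": {"name": "e2e-5219-node-3k71"},
            "status": {"conditions": [{
                "type": "Ready",
                "status": "False",
                "reason": "KubeletNotReady",
                "message": "runtime network not ready: NetworkReady=false "
                           "reason:NetworkPluginNotReady message:docker: "
                           "network plugin is not ready: cni config uninitialized",
            }]},
        }
    ]
}

for name, reason, message in not_ready_nodes(sample):
    print(f"{name}: {reason} - {message}")
```

In practice you would feed this the output of `kubectl get nodes -o json` (via `json.load`) instead of the inline sample; it just makes the per-node failure reason easy to scan when every node is flapping NotReady.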

@kubernetes/sig-cluster-lifecycle-bugs


Labels: area/test, kind/bug
