taints & tolerations with daemonsets #29738
Comments
I thought this was the desired functionality, i.e. you specify tolerations on the daemonsets that you want to run on masters. You probably also want to add a node label (or node affinity) to express that this daemonset is supposed to run exclusively on masters.
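Concretely, that means combining a toleration with a nodeSelector (or node affinity) in the DaemonSet's pod template; in the sketch below the `node-role.kubernetes.io/master` label and `dedicated=master` taint key are illustrative, not taken from this issue:

```yaml
# Sketch of a pod template fragment that both tolerates the master taint
# and restricts scheduling to labeled masters. Label and taint names are assumptions.
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""   # only schedule onto nodes carrying this label
      tolerations:
      - key: dedicated                       # tolerate the taint placed on master nodes
        operator: Equal
        value: master
        effect: NoSchedule
```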
@kubernetes/sig-apps-bugs @kubernetes/sig-scheduling-bugs did we fix DSs to respect taints and tolerations, or is this still an issue?
We can close this; this was implemented in 1.6.
It was implemented in 1.5, then reverted and implemented again in 1.6 :)
This bit our 1.6 cluster hard. We run a self-hosted cluster, and things like kube-proxy and flannel are distributed via daemonsets. Suddenly every tainted node failed to come up properly because it had no overlay network or kube-proxy. So instead of being able to add a new pool of tainted nodes (we do this not infrequently) via a simple terraform apply to spin up a new ASG of nodes, I now also have to patch every daemonset on my cluster. I'd like to suggest some kind of wildcard toleration for this scenario, along the lines of the sketch below.
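A toleration with an empty key and `operator: Exists` matches every taint, which is the closest thing to a wildcard; adding it to a daemonset's pod template (sketch below, using the 1.6+ field syntax) lets its pods schedule onto any tainted node:

```yaml
# A "tolerate everything" toleration: an empty key with operator Exists
# matches all taints, so the DaemonSet also runs on tainted nodes.
spec:
  template:
    spec:
      tolerations:
      - operator: Exists
```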
What do you mean by wildcard toleration?
Related to #45367
I'm trying to use taints and tolerations to run a daemonset only on my master nodes. The daemonset is scheduling pods on all nodes though. I don't know whether daemonsets bypass taints & tolerations, or whether I am doing something wrong.
I taint my master node:
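A representative command (the `dedicated=master` key/value and the node name are assumptions, not the reporter's exact values):

```sh
# Illustrative taint on the master; key/value and node name are assumptions.
kubectl taint nodes my-master-node dedicated=master:NoSchedule
```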
My daemonset:
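A minimal daemonset of the shape described, written here with the post-1.6 `tolerations` field and a current API group (the name, image, and `dedicated=master` key are illustrative; at the time of this report tolerations were still expressed through an alpha annotation):

```yaml
# Illustrative DaemonSet that tolerates the master taint.
# With only a toleration and no nodeSelector or affinity,
# nothing restricts these pods to master nodes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: master-agent
spec:
  selector:
    matchLabels:
      app: master-agent
  template:
    metadata:
      labels:
        app: master-agent
    spec:
      tolerations:
      - key: dedicated
        operator: Equal
        value: master
        effect: NoSchedule
      containers:
      - name: agent
        image: busybox
        command: ["sleep", "infinity"]
```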
The daemonset's pods are scheduled on all my nodes though. The annotations from one of them (so we can see the final result):
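For context, in the alpha API of that era pod tolerations surfaced as an annotation rather than a first-class field; an illustrative reconstruction (not the reporter's actual output) looks like:

```yaml
# Illustrative reconstruction, not actual output from this issue:
# pre-1.6, tolerations were carried in an alpha annotation on the pod.
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated","operator":"Equal","value":"master","effect":"NoSchedule"}]'
```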
And the annotations on the node:
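Similarly, node taints in the alpha API were stored as an annotation on the Node object; an illustrative form, assuming the `dedicated=master` taint from earlier:

```yaml
# Illustrative reconstruction: in the alpha API, node taints lived in an
# annotation on the Node object rather than in spec.taints.
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/taints: '[{"key":"dedicated","value":"master","effect":"NoSchedule"}]'
```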