kube-controller-manager ties node CIDR allocation with cloud route provisioning #25602
Comments
Proposed #25614. I used "configure-cloud-routes" there as the option name, but happy to change.
Out of curiosity, what is the use-case resolved by this?
This allows kube-controller-manager to allocate CIDRs to nodes (with allocate-node-cidrs=true) without trying to configure them on the cloud provider, even if the cloud provider supports Routes. The default is configure-cloud-routes=true, and routes are only configured when allocate-node-cidrs is also set, so the default behaviour is unchanged.

This is useful because on AWS the cloud provider configures routes by creating VPC routing table entries, but there is a limit of 50 entries. Setting configure-cloud-routes=false on AWS would allow us to continue allocating node CIDRs as today, while replacing the VPC route-table mechanism with something not limited to 50 nodes. We can't just turn off the cloud provider entirely because it also controls other things: node discovery, load balancer creation, etc.

Fix kubernetes#25602
Automatic merge from submit-queue: kube-controller-manager: Add configure-cloud-routes option. Fix #25602
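Concretely, the flag combination described above would look something like this (flag names as used in the merged PR; the cluster-cidr value is purely illustrative):

```
# Allocate pod CIDRs to nodes, but skip programming them as cloud routes,
# e.g. to stay clear of the 50-entry AWS VPC route table limit.
kube-controller-manager \
  --cloud-provider=aws \
  --cluster-cidr=10.244.0.0/16 \
  --allocate-node-cidrs=true \
  --configure-cloud-routes=false
```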
@aarondav this is so that a network overlay (like flannel) can use the k8s API as its source of truth. And then, yes, I did build a simple network overlay using GRE tunnels: https://github.com/kopeio/kope-routing/tree/master/pkg/routecontroller/routingproviders Source of truth here means: we already have a replicated HA state store in the form of etcd, and k8s can automatically allocate pod CIDRs, so network overlays shouldn't be forced to recreate this.
Got it, thanks, that's very interesting. It implies that flannel could operate in a read-only mode from the API server, without direct access to etcd.
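For illustration, the reconcile step such an overlay controller performs can be sketched as follows. This is a hypothetical sketch, not the actual kope-routing code: node data is faked as plain dicts here, whereas a real controller would watch Node objects (name, address, spec.podCIDR) through the API server.

```python
def desired_routes(nodes, self_name):
    """Map each peer's podCIDR to the peer IP a tunnel should target,
    giving the full mesh: every node holds one route per other node."""
    return {
        n["podCIDR"]: n["ip"]
        for n in nodes
        if n["name"] != self_name
    }

# Faked node list standing in for data read from the API server.
nodes = [
    {"name": "node-a", "ip": "10.0.0.1", "podCIDR": "10.244.0.0/24"},
    {"name": "node-b", "ip": "10.0.0.2", "podCIDR": "10.244.1.0/24"},
    {"name": "node-c", "ip": "10.0.0.3", "podCIDR": "10.244.2.0/24"},
]

# On node-a: route each peer's CIDR via a GRE tunnel to that peer.
print(desired_routes(nodes, "node-a"))
# → {'10.244.1.0/24': '10.0.0.2', '10.244.2.0/24': '10.0.0.3'}
```

The key point is that the only state the controller needs is the node list itself, which is exactly what the API server already replicates.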
kube-controller-manager currently has one flag, allocate-node-cidrs, which turns on two controllers: one that allocates a podCIDR to each node, and one that configures those CIDRs as routes on the cloud provider (when the cloud provider supports Routes).
However I would like to do the CIDR allocation without setting up the cloud routes. In particular, AWS has a 50-node limit for cloud routes, so in order to support bigger clusters, I'm pondering setting up a complete graph of GRE tunnels between nodes, using a simple controller on each node that watches the nodes and sets up the tunnels.
One way to separate the two controllers would be to add a flag which turns off cloud route management but defaults to true. It would only come into play when allocate-node-cidrs=true; then setting manage-cloud-routes=false would allow kube-controller-manager to set podCIDR on nodes, but would not ask the cloud provider to configure the routes.
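The proposed interaction between the two flags can be sketched as a small decision function. This is an illustration of the gating described above, not the actual controller-manager code; the parameter names mirror the flags under discussion.

```python
def controllers_to_run(allocate_node_cidrs, configure_cloud_routes,
                       cloud_supports_routes):
    """Decide which of the two controllers to start: the route controller
    only runs when CIDR allocation is on, route management has not been
    disabled, and the cloud provider supports Routes."""
    run_cidr_allocator = allocate_node_cidrs
    run_route_controller = (allocate_node_cidrs
                            and configure_cloud_routes
                            and cloud_supports_routes)
    return run_cidr_allocator, run_route_controller

# Default today: both controllers on.
print(controllers_to_run(True, True, True))   # → (True, True)
# Proposed large-cluster mode on AWS: allocate CIDRs, skip cloud routes.
print(controllers_to_run(True, False, True))  # → (True, False)
```

Because the new flag defaults to true and is only consulted when allocate-node-cidrs is set, existing deployments see no behaviour change.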
cc @thockin