Add CoreDNS as feature in kubeadm #52501
Conversation
@rajansandeep: GitHub didn't allow me to request PR reviews from the following users: johnbelamaric. Note that only kubernetes members can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @rajansandeep. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
cc @kubernetes/sig-cluster-lifecycle-pr-reviews
(I'll take a look at this later when I get to it) Thanks for this PR!!
This is really great to see, thanks! I left a few comments.
I have one question for @luxas or someone else in @kubernetes/sig-cluster-lifecycle-feature-requests: do we want to create a way to switch from kube-dns to CoreDNS in a running cluster, or is that out of scope for kubeadm? It feels similar to upgrade but not quite the same.
  effect: NoSchedule
containers:
- name: coredns
  image: coredns/coredns:{{ .Version }}
We probably need to add {{ .ImageRepository }} and {{ .Arch }} here, similar to the existing kube-dns manifests.
I think this probably also means mirroring CoreDNS into the gcr.io/google_containers registry. Is this an option? I'm not sure who owns that decision.
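For illustration, the templated image line might end up looking something like this, mirroring how the kube-dns manifests publish per-arch images (the exact image naming here is an assumption, not the PR's code):

    image: {{ .ImageRepository }}/coredns-{{ .Arch }}:{{ .Version }}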
coredns/coredns should be built as a manifest list (ref: the k8s multi-arch proposal). Then we don't need to pass {{ .Arch }} at all; Docker will just pull the right variant of the image.
Reach out to me if you want to know how to build a manifest list: https://docs.docker.com/registry/spec/manifest-v2-2/
ping @rajansandeep @johnbelamaric on the manifest list building
  name: coredns
items:
- key: Corefile
  path: Corefile
If we add multi-architecture support above, we need the beta.kubernetes.io/arch nodeAffinity selector here to make sure it schedules to an appropriate node.
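For reference, kubeadm's kube-dns manifest pins pods to a matching architecture with a selector along these lines (a sketch; the template variable is an assumption):

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
              - {{ .Arch }}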
Not needed anymore, as the latest image is a manifest list.
errors
log stdout
health
kubernetes {{ .DNSDomain }} {{ .Servicecidr }}
nit: maybe s/Servicecidr/ServiceCIDR/ for consistency?
	return nil
}

//convSubnet fetches the servicecidr and modifies the mask to the nearest class
I wonder if this should error out when the configured CIDR doesn't match a full class, instead of trying to fix it. Any idea how kube-dns handles this case?
Could this lead to answering reverse DNS queries with the wrong response if they were for IPs that fall within the nearest class but not within the service subnet?
Default is a /12 so it would error out by default 😉
This is a general DNS problem. We're looking at some solutions in coredns/coredns#1074.
kube-dns today captures ALL PTRs, leading to things like kubernetes/dns#124 and (I think) the ability to hijack the PTR of any IP in the world.
Even if we nail it down to just the service CIDR, we still have problems with PTR and manually added endpoints in K8s, especially in a multi-tenant deployment.
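To make the trade-off concrete, here is a minimal sketch (not the PR's exact code, though it mirrors the log.Fatal-on-parse-error style shown below) of widening a CIDR to the nearest classful boundary; note how the default /12 balloons to a /8, which is exactly the over-capture being discussed:

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    // convSubnet rounds cidr's mask down to the nearest classful boundary
    // (/8, /16, /24), since CoreDNS reverse zones must be classful.
    func convSubnet(cidr string) (serviceCIDR string) {
        _, ipv4Net, err := net.ParseCIDR(cidr)
        if err != nil {
            log.Fatal(err)
        }
        ones, bits := ipv4Net.Mask.Size()
        ipv4Net.Mask = net.CIDRMask((ones/8)*8, bits) // round down to a class boundary
        ipv4Net.IP = ipv4Net.IP.Mask(ipv4Net.Mask)
        return ipv4Net.String()
    }

    func main() {
        fmt.Println(convSubnet("10.96.0.0/12")) // prints "10.0.0.0/8"
    }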
// GetCoreDNSVersion returns the right CoreDNS version for a specific k8s version
func GetCoreDNSVersion(kubeVersion *version.Version) string {
	// v1.7.0+ uses CoreDNS-011, just return that here
Should be v1.8.0+ (here and again below).
Thanks for the review @mattmoyer!
Please squash the commits. Given that @luxas and @mattmoyer are on it, I'll defer to them.
errors
log stdout
health
kubernetes {{ .DNSDomain }} {{ .Servicecidr }}
Should prometheus be added as part of the default, to align with the rest of the Kubernetes components?
Yes, that makes sense.
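A sketch of what the Corefile template could then look like, using the ServiceCIDR spelling suggested above (the prometheus plugin's :9153 is CoreDNS's default metrics port; the proxy and cache lines are assumptions about the rest of the template):

    .:53 {
        errors
        log stdout
        health
        kubernetes {{ .DNSDomain }} {{ .ServiceCIDR }}
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }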
if err != nil {
	log.Fatal(err)
}
servicecidr = ipv4Net.String()
Maybe it would be good to test that functionality with IPv6 as well? IPv6 is coming soon anyway; it doesn't make sense to write IPv4-only code.
Some quick feedback. We have a couple of open questions still:
- Should this be seen as a "drop-in" replacement still called kube-dns, or is it a "net-new" replacement for kube-dns?
- Should the RBAC ClusterRole & Binding be auto-bootstrapped?
- Should the service still be called kube-dns?

Action items:
- Make sure it's possible to "upgrade" any v1.8 cluster to using CoreDNS. Make sure that kubeadm upgrade does the right things, shows the right text, removes the old kube-dns if not needed, etc.
- The coredns image must be a manifest list. Please make the next release a manifest list so we can use it. Reach out to me if you need help with doing that.
cmd/kubeadm/app/cmd/init.go (outdated)
@@ -382,11 +382,6 @@ func (i *Init) Run(out io.Writer) error {
		return err
	}

	// Create/update RBAC rules that makes the nodes to rotate certificates and get their CSRs approved automatically
Why remove this?
cmd/kubeadm/app/cmd/init.go (outdated)
			return err
		}
	} else {
		if err := dnsaddonphase.EnsureDNSAddon(i.cfg, client); err != nil {
Can you move this logic inside of EnsureDNSAddon? (i.e. the logic for whether to use kube-dns or CoreDNS)
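Something along these lines would keep the call site unchanged (a sketch; the feature-gate helper and the CoreDNS gate name are assumptions, not the PR's actual identifiers):

    // EnsureDNSAddon picks and installs the configured DNS add-on, so that
    // init.go no longer needs to know which implementation is in use.
    func EnsureDNSAddon(cfg *kubeadmapi.MasterConfiguration, client clientset.Interface) error {
        if features.Enabled(cfg.FeatureGates, features.CoreDNS) { // hypothetical gate check
            return ensureCoreDNSAddon(cfg, client)
        }
        return ensureKubeDNSAddon(cfg, client)
    }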
    protocol: TCP
`

//ConfigMap is the CoreDNS ConfigMap manifest
needs to be a godoc comment
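That is, the comment should start with the name of the identifier it documents, e.g. (assuming the constant is named CoreDNSConfigMap):

    // CoreDNSConfigMap is the CoreDNS ConfigMap manifest.
    CoreDNSConfigMap = `...`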
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
This label should not be here.
@@ -38,3 +39,18 @@ func GetKubeDNSManifest(kubeVersion *version.Version) string {
	// In the future when the kube-dns version is bumped at HEAD; add conditional logic to return the right manifest
	return v170AndAboveKubeDNSDeployment
}

// GetCoreDNSVersion returns the right CoreDNS version for a specific k8s version
func GetCoreDNSVersion(kubeVersion *version.Version) string {
Let's use the same function -- don't create a new one. Instead, choose what version to return based on the DNS provider.
This way the right values will be shown for upgrades as well.
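For example, the existing lookup could take the provider into account instead of growing a parallel function (a sketch; the signature and version strings are illustrative, not the actual constants):

    // GetDNSVersion returns the DNS add-on version to deploy for a given
    // Kubernetes version, keyed on the chosen DNS provider.
    func GetDNSVersion(kubeVersion *version.Version, dnsProvider string) string {
        // Real logic would also branch on kubeVersion, as GetKubeDNSManifest does.
        switch dnsProvider {
        case "coredns":
            return "011" // illustrative CoreDNS release
        default:
            return "1.14.5" // illustrative kube-dns version
        }
    }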
    k8s-app: coredns
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
This annotation isn't respected anymore; tolerations are now set on the PodSpec. See the other manifests in cmd/kubeadm.
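That is, the pod template should carry the toleration directly, roughly:

    spec:
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists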
@@ -127,6 +131,161 @@ func createKubeDNSAddon(deploymentBytes, serviceBytes []byte, client clientset.Interface) error {
	return nil
}

func EnsureCoreDNSAddon(cfg *kubeadmapi.MasterConfiguration, client clientset.Interface) error {
Make this private and call it from EnsureDNSAddon.
// CoreDNSService is the CoreDNS Service manifest
CoreDNSService = `
apiVersion: v1
kind: Service
What did we think here? Would it be possible to keep the same kube-dns service?
ping @kubernetes/sig-network-pr-reviews: should we keep the kube-dns name or not?
In some way, it would be cool to keep the name to signal "this is the DNS service for Kubernetes, regardless of implementation".
That sounds reasonable to me
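A sketch of what keeping the legacy name could look like (the selector label and the clusterIP template variable are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns   # legacy name kept so existing clients keep resolving
      namespace: kube-system
      labels:
        k8s-app: kube-dns
    spec:
      selector:
        k8s-app: coredns
      clusterIP: {{ .DNSIP }}
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP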
//convSubnet fetches the serviceCIDR and modifies the mask to the nearest class
//CoreDNS requires CIDR notations for reverse zones as classful.
func convSubnet(cidr string) (serviceCIDR string) {
Unit tests, please.
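Something table-driven would do; here is a sketch matching the convSubnet signature above (expected values assume the mask is rounded down to the nearest classful boundary, and an IPv6 case could be added per the earlier review comment):

    func TestConvSubnet(t *testing.T) {
        tests := []struct {
            cidr string
            want string
        }{
            {"10.96.0.0/12", "10.0.0.0/8"},       // kubeadm's default widens to a /8
            {"10.10.0.0/16", "10.10.0.0/16"},     // already classful
            {"192.168.4.0/24", "192.168.4.0/24"}, // already classful
        }
        for _, tt := range tests {
            if got := convSubnet(tt.cidr); got != tt.want {
                t.Errorf("convSubnet(%q) = %q, want %q", tt.cidr, got, tt.want)
            }
        }
    }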
@luxas I fixed the golint errors. I am not able to verify the other error which is failing the test.
@rajansandeep you need to run
/test pull-kubernetes-unit
/retest
Looks like the very same tests were OK on the previous run (which was ignored because I did this /retest before it finished). So it seems to be a flaky test... let's retry once more.
/retest
/lgtm Yes, flaked.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: luxas, rajansandeep. Associated issue: 427. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files.
You can indicate your approval by writing /approve in a comment.
/retest
/retest
Review the full test history for this PR.
The bazel builds seem to be failing with legitimate errors:
The BUILD file seems to already include the missing dependencies. Am I missing something? Also, the dates in the error logs seem old.
I think these are just flakes. Bazel seems fine. The TestCRD unit test fails right now. IIRC someone is working on fixing the flakiness there already.
/retest
/retest
The command for retest isn't restarting the test.
/test all
/retest
@rajansandeep: The following tests failed, say /retest to rerun them all:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue (batch tested with PRs 54493, 52501, 55172, 54780, 54819). If you want to cherry-pick this change to another branch, please follow the instructions here.
What this PR does / why we need it:
This PR adds CoreDNS as a DNS plugin via the feature-gate option in kubeadm init.
Which issue this PR fixes: Fixes kubernetes/kubeadm#446
Special notes for your reviewer:
Release note:
/cc @johnbelamaric