Kubeadm Networking Configuration E2E Tests #80259
Conversation
@Arvinderpal: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @Arvinderpal. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@fabriziopandini @neolit123 @yastij @aojea
/ok-to-test
Force-pushed from 6caaef8 to 1ec0290
/assign @neolit123 @yastij
Thanks @Arvinderpal
/assign @fabriziopandini @neolit123
@Arvinderpal thanks for taking care of improving kubeadm e2e test coverage!
The PR definitely goes in the right direction, but I would like to check with you/other reviewers whether:
- we can make detection of cluster settings more robust (or set clear expectations)
- we can make this test verify kubeadm behavior directly, without going through the indirection of the other components responsible for applying network settings to nodes/services
d := cc["networking"].(map[interface{}]interface{})
if ps, ok := d["podSubnet"]; ok {
	// Check that the pod CIDR allocated to the node(s) is within the kubeadm-config podCIDR.
	nodes, err := f.ClientSet.CoreV1().Nodes().List(metav1.ListOptions{})
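For reference, a minimal sketch of how the containment check quoted above could be carried through, assuming the podSubnet string has already been read from kubeadm-config and that a client-go clientset is available (using the context-less client-go signatures of this era, matching the snippet above); the helper name verifyNodePodCIDRs and the package name are illustrative, not the exact code in this PR:

package kubeadmtests // hypothetical package for this sketch

import (
	"fmt"
	"net"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// verifyNodePodCIDRs checks that the pod CIDR allocated to every node falls
// within the podSubnet configured in kubeadm-config.
func verifyNodePodCIDRs(c clientset.Interface, podSubnet string) error {
	_, configured, err := net.ParseCIDR(podSubnet)
	if err != nil {
		return fmt.Errorf("invalid podSubnet %q: %v", podSubnet, err)
	}
	nodes, err := c.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, node := range nodes.Items {
		if node.Spec.PodCIDR == "" {
			continue // no CIDR allocated to this node yet
		}
		nodeIP, nodeNet, err := net.ParseCIDR(node.Spec.PodCIDR)
		if err != nil {
			return fmt.Errorf("node %s has invalid podCIDR %q: %v", node.Name, node.Spec.PodCIDR, err)
		}
		cfgOnes, _ := configured.Mask.Size()
		nodeOnes, _ := nodeNet.Mask.Size()
		// The node range is contained if its network address is inside the
		// configured subnet and its mask is at least as long.
		if !configured.Contains(nodeIP) || nodeOnes < cfgOnes {
			return fmt.Errorf("node %s podCIDR %s is outside the configured podSubnet %s", node.Name, node.Spec.PodCIDR, podSubnet)
		}
	}
	return nil
}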
This test is checking kubeadm behaviour indirectly, and I'm wondering if instead we should test the kubeadm bit more specifically.
Kubeadm's only responsibility for networking is to pass the settings to the control plane components (in this case, to translate podSubnet into the cluster-cidr, allocate-node-cidrs and node-cidr-mask-size flags for the controller manager).
What do you have in mind?
I followed this approach based on what I saw in other tests.
Sorry for not being clear.
I think we should have a test that checks kubeadm assigns the expected values to cluster-cidr, allocate-node-cidrs and node-cidr-mask-size (in addition to, or in replacement of, the current test).
@fabriziopandini
allocate-node-cidrs is internal to the controller-manager and is not exposed, so I don't see any way to verify that.
node-cidr-mask-size is calculated by kubeadm (see func calcNodeCidrSize). There are some issues with verifying it:
- This func currently assumes IPv4 (it assigns /24 to IPv6 -- a known problem).
- We'll need to change this func to support dual-stack, so node-cidr-mask-size will actually become a slice.
- node-cidr-mask-size is not written to the kubeadm config, so we'd have to duplicate this func in the e2e tests in order to verify it.
- calcNodeCidrSize is already unit tested.
Because of these reasons, I would prefer not to add e2e tests for this. Perhaps we can revisit this when the dual-stack changes land.
Ok, I got your point and I don't want to block on this.
However, I think this test is useful and doable by reading the args in the controller-manager pod, so please add an item to the umbrella issue: "implement a test that checks kubeadm assigns the expected values to cluster-cidr, allocate-node-cidrs and node-cidr-mask-size (in addition to, or in replacement of, the current test)".
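As a rough illustration of that follow-up (not part of this PR), such a test could read the flags off the kube-controller-manager static pod in kube-system; the helper name controllerManagerFlag and the package name are hypothetical, and the label selector assumes the component=kube-controller-manager label that kubeadm sets on its static pods:

package kubeadmtests // hypothetical package for this sketch

import (
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// controllerManagerFlag returns the value of --<name> from the command line of
// the first kube-controller-manager pod found in kube-system.
func controllerManagerFlag(c clientset.Interface, name string) (string, error) {
	pods, err := c.CoreV1().Pods("kube-system").List(metav1.ListOptions{
		LabelSelector: "component=kube-controller-manager",
	})
	if err != nil {
		return "", err
	}
	if len(pods.Items) == 0 {
		return "", fmt.Errorf("no kube-controller-manager pod found")
	}
	prefix := "--" + name + "="
	for _, arg := range pods.Items[0].Spec.Containers[0].Command {
		if strings.HasPrefix(arg, prefix) {
			return strings.TrimPrefix(arg, prefix), nil
		}
	}
	return "", fmt.Errorf("flag --%s not set", name)
}

The test could then assert, for example, that cluster-cidr matches the podSubnet from kubeadm-config and that allocate-node-cidrs is "true".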
@fabriziopandini @yastij Thank you for your feedback. Please see my comments.
Force-pushed from 1ec0290 to bbe1866
xref #79033
Force-pushed from bbe1866 to b81b246
Force-pushed from b81b246 to a500078
@Arvinderpal thanks for the update; please update the umbrella issue as suggested
@fabriziopandini I have created a new umbrella issue for these e2e tests since they are not directly related to the dual-stack work. kubernetes/kubeadm#1686
Thanks @Arvinderpal!
Apart from the incomplete dual-stack test, this looks good.
test/e2e_kubeadm/networking_test.go (Outdated)
		})
	})
})
Context("dual-stack [Feature:IPv6DualStack]", func() {
Let's omit this test case from this PR and add it instead in its entirety in #79033.
Done
* podSubnet check: if a podSubnet is specified in kubeadm-config, then the e2e test will check that the pod-cidrs of individual nodes fall within this range.
* serviceSubnet check: if a serviceSubnet is specified in kubeadm-config, then the e2e test will check that the kubernetes service created in the default namespace got a service IP from the configured range.
Force-pushed from a500078 to ca3bdfd
@rosti / @neolit123 / @yastij I think this PR can be merged. I have addressed all the comments.
/release-note-none
/priority backlog
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Arvinderpal, fabriziopandini, neolit123. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
/retest
Review the full test history for this PR. Silence the bot with an /lgtm cancel or /hold comment for consistent failures.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Currently, kubeadm e2e tests don't check any of the networking configuration values. This PR adds 2 new e2e_kubeadm tests for network configurations:
- podSubnet check: if a podSubnet is specified in kubeadm-config, then the e2e test will check that the pod-cidrs of individual nodes fall within this range.
- serviceSubnet check: if a serviceSubnet is specified in kubeadm-config, then the e2e test will check that the kubernetes service created in the default namespace got a service IP from the configured range (a rough sketch of this check follows the list).
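As an illustration of the serviceSubnet check, a minimal sketch under the same assumptions as above (a serviceSubnet string read from kubeadm-config and a client-go clientset); verifyKubernetesServiceIP and the package name are illustrative, not the exact helpers used in this PR:

package kubeadmtests // hypothetical package for this sketch

import (
	"fmt"
	"net"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// verifyKubernetesServiceIP checks that the ClusterIP assigned to the
// "kubernetes" service in the default namespace falls inside the configured
// serviceSubnet.
func verifyKubernetesServiceIP(c clientset.Interface, serviceSubnet string) error {
	_, subnet, err := net.ParseCIDR(serviceSubnet)
	if err != nil {
		return fmt.Errorf("invalid serviceSubnet %q: %v", serviceSubnet, err)
	}
	svc, err := c.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
	if err != nil {
		return err
	}
	ip := net.ParseIP(svc.Spec.ClusterIP)
	if ip == nil || !subnet.Contains(ip) {
		return fmt.Errorf("service IP %q is not within the configured serviceSubnet %s", svc.Spec.ClusterIP, serviceSubnet)
	}
	return nil
}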
Special notes for your reviewer:
There will be follow-up dual-stack tests as that feature lands in master. These tests, however, are single-stack (either IPv4 or IPv6) only.
To run just these tests:
make test-e2e-kubeadm FOCUS=setup-networking
Does this PR introduce a user-facing change?:
NONE