Kubeadm Networking Configuration E2E Tests #80259

Merged
1 change: 1 addition & 0 deletions test/e2e_kubeadm/BUILD
@@ -17,6 +17,7 @@ go_test(
"kubeadm_certs_test.go",
"kubeadm_config_test.go",
"kubelet_config_test.go",
"networking_test.go",
"nodes_test.go",
"proxy_addon_test.go",
],
139 changes: 139 additions & 0 deletions test/e2e_kubeadm/networking_test.go
@@ -0,0 +1,139 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package e2e_kubeadm

import (
"net"
"strings"

metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/kubernetes/test/e2e/framework"
e2elog "k8s.io/kubernetes/test/e2e/framework/log"

"github.com/onsi/ginkgo"
)

var (
dualStack bool
podSubnetInKubeadmConfig bool
serviceSubnetInKubeadmConfig bool
)

// Defines a container for all the test specifications aimed at verifying
// that kubeadm configures the networking as expected.
// To skip this test, use SKIP=setup-networking
var _ = KubeadmDescribe("networking [setup-networking]", func() {

// Get an instance of the k8s test framework
f := framework.NewDefaultFramework("networking")

// Tests in this container are not expected to create new objects in the cluster,
// so we disable namespace creation for faster execution
f.SkipNamespaceCreation = true

// A cluster is either single-stack (IPv4 or IPv6) or dual-stack.
// We can determine this from the kubeadm config by looking at the number of
// CIDRs specified in networking.podSubnet
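// (e.g. a dual-stack podSubnet might look like "10.244.0.0/16,fd00:10:244::/56";
// the example values are illustrative)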
ginkgo.BeforeEach(func() {

// gets the ClusterConfiguration from the kubeadm-config ConfigMap as an untyped map
cc := getClusterConfiguration(f.ClientSet)
// Extract the networking.podSubnet
// Please note that this test assumes that the user does not alter network configs
// using the extraArgs option.
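// A typical networking stanza in the ClusterConfiguration looks like
// (values are illustrative):
//   networking:
//     podSubnet: 10.244.0.0/16
//     serviceSubnet: 10.96.0.0/12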
if _, ok := cc["networking"]; ok {
netCC := cc["networking"].(map[interface{}]interface{})
if ps, ok := netCC["podSubnet"]; ok {
// If podSubnet is not specified, the podSubnet cases will be skipped.
// Note that kubeadm does not currently apply a default for podSubnet.
podSubnetInKubeadmConfig = true
cidrs := strings.Split(ps.(string), ",")
if len(cidrs) > 1 {
dualStack = true
}
}
if _, ok := netCC["serviceSubnet"]; ok {
// If serviceSubnet is not specified, serviceSubnet cases will be skipped.
serviceSubnetInKubeadmConfig = true
}
}
})

ginkgo.Context("single-stack", func() {
ginkgo.Context("podSubnet", func() {
ginkgo.It("should be properly configured if specified in kubeadm-config", func() {
if dualStack {
framework.Skipf("Skipping because cluster is dual-stack")
}
if !podSubnetInKubeadmConfig {
framework.Skipf("Skipping because podSubnet was not specified in kubeadm-config")
}
cc := getClusterConfiguration(f.ClientSet)
if _, ok := cc["networking"]; ok {
netCC := cc["networking"].(map[interface{}]interface{})
if ps, ok := netCC["podSubnet"]; ok {
// Check that the pod CIDR allocated to the node(s) is within the kubeadm-config podSubnet.
nodes, err := f.ClientSet.CoreV1().Nodes().List(metav1.ListOptions{})
Review thread:

fabriziopandini (Member): This test is checking kubeadm behaviour indirectly, and I'm wondering if instead we should test the kubeadm bit more specifically. kubeadm's only responsibility for networking is to pass the settings to the control plane components (in this case, to translate podSubnet into the cluster-cidr, allocate-node-cidrs and node-cidr-mask-size flags for the controller manager).

Arvinderpal (Contributor, Author): What do you have in mind? I followed this approach based on what I saw in other tests.

fabriziopandini (Member): Sorry for not being clear. I think we should have a test that checks that kubeadm assigns the expected values to cluster-cidr, allocate-node-cidrs and node-cidr-mask-size (in addition to, or as a replacement for, the current test).

Arvinderpal (Contributor, Author, Jul 23, 2019): @fabriziopandini allocate-node-cidrs is internal to the controller-manager and is not exposed, so I don't see any way to verify it. node-cidr-mask-size is calculated by kubeadm (see func calcNodeCidrSize). There are some issues with verifying it:

  1. This func currently assumes IPv4 (it assigns /24 to IPv6 -- a known problem).
  2. We'll need to change this func to support dual-stack, so node-cidr-mask-size will actually become a slice.
  3. node-cidr-mask-size is not written to the kubeadm config, so we would have to duplicate this func in the e2e tests in order to verify it.
  4. calcNodeCidrSize is already unit tested.

Because of these reasons, I would prefer not to add e2e tests for this. Perhaps when the dual-stack changes land, we can revisit this.

fabriziopandini (Member): Ok, I got your point and I don't want to block on this. However, I think this test is useful and doable by reading the args in the controller-manager pod, so please add an item to the umbrella issue: "implement a test that checks kubeadm assigns the expected value to cluster-cidr, allocate-node-cidrs and node-cidr-mask-size (in addition to or as a replacement for the current test)".

framework.ExpectNoError(err)
for _, node := range nodes.Items {
if !subnetWithinSubnet(ps.(string), node.Spec.PodCIDR) {
e2elog.Failf("failed due to node(%v) IP %v not inside configured pod subnet: %s", node.Name, node.Spec.PodCIDR, ps)
}
}
}
}
})
})
ginkgo.Context("serviceSubnet", func() {
ginkgo.It("should be properly configured if specified in kubeadm-config", func() {
if dualStack {
framework.Skipf("Skipping because cluster is dual-stack")
}
if !serviceSubnetInKubeadmConfig {
framework.Skipf("Skipping because serviceSubnet was not specified in kubeadm-config")
}
cc := getClusterConfiguration(f.ClientSet)
if _, ok := cc["networking"]; ok {
netCC := cc["networking"].(map[interface{}]interface{})
if ss, ok := netCC["serviceSubnet"]; ok {
// Get the kubernetes service in the default namespace.
// Check that service CIDR allocated is within the serviceSubnet range.
svc, err := f.ClientSet.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
framework.ExpectNoError(err)
if !ipWithinSubnet(ss.(string), svc.Spec.ClusterIP) {
e2elog.Failf("failed due to service(%v) cluster-IP %v not inside configured service subnet: %s", svc.Name, svc.Spec.ClusterIP, ss)
}
}
}
})
})
})
})

// ipWithinSubnet returns true if an IP (targetIP) falls within the reference subnet (refIPNet)
func ipWithinSubnet(refIPNet, targetIP string) bool {
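// NOTE: the parse error is discarded; refIPNet is assumed to be a valid CIDR,
// since it comes from the validated kubeadm configuration.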
_, rNet, _ := net.ParseCIDR(refIPNet)
tIP := net.ParseIP(targetIP)
return rNet.Contains(tIP)
}

// subnetWithinSubnet returns true if a subnet (targetNet) falls within the reference subnet (refIPNet)
func subnetWithinSubnet(refIPNet, targetNet string) bool {
_, rNet, _ := net.ParseCIDR(refIPNet)
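// NOTE: the parse errors are discarded here as well. Also note this checks only
// that the base address of targetNet falls within refIPNet, not that the
// entire target range is contained in it.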
tNet, _, _ := net.ParseCIDR(targetNet)
return rNet.Contains(tNet)
}
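
Following up on the review thread above: the suggested flag check could be implemented by reading the arguments of the kube-controller-manager static pod. Below is a minimal sketch of that idea, not part of this PR. It assumes kubeadm's usual component=kube-controller-manager label on the static pod in kube-system and that the flags are passed via the container command; the helper name clusterCIDRFromControllerManager is hypothetical, and it reuses the imports already present in networking_test.go.

// clusterCIDRFromControllerManager is a hypothetical helper sketching the
// reviewer's suggestion: read the --cluster-cidr flag from the
// kube-controller-manager static pod so a test can compare it with the
// networking.podSubnet value found in the kubeadm-config ConfigMap.
// The second return value reports whether the flag was found.
func clusterCIDRFromControllerManager(f *framework.Framework) (string, bool) {
	pods, err := f.ClientSet.CoreV1().Pods("kube-system").List(metav1.ListOptions{
		LabelSelector: "component=kube-controller-manager",
	})
	framework.ExpectNoError(err)
	for _, pod := range pods.Items {
		for _, c := range pod.Spec.Containers {
			// kubeadm passes the controller-manager flags via the container command
			for _, arg := range c.Command {
				if strings.HasPrefix(arg, "--cluster-cidr=") {
					return strings.TrimPrefix(arg, "--cluster-cidr="), true
				}
			}
		}
	}
	return "", false
}

A test using this helper could then assert that the returned value matches networking.podSubnet from the ClusterConfiguration, which would cover the cluster-cidr part of the reviewer's request directly.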