Quality of life enhancements for CNI delegation code #63

Merged · 2 commits · Apr 1, 2019
22 changes: 14 additions & 8 deletions README.md
@@ -243,15 +243,18 @@ Spec:
Validation: True
Events: <none>
```

__BE WARNED: DANM stores pretty important information in DanmNet objects. Under no circumstances shall a DanmNet be deleted if there are any running Pods referencing it!__
__Such action will undoubtedly lead to ruin and DANMation!__
#### Generally supported DANM API features
##### Naming container interfaces
Generally speaking, you need to pay attention to how the network interfaces of your Pods are named inside their respective network namespaces.
The hard reality to keep in mind is that you shall always have an interface literally called "eth0" created within all your Kubernetes Pods, because Kubelet will always search for the existence of such an interface at the end of Pod instantiation.
-If such an interface does not exist after CNI is invoked, the state of the Pod will be considered "faulty", and it will be re-created in a loop.
+If such an interface does not exist after CNI is invoked (also having an IPv4 address), the state of the Pod will be considered "faulty", and it will be re-created in a loop.
To be able to comply with this Kubernetes limitation, DANM supports both explicit and implicit interface naming schemes for all NetworkTypes!

An interface connected to a DanmNet containing the container_prefix attribute will always be named accordingly. You can use this API to explicitly set descriptive, unique names for NICs connecting to this network.
-In case container_prefix is not set in an interface's network descriptor, DANM will automatically name the interface "ethX", where X is a unique integer number corresponding to the sequence number of the network connection (e.g. the first interface defined in the annotation is called "eth0", second interface "eth1" etc.)
+In case container_prefix is not set in an interface's network descriptor, DANM automatically names the interface "ethX", where X is a unique integer number corresponding to the sequence number of the network connection (e.g. the first interface defined in the annotation is called "eth0", second interface "eth1" etc.)
DANM even supports the mixing of the networking schemes within the same Pod, and it supports the whole naming scheme for all network backends.
While the feature provides complete control over the name of interfaces, ultimately it is the network administrators' responsibility to:
- make sure exactly one interface is named eth0 in every Pod
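For illustration, a DanmNet using the explicit naming scheme might look like the following sketch. All names and attribute values here are hypothetical, and the exact attribute casing should be checked against the project's schema files:

```yaml
# Hypothetical DanmNet sketch: Pods connecting to this network get NICs
# named ext0, ext1, ... via container_prefix, instead of the default ethX scheme.
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: external
spec:
  NetworkID: external
  NetworkType: ipvlan
  Options:
    host_device: ens4        # assumed parent host device name
    container_prefix: ext    # explicit naming: NICs become ext0, ext1, ...
    cidr: 10.0.0.0/24
```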
@@ -273,7 +276,7 @@ Pay special attention to the DanmNet attribute called "NetworkType". This parame
In case this parameter is set to "ipvlan", or is missing; then DANM's in-built IPVLAN CNI plugin creates the network (see next chapter for details).
In case this attribute is provided and set to another value than "ipvlan", then network management is delegated to the CNI plugin with the same name.
The binary will be searched in the configured CNI binary directory.
-Example: when a Pod is created and requests a network connection to a DanmNet with "NetworkType" set to "flannel", then DANM will delegate the creation of this network interface to the /opt/cni/bin/flannel binary.
+Example: when a Pod is created and requests a network connection to a DanmNet with "NetworkType" set to "flannel", then DANM will delegate the creation of this network interface to the <CONFIGURED_CNI_PATH_IN_KUBELET>/flannel binary.
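As an illustrative sketch of such a delegating network (names are hypothetical, not taken from the project):

```yaml
# Hypothetical DanmNet delegating interface creation to Flannel:
# because NetworkType is not "ipvlan", DANM looks up the "flannel"
# binary in the configured CNI binary directory and invokes it.
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: flannel-net
spec:
  NetworkID: flannel-net
  NetworkType: flannel
```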
##### Creating the configuration for delegated CNI operations
We strongly believe that network management in general should be driven by one generic API. Therefore, DANM is capable of "translating" the generic options coming from a DanmNet object into the specific "language" the delegated CNI plugin understands.
This way users can dynamically configure various networking solutions via the same abstract interface, without caring about how a specific option is called exactly in the terminology of the delegated solution.
@@ -282,23 +285,26 @@ A generic framework supporting this method is built into DANM's code, but still
As a result, DANM currently supports two integration levels:

- **Dynamic integration level:** CNI-specific network attributes (such as IP ranges, parent host devices etc.) can be controlled on a per network level, taken directly from a DanmNet object
-- **Static integration level:** CNI-specific network attributes (such as IP ranges, parent host devices etc.) can be only configured on a per node level, via a static CNI configuration files (Note: this is the default CNI configuration method)
+- **Static integration level:** CNI-specific network attributes (such as IP ranges, parent host devices etc.) can be only configured via static CNI configuration files (Note: this is the default CNI configuration method)

Our aim is to integrate all the popular CNIs into the DANM ecosystem over time, but currently the following CNIs have achieved dynamic integration level:

- DANM's own, in-built IPVLAN CNI plugin
- Set the "NetworkType" parameter to value "ipvlan" to use this backend
-- Intel's DPDK-capable [SR-IOV CNI plugin](https://github.com/intel/sriov-cni )
+- Intel's [SR-IOV CNI plugin](https://github.com/intel/sriov-cni )
- Set the "NetworkType" parameter to value "sriov" to use this backend
- Generic MACVLAN CNI from the CNI plugins example repository [MACVLAN CNI plugin](https://github.com/containernetworking/plugins/blob/master/plugins/main/macvlan/macvlan.go )
- Set the "NetworkType" parameter to value "macvlan" to use this backend

No separate configuration needs to be provided to DANM when it connects Pods to DanmNets, if the network is backed by a CNI plugin with dynamic integration level.
Everything happens automatically based on the DanmNet API itself!

-When network management is delegated to CNI plugins with static integration level; DANM will read their configuration from the configured CNI config directory.
-For example, when a Pod is connected to a DanmNet with "NetworkType" set to "flannel", DANM will pass the content of /etc/cni/net.d/flannel.conf file to the /opt/cni/bin/flannel binary by invoking a standard CNI operation.
-Generally supported DANM API-based features are configured even in this case.
+When network management is delegated to CNI plugins with static integration level; DANM reads their configuration from the configured CNI config directory.
+The directory can be configured via setting the "CNI_CONF_DIR" environment variable in DANM CNI's context (be it in the host namespace, or inside a Kubelet container). Default value is "/etc/cni/net.d".
+In case there are multiple configuration files present for the same backend, users can control which one is used in a specific network provisioning operation via the NetworkID DanmNet parameter.
+
+So, all in all: a Pod connecting to a DanmNet with "NetworkType" set to "bridge", and "NetworkID" set to "example_network" gets an interface provisioned by the <CONFIGURED_CNI_PATH_IN_KUBELET>/bridge binary based on the <CNI_CONF_DIR>/example_network.conf file!
+In addition to simply delegating the interface creation operation, generally supported DANM API-based features -such as static and dynamic IP route provisioning, flexible interface naming- are also configured by DANM.
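The bridge/example_network combination described above can be sketched as follows. This is an assumption-laden illustration: the attribute casing and file contents are illustrative, not taken from the project:

```yaml
# Hypothetical DanmNet for static-level delegation:
# NetworkType selects the CNI binary ("bridge"),
# NetworkID selects the static config file (example_network.conf).
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: example
spec:
  NetworkID: example_network
  NetworkType: bridge
```

With this object in place, DANM would read `<CNI_CONF_DIR>/example_network.conf` (a standard CNI network configuration file for the bridge plugin) and pass it to the bridge binary via a standard CNI ADD operation.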
##### Connecting Pods to DanmNets
Pods can request network connections to DanmNets by defining one or more network connections in the annotation of their (template) spec field, according to the schema described in the **schema/network_attach.yaml** file.
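A minimal sketch of such a Pod, assuming the `danm.k8s.io/interfaces` annotation key and hypothetical network names (the authoritative schema is the **schema/network_attach.yaml** file mentioned above):

```yaml
# Hypothetical Pod requesting two DanmNet connections via its annotation.
# The first connection becomes eth0, the second eth1, unless the networks
# define a container_prefix.
apiVersion: v1
kind: Pod
metadata:
  name: multi-homed-pod
  annotations:
    danm.k8s.io/interfaces: |
      [
        {"network": "management"},
        {"network": "external"}
      ]
spec:
  containers:
  - name: app
    image: busybox
```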

15 changes: 13 additions & 2 deletions glide.lock

Some generated files are not rendered by default.

14 changes: 6 additions & 8 deletions pkg/cnidel/cniconfs.go
```diff
@@ -2,11 +2,10 @@ package cnidel

 import (
   "errors"
-  "log"
-  "io/ioutil"
   "encoding/json"
-  "github.com/nokia/danm/pkg/danmep"
+  "io/ioutil"
   danmtypes "github.com/nokia/danm/crd/apis/danm/v1"
+  "github.com/nokia/danm/pkg/danmep"
   sriov_utils "github.com/intel/sriov-cni/pkg/utils"
 )
```

```diff
@@ -34,12 +33,12 @@ var (
 )

 //This function creates CNI configuration for all static-level backends
+//The CNI binary matching with NetworkType is invoked with the CNI config file matching with the NetworkID parameter
 func readCniConfigFile(netInfo *danmtypes.DanmNet) ([]byte, error) {
-  cniType := netInfo.Spec.NetworkType
-  //TODO: the path from where the config is read should not be hard-coded
-  rawConfig, err := ioutil.ReadFile("/etc/cni/net.d/" + cniType + ".conf")
+  cniConfig := netInfo.Spec.NetworkID
+  rawConfig, err := ioutil.ReadFile(cniConfigDir + "/" + cniConfig + ".conf")
   if err != nil {
-    return nil, errors.New("Could not load CNI config file for plugin:" + cniType)
+    return nil, errors.New("Could not load CNI config file: " + cniConfig + " for plugin:" + netInfo.Spec.NetworkType)
   }
   return rawConfig, nil
 }
```
```diff
@@ -80,7 +79,6 @@ func getMacvlanCniConfig(netInfo *danmtypes.DanmNet, ipamOptions danmtypes.IpamC
     MTU: 1500,
     Ipam: ipamOptions,
   }
-  log.Printf("LOFASZ MACVLAN CONFIG %v/n", macvlanConfig)
   rawConfig, err := json.Marshal(macvlanConfig)
   if err != nil {
     return nil, errors.New("Error putting together CNI config for MACVLAN plugin: " + err.Error())
```
9 changes: 5 additions & 4 deletions pkg/cnidel/cnidel.go
```diff
@@ -21,9 +21,7 @@ var (
   ipamType = "fakeipam"
   defaultDataDir = "/var/lib/cni/networks"
   flannelBridge = getEnv("FLANNEL_BRIDGE", "cbr0")
-  dpdkNicDriver = os.Getenv("DPDK_NIC_DRIVER")
-  dpdkDriver = os.Getenv("DPDK_DRIVER")
-  dpdkTool = os.Getenv("DPDK_TOOL")
+  cniConfigDir = getEnv("CNI_CONF_DIR", "/etc/cni/net.d")
 )

 // IsDelegationRequired decides if the interface creation operations should be delegated to a 3rd party CNI, or can be handled by DANM
```
```diff
@@ -104,7 +102,10 @@ func getCniIpamConfig(options danmtypes.DanmNetOption, ip4, ip6 string) danmtype
     subnet string
     ip string
   )
-  if options.Cidr != "" {
+  if ip4 == "" && ip6 == "" {
+    return danmtypes.IpamConfig{}
+  }
+  if ip4 != "" {
     ip = ip4
     subnet = options.Cidr
   } else {
```
5 changes: 1 addition & 4 deletions pkg/danmep/ep.go
```diff
@@ -49,9 +49,6 @@ func createContainerIface(ep danmtypes.DanmEp, dnet *danmtypes.DanmNet, device s
       log.Println("Could not switch back to default ns during IPVLAN interface creation:" + err.Error())
     }
   }()
-  //cns,_ := ns.GetCurrentNS()
-  cpath := origns.Path()
-  log.Println("EP NS BASE PATH:" + cpath)
   iface, err := netlink.LinkByName(device)
   if err != nil {
     return errors.New("cannot find host device because:" + err.Error())
```
```diff
@@ -138,7 +135,7 @@ func sendGratArps(srcAddrIpV4, srcAddrIpV6, ifaceName string) {
     err = executeArping(srcAddrIpV6, ifaceName)
   }
   if err != nil {
-    log.Println(err.Error())
+    log.Println("WARNING: sending gARP Reply failed with error: " + err.Error() + ", but we will ignore that for now!")
   }
 }
```
