This repository has been archived by the owner on Sep 4, 2021. It is now read-only.

runtime toggle #551

Merged
merged 7 commits into from
Aug 12, 2016

Conversation

Contributor

@peebs peebs commented Jun 21, 2016

This adds the ability to toggle container runtime in coreos-kubernetes.

This is still a WIP but mostly complete and blocked on some external factors. To use rktnetes you just set the runtime flag to rkt and the release channel to alpha if using kube-aws.

One temporary workaround is escaping the container chroot for invocations of rkt. I expect to merge this for now though.

Updates #501

@yifan-gu

cc @tmrts @s-urbaniak @euank

ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers={{.SecureAPIServers}} \
--network-plugin-dir=/etc/kubernetes/cni/net.d \
--network-plugin={{.K8sNetworkPlugin}} \
--container-runtime=rkt \
--rkt-path=/usr/bin/rkt \
--rkt-stage1-image=/usr/lib/rkt/stage1-images/stage1-coreos.aci \


Will need to change this to the image name in the next beta: coreos.com/rkt/stage1-coreos to enable stage1 caching.

Contributor Author


What do you mean by next beta? Should just changing that path to https://github.com/coreos/rkt/blob/master/Documentation/configuration.md work? I don't have this working locally yet.


@pbx0 I mean that now kubernetes/kubernetes#27707 is merged, instead of using the path to the ACI, we will change to using the stage1's name (to enable caching instead of unpacking the image every time)


@pbx0 This will take effect after we build another k8s release
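As a sketch of the change being discussed (assuming the kubelet flag names stay the same), the unit would drop the on-disk ACI path in favor of the stage1 image name once a Kubernetes release containing kubernetes/kubernetes#27707 is available:

```
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  --rkt-stage1-image=coreos.com/rkt/stage1-coreos \
```

Using the image name lets rkt resolve the stage1 from its local image store instead of unpacking the ACI on every invocation.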

@yifan-gu

@pbx0 Could you point me to a guide on how to use this?

ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://localhost:8080 \
--network-plugin-dir=/etc/kubernetes/cni/net.d \
--network-plugin={{.K8sNetworkPlugin}} \
--container-runtime={{.ContainerRuntime}} \
--rkt-path=/usr/bin/rkt \
--rkt-stage1-image=/usr/lib/rkt/stage1-images/stage1-coreos.aci \


Needs to be changed to coreos.com/rkt/stage1-coreos

@Quentin-M

Bump @pbx0

@peebs
Contributor Author

peebs commented Jul 11, 2016

@Quentin-M updated the description to match the current state of this PR

@Quentin-M

Thanks.

@Quentin-M

As a side note, we might have to disable rkt-gc when the CNI plugin is being used, to let the kubelet recycle the pods/networks itself.

@aaronlevy
Contributor

@pbx0 I just merged the v1.3 changes -- would you be able to rebase this to minimize the changeset?

@peebs
Contributor Author

peebs commented Jul 11, 2016

rebased

@@ -33,6 +33,9 @@ export DNS_SERVICE_IP=10.3.0.10
# Whether to use Calico for Kubernetes network policy.
export USE_CALICO=false

# Determines the container runtime for kubernetes to use. Accepts 'docker' or 'rkt'.
export RUNTIME=docker
Contributor


nit, CONTAINER_RUNTIME seems clearer.

For consistency, should this be K8S_CONTAINER_RUNTIME (to match the network-plugin one)?

Contributor Author


I'll change it to CONTAINER_RUNTIME. The K8S prefix seems a bit repetitive since we are clearly (to me at least) configuring kubernetes. The use of that prefix already seems inconsistent in my mind anyway.

@@ -151,7 +153,7 @@ func (c Cluster) Config() (*Config, error) {
config.APIServers = fmt.Sprintf("http://%s:8080", c.ControllerIP)
config.SecureAPIServers = fmt.Sprintf("https://%s:443", c.ControllerIP)
config.APIServerEndpoint = fmt.Sprintf("https://%s", c.ExternalDNSName)
if config.UseCalico {
if config.UseCalico || config.ContainerRuntime == "rkt" {
Contributor


Unless there's a specific reason, we should just start always using CNI

Contributor Author


Added a commit that moves everything to cni and takes away the complication here.


[Install]
RequiredBy=rkt-api.service
Before=rkt-api.service
WantedBy=multi-user.target
Contributor


nix

@colhom
Contributor

colhom commented Aug 11, 2016

@pbx0 looks good minus the WantedBy stuff. Anywhere you have a service that is already RequiredBy dependents, you don't want to say WantedBy: multi-user.target.

@peebs
Contributor Author

peebs commented Aug 11, 2016

@colhom Thanks for the massive help fixing my systemd units. But since kubelet.service has dependent units, we also want to get rid of the multi-user.target thing there as well, right?

Or rather, is it because kubelet.service isn't a dependency of another unit that we do want it?

ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image

[Install]
RequiredBy=rkt-api.service
Member


Why not have rkt-api use Requires= and After= under [Unit]? Since these requirements are the rkt-api's expectations.

Contributor Author


If we do that, then we have to start the rkt-api and start the kubelet. The way we have it now, we just start the kubelet and systemd will start the rkt-api for us.

I think this is right, but, systemd units have been confusing me today. @colhom?
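A minimal sketch of the pattern being described (the fetch unit's name is hypothetical; the ExecStart comes from the diff above). Enabling this unit makes systemd pull it in whenever rkt-api.service starts, so only the kubelet needs to be started explicitly:

```
# stage1-fetch.service (hypothetical name)
[Unit]
Description=Pre-fetch rkt stage1 images
Before=rkt-api.service

[Service]
Type=oneshot
ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image

[Install]
RequiredBy=rkt-api.service
```

The alternative (Requires=/After= in rkt-api's [Unit] section) expresses the same dependency from the other side, but requires enabling/starting both units rather than letting the reverse dependency do it.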

Member


Yeah, you can disregard now.

@peebs
Contributor Author

peebs commented Aug 11, 2016

OK, I believe all comments are addressed up to this point and we've settled on the unit stuff.

RestartSec=10
[Install]
RequiredBy=kubelet.service
Before=kubelet.service
Contributor


Oops sorry @pbx0, Before and After both go in the [Unit] section, not [Install]

Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Man, I am terrible at following directions regarding systemd units. Fixing now.

@colhom
Contributor

colhom commented Aug 11, 2016

@pbx0 missed something in my review, @dghubble pointed it out to me. Before= directive goes in the [Unit] section. Anywhere you have it in [Install], just move it up.
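Applied to the fragment being reviewed above, the corrected unit would look like this (a sketch; only the directives shown in the diff are included):

```
[Unit]
Before=kubelet.service

[Service]
RestartSec=10

[Install]
RequiredBy=kubelet.service
```

Before= and After= are ordering directives and belong in [Unit]; [Install] only holds the directives that `systemctl enable` acts on, such as RequiredBy= and WantedBy=.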

@colhom
Contributor

colhom commented Aug 11, 2016

After that, LGTM from me.

@peebs peebs force-pushed the rktnetes branch 2 times, most recently from 2127dcd to 7f582cc on August 12, 2016 00:08
Patrick Baxter added 6 commits August 11, 2016 17:08
Must disable passing the bridge IP parameter from flannel to docker
directly, or else two bridges with the same IP will be created.
Previously the /sys mount wouldn't work without stage1 fly and docker
didn't like the /proc mount point.
For single-node and multi-node Vagrant, we will default back to CoreOS
alpha. For kube-aws, we will exit if the image version is less than
1122.0.0 when using rkt as the runtime.
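The image-version gate described in the commit message could be sketched like this (the function names and the way the version is obtained are assumptions; only the 1122.0.0 minimum comes from the commit message):

```shell
#!/bin/sh
# version_ge A B: succeed when version A >= version B, using sort -V
# for proper version-string ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

check_rkt_image_version() {
  # Refuse to proceed with the rkt runtime on CoreOS images older
  # than 1122.0.0, per the commit message above.
  if version_ge "$1" "1122.0.0"; then
    echo "image version $1 ok for rkt runtime"
  else
    echo "error: rkt runtime requires CoreOS image >= 1122.0.0 (got $1)"
    return 1
  fi
}
```

sort -V handles multi-digit components correctly (e.g. 1122.0.0 sorts after 999.0.0), which a plain lexical comparison would get wrong.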
#!/bin/sh
# This is bind mounted into the kubelet rootfs and all rkt shell-outs go
# through this rkt wrapper. It essentially enters the host mount namespace
# (which is it already in) only for the purpose of breaking out of the chroot
Contributor


nit: s/is it/it is/

Contributor Author


done

Calling rkt from the kubelet fly container will call rkt in such a way
that its invocation breaks out of the container chroot. This should
be removed once the write-api stuff is finalized in rktnetes or made
into an rkt fly feature.

See: rkt/rkt#2878
@peebs
Contributor Author

peebs commented Aug 12, 2016

OK, now I think we've got the unit file stuff sorted out.

@colhom
Contributor

colhom commented Aug 12, 2016

Thanks @pbx0 for shepherding this through, great job rktnetes team.

@colhom colhom merged commit 80c11ad into coreos:master Aug 12, 2016
@peebs peebs deleted the rktnetes branch August 12, 2016 22:06
dghubble added a commit to poseidon/matchbox that referenced this pull request Oct 9, 2016
* Set the Docker bridge IP and IP masq to empty string
* coreos/coreos-kubernetes#551