ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers={{.SecureAPIServers}} \
  --network-plugin-dir=/etc/kubernetes/cni/net.d \
  --network-plugin={{.K8sNetworkPlugin}} \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  --rkt-stage1-image=/usr/lib/rkt/stage1-images/stage1-coreos.aci \
Will need to change this to the image name in the next beta, `coreos.com/rkt/stage1-coreos`, to enable stage1 caching.
What do you mean by next beta? Should just changing that path to https://github.com/coreos/rkt/blob/master/Documentation/configuration.md work? I don't have this working locally yet.
@pbx0 I mean since now kubernetes/kubernetes#27707 is merged, so instead of using the path to the aci, we will change to using the stage1's name (to enable caching instead of unpacking the image every time)
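For illustration, the kubelet flag above would then change from the file path to the stage1 image name, roughly like this (a sketch based on this thread, not the final diff):

```
--rkt-stage1-image=coreos.com/rkt/stage1-coreos \
```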
@pbx0 This will take effect after we build another k8s release.
@pbx0 Could you point me to a guide on how to use this?
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=http://localhost:8080 \
  --network-plugin-dir=/etc/kubernetes/cni/net.d \
  --network-plugin={{.K8sNetworkPlugin}} \
  --container-runtime={{.ContainerRuntime}} \
  --rkt-path=/usr/bin/rkt \
  --rkt-stage1-image=/usr/lib/rkt/stage1-images/stage1-coreos.aci \
Needs to be changed to coreos.com/rkt/stage1-coreos
Bump @pbx0
@Quentin-M updated the description to match the current state of this PR
Thanks.
As a side note, we might have to disable rkt-gc when the CNI plugin is being used, to let the kubelet recycle the pods/networks itself.
@pbx0 I just merged the v1.3 changes -- would you be able to rebase this to minimize the changeset?
rebased
@@ -33,6 +33,9 @@ export DNS_SERVICE_IP=10.3.0.10
 # Whether to use Calico for Kubernetes network policy.
 export USE_CALICO=false
 
+# Determines the container runtime for kubernetes to use. Accepts 'docker' or 'rkt'.
+export RUNTIME=docker
nit: `CONTAINER_RUNTIME` seems clearer. For consistency, should this be `K8S_CONTAINER_RUNTIME` (to match the network-plugin one)?
I'll change it to `CONTAINER_RUNTIME`. The `K8S` prefix seems a bit repetitive since we are clearly (to me at least) configuring kubernetes. The use of that prefix already seems inconsistent in my mind anyway.
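So the knob would end up looking roughly like this (a sketch with the renamed variable, not the final file):

```sh
# Determines the container runtime for kubernetes to use. Accepts 'docker' or 'rkt'.
export CONTAINER_RUNTIME=rkt
```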
@@ -151,7 +153,7 @@ func (c Cluster) Config() (*Config, error) {
 	config.APIServers = fmt.Sprintf("http://%s:8080", c.ControllerIP)
 	config.SecureAPIServers = fmt.Sprintf("https://%s:443", c.ControllerIP)
 	config.APIServerEndpoint = fmt.Sprintf("https://%s", c.ExternalDNSName)
-	if config.UseCalico {
+	if config.UseCalico || config.ContainerRuntime == "rkt" {
Unless there's a specific reason not to, we should just start always using CNI.
Added a commit that moves everything to cni and takes away the complication here.
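Roughly what that commit implies (a hypothetical reconstruction; the actual commit isn't quoted here, and the branch body is assumed):

```diff
-	if config.UseCalico || config.ContainerRuntime == "rkt" {
-		config.K8sNetworkPlugin = "cni"
-	}
+	// Always use CNI, regardless of runtime or Calico.
+	config.K8sNetworkPlugin = "cni"
```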
[Install]
RequiredBy=rkt-api.service
Before=rkt-api.service
WantedBy=multi-user.target
nix
@pbx0 looks good minus the `WantedBy=multi-user.target`.
@colhom Thanks for the massive help fixing my systemd units. But, since kubelet.service has dependent units, we also want to get rid of the multi-user.target thing there as well, right? Or rather, is it because kubelet.service isn't a dependency for another unit that we do want it?
ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image

[Install]
RequiredBy=rkt-api.service
Why not have rkt-api use a `Requires` and `After` under `[Unit]`? Since these requirements are the rkt-api's expectations.
If we do that, then we have to start the rkt-api and start the kubelet. The way we have it now, we just start the kubelet and systemd will start the rkt-api for us.
I think this is right, but systemd units have been confusing me today. @colhom?
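For reference, the shape being described (a sketch using the unit names above): the stage1-loading unit declares itself a requirement of rkt-api via its `[Install]` section, so enabling it is enough and starting the kubelet pulls everything in:

```ini
[Install]
# Enabling this unit adds a Requires= edge from rkt-api.service to it,
# so starting the kubelet (which in turn requires rkt-api) starts this too.
RequiredBy=rkt-api.service
```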
Yeah, you can disregard now.
OK, I believe all comments are addressed up to this point and we've settled on the unit stuff.
RestartSec=10
[Install]
RequiredBy=kubelet.service
Before=kubelet.service
Oops sorry @pbx0, `Before` and `After` both go in the `[Unit]` section, not `[Install]`.
Man, I am terrible at following directions regarding systemd units. Fixing now.
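For reference, the corrected split looks like this (a sketch of the rkt-api unit, based only on the directives quoted above):

```ini
[Unit]
# Ordering directives belong here...
Before=kubelet.service

[Install]
# ...while installation-time dependencies stay in [Install].
RequiredBy=kubelet.service
```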
@pbx0 missed something in my review, @dghubble pointed it out to me.
After that, LGTM from me.
Must disable passing the bridge IP parameter from flannel to docker directly, or else two bridges with the same IP will be created.
Previously the /sys mount wouldn't work without stage1 fly, and docker didn't like the /proc mount point.
For single-node and multi-node vagrant, we will default back to CoreOS alpha. For kube-aws, we will exit if the image version is less than 1122.0.0 when using rkt as the runtime.
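As a sketch of the first fix (assuming flannel's usual `DOCKER_OPT_*` environment variables; the actual change isn't quoted here), the bridge options flannel would otherwise hand to docker get blanked, so docker0 doesn't claim the same subnet as the CNI-managed bridge:

```sh
# Hypothetical sketch: keep flannel from passing docker a bridge IP/masq flag.
DOCKER_OPT_BIP=""
DOCKER_OPT_IP_MASQ=""
```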
#!/bin/sh
# This is bind mounted into the kubelet rootfs and all rkt shell-outs go
# through this rkt wrapper. It essentially enters the host mount namespace
# (which is it already in) only for the purpose of breaking out of the chroot
nit: s/is it/it is/
done
Calling rkt from the kubelet fly container now invokes rkt in such a way that the invocation breaks out of the container chroot. This should be removed once the write-api stuff is finalized in rktnetes or made a rkt fly feature. See: rkt/rkt#2878
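A minimal sketch of that wrapper (assuming `nsenter` is available; the script quoted above is truncated):

```sh
#!/bin/sh
# Bind-mounted over rkt inside the kubelet container: re-enter the host's
# mount namespace via PID 1 so the real rkt binary runs outside the chroot.
exec nsenter --mount=/proc/1/ns/mnt -- /usr/bin/rkt "$@"
```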
OK, now I think we've got the unit file stuff sorted out.
Thanks @pbx0 for shepherding this through, great job rktnetes team.
* Set the Docker bridge IP and IP masq to empty string
* coreos/coreos-kubernetes#551
This adds the ability to toggle container runtime in coreos-kubernetes.
This is still a WIP but mostly complete, and blocked on some external factors. To use rktnetes you just set the runtime flag to `rkt` and, if using kube-aws, the release channel to `alpha`. One temporary workaround is escaping the container chroot for invocations of rkt. I expect to merge this for now though.
Updates #501