Add Windows Containers Support #22623
I'd like to plan a kickoff meeting with @BenjaminArmstrong and @sschuller sometime in the next couple of weeks. I'd also like to ask @sarahnovotny to create the Windows SIG as well. |
/cc @kubernetes/sig-node |
And here we go! |
Here are a few reasonable intros, for those following along: The Register: Hands On |
Thanks @quinton-hoole greatly appreciate the references. Exciting times @asultan001 @timothysc |
This is great to see! Just for a quick intro, I am the lead program manager for all server container technologies in Windows; my team is responsible for Windows Server Containers and Hyper-V Containers. |
Thanks @taylorb-microsoft looking forward to connecting once we have something a bit more concrete. |
I've created a shared document available at: https://goo.gl/NE0ABx to track our planning discussions. Thanks for helping us @taylorb-microsoft. |
A few of us at Apprenda, @taylorb-microsoft and @johngossman are going to do a quick sync up this week. We'll add to the shared doc @preillyme |
Could someone please grant me comment permission on the shared doc? Thanks, Q |
cc/ @colemickens |
I added a summary of the Apprenda/Microsoft meeting to the doc with some upcoming key action items. |
We have an "ok" story for container runtime specific flags, but it needs to be upgraded to a "great" story. We'd follow the
Great question, we should try to figure out what those would be and then have the discussion of a set of them together. ContainerSpecs were designed to be more generic than the default Docker container spec originally was, but I suspect we'll simply have options that don't work on all runtimes, and a way to document that. For instance, SELinux and AppArmor are already two items that don't work on all Linux distros, but they're encapsulated within higher-level security groupings. Path specs are likely to be painful on volumes. Dealing with persistent volumes across Windows systems may not change that much, although certain options simply won't be available. We've started the "how do we deal with images across multiple runtimes" discussion, but since Windows Docker would probably use the same format as Linux, I don't expect that to be an issue. |
@smarterclayton I know it isn't perfect, but would it be possible to take an approach similar to Cygwin, where we leverage a small util to transform paths to/from? Also, Docker on Windows (2016 TP3) appears to use the same image format. |
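For those following along, here is a minimal sketch (in Go) of the kind of small path-translation util mentioned above. The helper name and the mapping rules (drive-letter prefix, fallback to `C:`) are illustrative assumptions for discussion, not an agreed design.

```go
// Illustrative sketch of a Cygwin-style path translation helper.
package main

import (
	"fmt"
	"strings"
)

// toWindowsPath converts a POSIX-style path like "/c/data/logs" into a
// Windows path like `C:\data\logs`. Paths without a single-letter drive
// prefix are assumed to live on C: (an arbitrary choice for this sketch).
func toWindowsPath(posixPath string) string {
	p := strings.TrimPrefix(posixPath, "/")
	parts := strings.Split(p, "/")
	if len(parts) > 0 && len(parts[0]) == 1 {
		return strings.ToUpper(parts[0]) + `:\` + strings.Join(parts[1:], `\`)
	}
	return `C:\` + strings.ReplaceAll(p, "/", `\`)
}

func main() {
	fmt.Println(toWindowsPath("/c/data/logs"))     // C:\data\logs
	fmt.Println(toWindowsPath("/var/lib/kubelet")) // C:\var\lib\kubelet
}
```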
Can someone (dawnchen@google.com) grant me comment permission on that shared doc? |
Hey @dchen1107 I've added you as an editor to the shared document. |
I've been thinking about whether the kubelet should annotate or label itself with the platform it's running on. That might make arm, arm64, ppc64le, amd64 and windows handling easier, if there ever will be cross-platform clusters. WDYT? |
I think self-labeling based on host platform makes a ton of sense. It will help with idiomatic configuration per platform, but also lends itself to helping with cluster segregation. Is there anything else in k8s that currently self-labels? |
#9044 says cloud provider but can also cover platform stuff. |
OK, after reading through #9044, it would seem we could capture 'Platform' as a standard label in |
I guess sample values would be things like `linux/amd64` or `windows/amd64`. @davidopp When talking code changes, what should be added beyond this, or is this fine?

```go
// pkg/api/unversioned/well_known_labels.go:22
const LabelPlatform = "beta.kubernetes.io/platform"

// pkg/kubelet/kubelet.go:1042
node.ObjectMeta.Labels[unversioned.LabelPlatform] = runtime.GOOS + "/" + runtime.GOARCH
```

I could send a PR for this if you like. |
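To show how such a label might be consumed, here is a hedged usage sketch: a pod that targets Windows nodes via a `nodeSelector` on the proposed label. The pod name, image, and the sample label value `windows/amd64` (following the `GOOS + "/" + GOARCH` format above) are illustrative assumptions.

```go
// Sketch: constructing a pod that selects Windows nodes by the proposed label.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "windows-workload"}, // hypothetical name
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{
				"beta.kubernetes.io/platform": "windows/amd64", // assumed sample value
			},
			Containers: []v1.Container{
				{Name: "app", Image: "example.com/windows-app:latest"}, // placeholder image
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```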
Early prototype: csrwng@e755508. More than anything, it helps to identify gaps in function (just look at the chunks of code that are stubbed or commented out). The main issue I ran into was that Windows containers don't lend themselves well to the Linux model where 1 container = (mostly) 1 process. Windows containers tend to include more of the OS, including the service manager, and at least as of right now don't allow namespace sharing. Therefore it doesn't make sense to start a separate infra container to hold on to the IP, as on the Linux side. More importantly, a pod cannot be represented as a set of containers that share certain things. On Windows, it may make more sense to have 1 pod = 1 Windows container, with each container from the pod simply representing a separate process in that container. If modeled that way, it means that containers in a Windows pod cannot each use a different image; it also means that resource requirements, security constraints, etc. would apply to the entire pod, and not to each container. |
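To make the trade-off concrete, here is a purely illustrative sketch (not actual kubelet or prototype code) of what a "1 pod = 1 Windows container" mapping would imply; all type and function names are made up for this example.

```go
// Sketch of the hypothetical "1 pod = 1 Windows container" model: every pod
// container becomes a process in a single Windows container, which forces all
// of them onto one shared image.
package main

import (
	"fmt"
	"strings"
)

type podContainer struct {
	Name    string
	Image   string
	Command []string
}

type windowsPod struct {
	Image     string
	Processes []string
}

func mapPod(containers []podContainer) (*windowsPod, error) {
	if len(containers) == 0 {
		return nil, fmt.Errorf("pod has no containers")
	}
	wp := &windowsPod{Image: containers[0].Image}
	for _, c := range containers {
		if c.Image != wp.Image {
			// The limitation called out above: containers in a Windows pod
			// could not each use a different image under this model.
			return nil, fmt.Errorf("container %q uses image %q, but the pod's single Windows container uses %q",
				c.Name, c.Image, wp.Image)
		}
		wp.Processes = append(wp.Processes, strings.Join(c.Command, " "))
	}
	return wp, nil
}

func main() {
	wp, err := mapPod([]podContainer{
		{Name: "web", Image: "example/iis", Command: []string{"run-web.cmd"}},
		{Name: "sidecar", Image: "example/iis", Command: []string{"run-agent.cmd"}},
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", *wp)
}
```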
1 pod = 1 container is a dramatic shift, so it's worth diving into what is technically impossible today vs. what will never be possible. |
On sharing an IP between containers: no (at least as of today). Actually, the container IP is not yet surfaced through the Docker API, but I suspect that's just a bug in the implementation.
On sharing volumes: yes. We're going to find out more next week as we talk to Microsoft and understand what will eventually be possible vs. never possible. |
I would even say volume sharing is the key differentiator for multiple containers per pod. |
Another thing to consider is the level of configurability we will get with windows containers. If we can emulate pods by dynamically configuring containers, that might work as well. |
I guess we'll learn more next week on what MSFT would recommend, but IMO, having a short-term (if indeed it is short term) limitation on Windows that 1 pod = 1 container is better than having a host of materially important caveats on Windows with 1 pod = n containers, if that's what it ends up coming down to. Clear statements of limitations - and we know there will be limitations in a variety of areas - are important because if you're constantly reading the fine print, it just creates lots of friction. I'm with you, Clayton, that we should be very hesitant to limit to 1 pod = 1 container, but if the alternatives are going to be messy, we shouldn't force a model into place that isn't ready. |
I'm not against making progress, but limitations of a container runtime don't invalidate the intent of the pod to be a scheduling unit. Colocated work in pods is the core Kube abstraction, and so we wouldn't lightly propagate a restriction like that up the stack (to the APIs, validations, or clients).
I'd want a much deeper discussion of the trade-offs due to long-term technical gaps (as this work progresses) before we jump to a limitation like that. That's all info that should be part of the proposal doc (even if we can't answer it yet) as you move forward. |
Agreed. Just for clarification though - we'd be talking about deciding on the composition of a pod, not the fact that a pod is the unit of scheduling right? I'm certainly not suggesting we even consider the latter. |
If you want to achieve scheduling colocation onto a node for two service endpoints, you place them in a pod (even if you don't need volume or network sharing). Using other constructs to achieve that would be discouraged / not supported / "a bad idea". So in the worst possible case (no way to technically implement network IP sharing) we very well may want multiple containers per pod on Windows. |
Got it. Looks like it will come down to what can actually be shared across containers within a pod and which life cycle operations can be mutually guaranteed before this becomes a tougher call. |
hi everyone, members from Apprenda and Red Hat have created the first version of a technical investigations document on how to bring the kubelet to Windows. https://docs.google.com/document/d/1qhbxqkKBF8ycbXQgXlwMJs7QBReiSxp_PdsNNNUPRHs/edit?usp=sharing Our goal is to share some of these findings with Microsoft during our Wednesday meeting. The focal point of that meeting is to go over some of the questions for Microsoft that we started accumulating in this document. If you have additional questions to bring during that discussion, please add them to the document. |
Automatic merge from submit-queue

Automatically add node labels beta.kubernetes.io/{os,arch}

Proposal: #17981

As discussed in #22623:

> @davidopp: #9044 says cloud provider but can also cover platform stuff.

Adds a label `beta.kubernetes.io/platform` to `kubelet` that informs about the os/arch it's running on. Makes it easy to specify `nodeSelectors` for different arches in multi-arch clusters.

```console
$ kubectl get no --show-labels
NAME        STATUS    AGE       LABELS
127.0.0.1   Ready     1m        beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
$ kubectl describe no
Name:               127.0.0.1
Labels:             beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
CreationTimestamp:  Thu, 31 Mar 2016 20:39:15 +0300
```

@davidopp @vishh @fgrzadkowski @thockin @wojtek-t @ixdy @bgrant0607 @dchen1107 @preillyme
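As a hedged follow-up example of consuming these labels from code (assuming a kubeconfig at the default path and a recent client-go; the label keys shown follow the `beta.kubernetes.io/{os,arch}` naming from the merge above), one could list Windows nodes like this:

```go
// Sketch: list nodes labeled as Windows and print their architectures.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "beta.kubernetes.io/os=windows",
	})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name, n.Labels["beta.kubernetes.io/arch"])
	}
}
```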
Today in the Kubernetes community meeting, we demoed the alpha version of Windows Server Container support in Kubernetes, with Kubernetes running on Microsoft Azure. Feature kubernetes/enhancements#116 will track bringing the work of SIG-Windows to beta with release 1.5 of Kubernetes. If you want to help, join SIG-Windows at https://kubernetes.slack.com/messages/sig-windows cc: @sarahnovotny, @brendandburns |
Issues go stale after 30d of inactivity. Prevent issues from auto-closing with a `/lifecycle frozen` comment. If this issue is safe to close now, please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta |
I think this issue can be closed in favor of kubernetes/enhancements#116, where the feature state is tracked. Also, this is already implemented to a large extent (beta); woohoo! Thanks all for the great work 👍 Please reopen if you disagree with this assessment. |
Add Windows Containers Support at least at the node level.