
Communication between Kubernetes components #6363

Closed
bgrant0607 opened this issue Apr 2, 2015 · 71 comments
Assignees
Labels
area/api Indicates an issue on api area. area/kubelet-api kind/design Categorizes issue or PR as related to design. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@bgrant0607
Copy link
Member

Conversation started by @davidopp due to complexities arising from node-to-master communication (e.g., #6193, #6285, #6077, #6063, #6052, #5953).

Overall communication issues:

  • performance, latency, scalability
  • DoS
  • RAW and WAW consistency
  • clock skew / time synchronization
  • true and false conflicts
  • auth
  • component discovery

I'll wait for @davidopp to fill in his proposal.

cc @erictune @lavalamp @thockin @dchen1107 @smarterclayton @gmarek

@bgrant0607 bgrant0607 added kind/design Categorizes issue or PR as related to design. area/api Indicates an issue on api area. area/kubelet-api priority/design sig/node Categorizes an issue or PR as relevant to SIG Node. labels Apr 2, 2015
@gmarek
Copy link
Contributor

gmarek commented Apr 2, 2015

+1 to anything that will make the reasoning simple. The current state, where multiple components can write exactly the same thing, makes reasoning about system semantics very hard (to put it lightly).

@smarterclayton
Copy link
Contributor

cc @liggitt


@bgrant0607
Copy link
Member Author

And types of communication:

To node:

  • Node/Kubelet configuration
  • Pods to run and their desired states
  • Services and endpoints
  • Secrets
  • Persistent volumes

From node:

  • System info (software versions, etc.)
  • Pod, container, and volume statuses
  • Node heartbeats (Kubelet alive)
  • Node health status (docker working, disks ok, etc.)
  • Resource usage (#4057)

@davidopp
Copy link
Member

davidopp commented Apr 2, 2015

Thanks for filing the issue. Give me two seconds before you start discussing it, so that I can actually write it up. :)

@zmerlynn
Copy link
Member

zmerlynn commented Apr 2, 2015

cc @zmerlynn, and I suspect @roberthbailey for SSLness

@smarterclayton
Copy link
Contributor

  • Node bootstrap (distributing client certs to the nodes)


@bgrant0607
Copy link
Member Author

  • Cluster (master and node) upgrade

@bgrant0607
Copy link
Member Author

  • "Free range" nodes (no master), such as ContainerVM

@davidopp
Copy link
Member

davidopp commented Apr 2, 2015

[Note: I am likely to update this text as I think of more things, so please read this issue through the web, not email.]

I think we need to talk about how we structure communication between components in Kubernetes. I'm not necessarily proposing a change pre-1.0; this is more of a longer-term idea.

Currently, all communication between components is through objects that are (1) persistent, and (2) considered to be part of the external API, which is stable/slow-evolving, versioned with long deprecation periods, and subject to multiple levels of review, debate, and approval for changes. I believe this one-size-fits-all model is bad for performance/scalability and bad for the project's velocity.

I'm proposing an additional, lighter-weight inter-component communication mechanism, which has neither property (1) nor (2), to be used in situations where the existing model is deemed inappropriate. (Exactly what those circumstances are is TBD, but it would only be used for some subset of the internal communication between Kubernetes components.) This model is a very small tweak to the existing model. The only difference is that the objects it uses would not be persisted. We would exploit this fact to allow APIs that use these objects to evolve more quickly. But everything would still go through PUT/POST/GET to the API server.

Let me give you a concrete example of how it would work. Let's say we move NodeStatus to this model. Let's also assume the API server is sharded, to alleviate concerns that this approach doesn't work with sharding. A node would POST/PUT a NodeStatus object to its API server shard. The API server (or NodeController) would do whatever business logic processing it wants, and if there is some state that clients need to know, and/or all of the API server (or NodeController) replicas need to know, and/or that is needed for crash recovery, it would write a new object with just that information to etcd. For example, we might have a UserVisibleNodeStatus object that is the one external clients read; it contains a processed subset of the information from NodeStatus (and perhaps from other objects as well) and can be updated less frequently.
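
A minimal sketch of the flow described above, assuming hypothetical type and handler names (this is not actual apiserver code): the frequent NodeStatus reports stay in memory, and only a reduced, user-visible summary is handed off for persistence when something external clients care about actually changes.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
	"time"
)

// NodeStatus is the full, ephemeral report a node would POST frequently.
// It is never written to etcd in this sketch.
type NodeStatus struct {
	NodeName  string    `json:"nodeName"`
	Heartbeat time.Time `json:"heartbeat"`
	DockerOK  bool      `json:"dockerOK"`
	DiskOK    bool      `json:"diskOK"`
}

// UserVisibleNodeStatus is the reduced object external clients would read;
// only this would be persisted, and far less often.
type UserVisibleNodeStatus struct {
	NodeName string `json:"nodeName"`
	Ready    bool   `json:"ready"`
}

// statusCache holds the latest NodeStatus per node in memory only.
type statusCache struct {
	mu     sync.Mutex
	latest map[string]NodeStatus
}

// Observe records a fresh NodeStatus and returns the reduced summary only
// when the user-visible condition changed, i.e. only those updates would be
// written to durable storage.
func (c *statusCache) Observe(s NodeStatus) (UserVisibleNodeStatus, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	prev, seen := c.latest[s.NodeName]
	c.latest[s.NodeName] = s

	ready := s.DockerOK && s.DiskOK
	prevReady := seen && prev.DockerOK && prev.DiskOK
	if seen && ready == prevReady {
		return UserVisibleNodeStatus{}, false // frequent heartbeat, nothing to persist
	}
	return UserVisibleNodeStatus{NodeName: s.NodeName, Ready: ready}, true
}

func main() {
	cache := &statusCache{latest: map[string]NodeStatus{}}
	for i := 0; i < 3; i++ {
		report := NodeStatus{NodeName: "node-1", Heartbeat: time.Now(), DockerOK: true, DiskOK: true}
		if summary, changed := cache.Observe(report); changed {
			out, _ := json.Marshal(summary)
			fmt.Println("persist:", string(out)) // only the first report triggers a write
		}
	}
}
```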

(Please don't dwell too much on my use of NodeStatus as the example. I think this approach applies to inter-component communication in Kubernetes more generally.)

Here are the advantages I see:

  • performance/scalability: The current model, where components can only communicate with one another through disk, is an obvious performance bottleneck. Offloading some of this communication to network+memory-only paths is an obvious performance/scalability win.
  • state partitioning: The current model conflates objects that are needed for inter-component communication with objects that are needed for crash recovery. Only the second type need to be persisted, but we're forced to persist everything.
  • API layering: The current model doesn't distinguish internal APIs from external APIs. I know some people think there shouldn't actually be a distinction, but I disagree. If we can declare some APIs/API objects to be internal-only and never persisted, then we do not have to be as rigorous about versioning, long deprecation periods, evolving slowly/carefully, etc. for these APIs (of course there are caveats here about atomicity of component upgrade for loosening the versioning requirements). The messages used to communicate between components could be completely invisible to the user, with the API server responsible for generating the objects that are part of the user-visible API. Components could have input and output objects reminiscent of RPCs without having to worry about how they affect the user-facing API

The only gotcha I can see is that we'd need to implement some kind of replacement for etcd watch, to allow components to watch these ephemeral objects. I think we wouldn't need something like ResourceVersion if we assume each of these ephemeral objects only has one writer.
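
For the watch gotcha, a rough sketch of what an in-memory replacement could look like, assuming a single writer per key so no ResourceVersion-style token is needed (the store and its API are invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// ephemeralStore holds non-persisted objects and notifies watchers on every
// write. With a single writer per key there are no conflicts to resolve, so
// no ResourceVersion-style concurrency token is needed.
type ephemeralStore struct {
	mu       sync.Mutex
	objects  map[string][]byte
	watchers []chan string // each watcher receives the key that changed
}

func (s *ephemeralStore) Put(key string, obj []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.objects[key] = obj
	for _, w := range s.watchers {
		select {
		case w <- key: // best-effort notification
		default: // drop if the watcher is slow; it can re-read the latest value
		}
	}
}

func (s *ephemeralStore) Watch() <-chan string {
	s.mu.Lock()
	defer s.mu.Unlock()
	ch := make(chan string, 16)
	s.watchers = append(s.watchers, ch)
	return ch
}

func main() {
	store := &ephemeralStore{objects: map[string][]byte{}}
	ch := store.Watch()
	store.Put("nodes/node-1/status", []byte(`{"ready":true}`))
	fmt.Println("changed:", <-ch)
}
```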

@smarterclayton
Copy link
Contributor

To be honest, no matter how you specify the APIs, people will use them if they need them to solve the same use case, and then be upset when they break. So we should be careful to identify when that happens and address the need.

I don't see an issue with evolving internal APIs at a different pace and with different guarantees. Virtual resources make a lot of sense when we want to specialize the use case.


@lavalamp
Copy link
Member

lavalamp commented Apr 2, 2015

The only difference is that the objects it uses would not be persisted.

I am not sure I see the point if that is the only difference. All the heavyweight process is also for maintaining API compatibility, which you still need-- clusters have to upgrade.

@smarterclayton
Copy link
Contributor

When someone says "heavyweight process" w.r.t. API design, I mentally substitute "discipline" :)


@lavalamp
Copy link
Member

lavalamp commented Apr 2, 2015

API server is sharded

It might be replicated, but that's not sharding. I would propose that the storage layer (etcd today) is the place where we do sharding, should that be necessary. Otherwise every client needs to be aware of the sharding details, or we have to put in another layer.

@lavalamp
Copy link
Member

lavalamp commented Apr 2, 2015

performance/scalability: The current model, where components can only communicate with one another through disk, is an obvious performance bottleneck. ...

This is an argument for changing the characteristics of our storage backend, IMO.

state partitioning: The current model conflates objects that are needed for inter-component communication with objects that are needed for crash recovery. Only the second type need to be persisted, but we're forced to persist everything.

Not true. See our binding object, which is not (directly) persisted.

API layering: The current model doesn't distinguish internal APIs from external APIs. ... If we can declare some APIs/API objects to be internal-only and never persisted, then we do not have to be as rigorous about versioning, long deprecation periods, evolving slowly/carefully, etc. for these APIs ...

I guess I just don't see how that would actually cash out in a simplified compatibility matrix.

@dchen1107
Copy link
Member

[Note: copied from internal email with some modification]

@davidopp I agreed with you that requiring every message to be persisted is an issue for our performance and scalability (#5953). On the kubelet side, to cope with the rule and avoid too many writes, kubelet only posts PodStatus on change (which is good anyway). But we use NodeStatus as a ping message today to indicate whether a node is alive, which is quite expensive since it requires posting at a high frequency. It was hard for me to accept persisting a ping message to the store.

Related to today's issue caused by NodeStatus and NodeController, there are potentially several ways to solve it if we could loosen requirement (1), that all communication between components must go through persisted objects:

  1. Rate limit on checkpointing NodeStatus (Measure and optimize push-based heartbeats from Kubelet to NodeController #5953): this raises Daniel's concern about apiserver replication/sharding.
  2. Kubelet posts NodeStatus to NodeController as a ping message; NodeController processes those messages and posts a comprehensive one to the APIserver.
  3. Similar to 2), but introducing a new ping message: Kubelet sends a ping message to NodeController regularly, and only posts NodeStatus on change (see the sketch below).
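
A rough sketch of option 3 from the Kubelet's side, using made-up Ping and NodeStatus types (not existing code): the cheap ping goes out on every tick, and the full NodeStatus goes out only when it differs from what was last reported.

```go
package main

import (
	"fmt"
	"reflect"
	"time"
)

// Ping is the cheap liveness message sent on every interval.
type Ping struct {
	NodeName string
	SentAt   time.Time
}

// NodeStatus stands in for the full status object, posted only on change.
type NodeStatus struct {
	NodeName   string
	Conditions map[string]bool
}

func main() {
	lastSent := NodeStatus{}
	ticker := time.NewTicker(100 * time.Millisecond) // would be ~10s in practice
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		// Always send the lightweight ping.
		fmt.Printf("send ping: %+v\n", Ping{NodeName: "node-1", SentAt: time.Now()})

		// Send the full status only if it differs from what was last reported.
		current := NodeStatus{NodeName: "node-1", Conditions: map[string]bool{"Ready": true}}
		if !reflect.DeepEqual(current, lastSent) {
			fmt.Printf("send full NodeStatus: %+v\n", current)
			lastSent = current
		}
	}
}
```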

@roberthbailey
Copy link
Contributor

  3. Similar to 2), but introducing a new ping message. Kubelet sends ping message to NodeController regularly, and only posts NodeStatus on change.

If we don't want the master to initiate contact with the Kubelet and we move to this model, I'd suggest adding a control channel where the master can respond to a ping message with a request for the full NodeStatus. This would allow the master to get current state more proactively without needing to change anything in the Kubelet.
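
One possible shape for that control channel, purely as a sketch with invented types: the master's reply to a ping carries a flag asking the Kubelet to include its full NodeStatus in the next report.

```go
package main

import "fmt"

// PingResponse is what the master would return for each ping; WantFullStatus
// lets it pull current state without initiating a connection to the Kubelet.
type PingResponse struct {
	WantFullStatus bool
}

// shouldSendFullStatus decides whether the Kubelet's next message should be a
// full NodeStatus rather than just another ping.
func shouldSendFullStatus(resp PingResponse, statusChanged bool) bool {
	return resp.WantFullStatus || statusChanged
}

func main() {
	// Master asked for a refresh even though nothing changed locally.
	fmt.Println(shouldSendFullStatus(PingResponse{WantFullStatus: true}, false)) // true
	// Nothing changed and the master did not ask: keep sending cheap pings.
	fmt.Println(shouldSendFullStatus(PingResponse{}, false)) // false
}
```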

@davidopp
Copy link
Member

davidopp commented Apr 3, 2015

I'd prefer that we discuss details of our current NodeStatus problems elsewhere. This proposal is intended to be generic.

I don't think "changing the characteristics of our storage backend" is going to solve the problem that we force every message between Kubernetes components to hit disk. Sure, we can run our key-value store on a huge distributed SSD-based cluster to improve performance but why? Some data really is ephemeral and is just a message between two components, not meant for public scrutiny or long-term storage. Why build a crazy-complicated storage system to store things that have no reason to be stored in the first place, instead of just identifying the things that don't need to be stored, and not storing them? And why require a full API sign-off for changing the way two components communicate with each other, if it's not meant for public consumption?

I do think API evolution is related. If you have something that is just an ephemeral message between two components and you can guarantee that the components will be updated atomically, then you don't have to worry about API evolution because you can just restart them together. Even if you can't guarantee atomic update, the fact that they don't use persistent objects for communication means you at least don't have to worry about rewriting on-disk data formats for the objects they use for communicating. And because they're internal components, even in the non-atomic case you only have to support one forward/backward version compatibility--that is, knowing that two components will be upgraded "approximately together" makes things simpler than having to support multiple versions on each end.

I think this proposal kills two birds with one stone: it addresses performance problems we know we're going to have, and by distinguishing internal APIs from external APIs it gives us more flexibility in evolving the former (I admit that the benefit is perhaps even more psychological than it is technical).

@davidopp
Copy link
Member

davidopp commented Apr 3, 2015

BTW see #3247 for an example of where someone was going to use events as an inter-component communication mechanism. This issue is going to come up many times and I think we should try to support it in a reasonable way rather than forcing everything to be a full persisted user-facing API object.

@davidopp
Copy link
Member

davidopp commented Apr 3, 2015

knowing that two components will be upgraded "approximately together" makes things simpler than
having to support multiple versions on each end.

For example, you could say that version N of component A will only talk to version N of component B (i.e. no inter-version compatibility). Even if they don't run on the same machine, as long as you upgrade them approximately together, you can have whichever one comes up first just block until it's talking to a compatible peer. So the total downtime is limited to the time skew between upgrading the components on the two machines. This doesn't work when you have to do rolling upgrade (e.g. you can't have NodeController just block until all Kubelets have been upgraded to a new version) but it does work in the case of master components where there is only a single logical instance of each component.
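
As an illustration of that "block until a compatible peer appears" behavior, here is a sketch with a stubbed-out version probe (not a real Kubernetes mechanism):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const myVersion = "v1.4.0" // hypothetical version string

// peerVersion would really be an RPC/HTTP call to the peer component; here it
// is stubbed so the peer only comes up on the new version after a few probes.
func peerVersion(probe int) (string, error) {
	if probe < 2 {
		return "", errors.New("peer not reachable (still being upgraded)")
	}
	return "v1.4.0", nil
}

// waitForCompatiblePeer blocks until the peer reports exactly our version,
// so total downtime is bounded by the skew between the two upgrades.
func waitForCompatiblePeer() error {
	for probe := 0; probe < 10; probe++ {
		v, err := peerVersion(probe)
		if err == nil && v == myVersion {
			return nil
		}
		time.Sleep(100 * time.Millisecond) // retry until the peer catches up
	}
	return errors.New("gave up waiting for a compatible peer")
}

func main() {
	if err := waitForCompatiblePeer(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("peer is on", myVersion, "- start talking")
}
```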

@davidopp
Copy link
Member

davidopp commented Apr 3, 2015

@thockin @erictune

@bgrant0607
Copy link
Member Author

Replying piecemeal: I agree we will want some API resources that are stored in separate etcd instances, or aren't necessarily stored in etcd, or potentially not persisted to disk at all. I've insisted that resource usage collected from nodes be kept out of core API objects for that exact reason:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/resources.md#usage-data

@bgrant0607
Copy link
Member Author

I am one of the people who thinks we shouldn't hide "internal" APIs:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/DESIGN.md#overview
(last paragraph of section)

But I do think we should group APIs into multiple API prefixes that can be independently versioned: #3806, #635

@bgrant0607
Copy link
Member Author

In Borg, we started with a lot of state not stored persistently and then persisted more and more of it over time, as an inter-component communication medium, as an archival medium, and for stability across master restarts and elections.

In Omega, I enforced a single-writer rule for each object. This meant no true write conflicts, but had other consequences, such as more objects. Joins were never fully realized, which is one reason why Kubernetes started with unified objects containing both spec and status.

@bgrant0607
Copy link
Member Author

If we had API plugins #991, we could potentially create API endpoints that more directly dispatched to controllers.

@smarterclayton
Copy link
Contributor

Nowhere. The principle is sound though. We can also expose bulk endpoints if need be when the time comes.

On Apr 15, 2015, at 9:45 PM, Derek Carr notifications@github.com wrote:

Need to review that I guess. Where are we posting LIST resources today so I can reference?

On Apr 15, 2015, at 9:40 PM, Clayton Coleman notifications@github.com wrote:

We have that with causes :). Already thought about it.

On Apr 15, 2015, at 8:53 PM, Derek Carr notifications@github.com wrote:

Assuming it made no transactional commitment, sure, but I think if we start supporting a POST of a list, we need a sensible way to give errors if some items in that list fail.

On Apr 15, 2015, at 8:51 PM, Clayton Coleman notifications@github.com wrote:

Why is that a problem?

POST /pods {"kind": "PodList"}

Done.

On Apr 15, 2015, at 8:34 PM, David Oppenheimer notifications@github.com wrote:

Another example of the kind of thing that is difficult/impossible to do because of the requirement that inter-component communication must be via API objects: batching object mutations from clients (scheduler, kubelet, controller manager, etc.) to the API server into a single message. @yujuhong pointed out that today if 30 pods start at the same time on the same kubelet, we do 30 Pod writes. There is presumably some efficiency to be gained by sending a single message to the API server containing all 30 writes. The API server could then turn this into 30 object updates locally.
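
To make the per-item error question above concrete, a bulk create endpoint could return one result entry per item rather than a single all-or-nothing error. A non-transactional sketch with invented types (not the actual API machinery):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Pod and PodList stand in for the posted batch; ItemResult reports success
// or failure per item, much like per-item "causes" on a status object.
type Pod struct {
	Name string `json:"name"`
}

type PodList struct {
	Kind  string `json:"kind"`
	Items []Pod  `json:"items"`
}

type ItemResult struct {
	Name  string `json:"name"`
	Error string `json:"error,omitempty"`
}

// createAll applies each item independently (no transactional commitment) and
// records which ones failed.
func createAll(list PodList, create func(Pod) error) []ItemResult {
	results := make([]ItemResult, 0, len(list.Items))
	for _, p := range list.Items {
		r := ItemResult{Name: p.Name}
		if err := create(p); err != nil {
			r.Error = err.Error()
		}
		results = append(results, r)
	}
	return results
}

func main() {
	list := PodList{Kind: "PodList", Items: []Pod{{Name: "a"}, {Name: "b"}}}
	results := createAll(list, func(p Pod) error {
		if p.Name == "b" {
			return fmt.Errorf("quota exceeded") // simulate one failing item
		}
		return nil
	})
	out, _ := json.Marshal(results)
	fmt.Println(string(out))
}
```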

@bgrant0607
Copy link
Member Author

I don't object to bulk endpoints, but what efficiency do we hope to gain by it? We definitely need to profile/trace before we do that. I don't believe that it will help, and it could even hurt by serializing operations that could otherwise be performed in parallel. In order to be of benefit, batching would need to reduce the amount of data sent over the wire and/or reduce decoding and other work in the apiserver.

If etcd supported batching, that could help amortize transaction overheads, but we could send mutations in batches without changing our API.

@smarterclayton
Copy link
Contributor

One benefit would be using speedy on larger chunks - probably see much higher compression (keys and common values) for bulk updates than singletons.


@thockin thockin added kind/design Categorizes issue or PR as related to design. and removed kind/design Categorizes issue or PR as related to design. priority/design labels May 19, 2015
@ghost ghost added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed team/master labels Aug 20, 2015
@gmarek
Copy link
Contributor

gmarek commented Aug 31, 2015

What do we want to do with this? It seems that some of the things brought up are already implemented, but not all issues mentioned in the first entry are solved.

@bgrant0607 @davidopp @smarterclayton

@davidopp
Copy link
Member

davidopp commented Sep 1, 2015

Let's keep this issue open for future discussion.

@wojtek-t
Copy link
Member

wojtek-t commented Sep 1, 2015

Let's keep this issue open for future discussion.

+1

@bgrant0607
Copy link
Member Author

Update:

Unresolved issues remain (in no particular order):

  • Non-persistent replicated (HA-friendly) resources, such as for status, Events, Pod/Node Metrics, and resource predictions: storage is now separate from registry. We don't have experience with a non-etcd replication mechanism. We could try an in-memory etcd cluster, or could try moving to Redis or another in-memory caching system.
  • Reusable API client and server frameworks, API federation, state sharding, and proxy offload: We don't want to eventually link all our resources into a single binary, and don't want a single data path; that doesn't scale and hampers evolution. In addition to making apiserver and the basic client generic and reusable (or switching to a new API platform), we need a federation strategy (e.g., using an API gateway layer). We need to retain the ability for resources to cross-reference each other and for kubectl and other clients to interact with all configuration and status resources in a generic manner, however. Among other use cases, we'd like to use the API framework in the Kubelet (Clean up Kubelet RESTful APIs #2098) and for resource predictions (Initial Resources proposal #12472).
  • Internal APIs: Even with a reusable API framework, people would like it to be easier to develop APIs, by relaxing requirements on API versioning, backward compatibility, documentation, conventions, etc. Out of necessity, we do use a number of different API mechanisms to communicate with "external" dependencies, such as Heapster, InfluxDB, etcd, Docker, and cAdvisor. Except for cAdvisor and Heapster, the others are fairly isolated (completely wrapped by a single system component) and we're developing more comprehensive abstractions to hide them completely over time, such that they are pluggable at the code level (e.g., Cassandra rather than etcd or Rocket instead of Docker). However, the more different mechanisms and conventions we use, the more times we need to re-solve the same problems: authentication and authorization, backward compatibility, client libraries, server infrastructure, defaulting and validation, API discovery, HA, performance, etc. Additionally, in an open-source, extensible toolkit, the notion of "private" APIs is questionable. Someone else is going to use them. And, we also don't have direct control over the cluster upgrade process and frequency. There may be APIs that don't have quite the same requirements, such as use as a declarative configuration schema -- the "simple REST" convention should work for those cases. We could explore alternatives, but shouldn't use too many different mechanisms and approaches.
  • Extension hooks: We have requests for HTTP-based extensions, such as for admission control, the cloudprovider (Create HTTP cloudprovider #10503), and the scheduler (Scheduler extension proposal #11470), which are a special case of internal APIs.

@bgrant0607
Copy link
Member Author

It's also the case that unstable, non-backward-compatible APIs are hostile to extensibility. That's tantamount to declaring that nobody else in the entire Kubernetes community should be able to extend or replace the client or server without replacing/forking both. That's contradictory to our goals and design principles. To me, that suggests the communication point and/or API abstraction were not chosen correctly.

@davidopp
Copy link
Member

davidopp commented Sep 2, 2015

That's tantamount to declaring that nobody else in the entire Kubernetes community should be able to extend or replace the client or server without replacing/forking both.

To be clear, what I'm proposing is that we be able to create components that offer only an "internal-only" API. A component that exports an "internal-only" API would be part of the system that people aren't expected to extend or replace independently. (Of course, they could submit PRs that we would upstream into these components.) We can put restrictions on these components like: these API objects are never persisted, and the component must only communicate with a client if the client is of an expected version.

I agree that you absolutely would have to upgrade the component and its client component(s) together; this is what relieves you from having to worry about forward/backward compatibility. The upgrade doesn't have to be atomic, but the component would refuse to talk to a client until the client is upgraded.

You could argue that wanting to be able to quickly evolve an API in incompatible ways is a sign that "the communication point and/or API abstraction were not chosen correctly." But it's often hard to choose these kinds of things correctly up-front; it's useful to have a balance between "prototype and iterate" and making sure you get it right the first time.

I think this is basically a philosophical argument about "in an open-source, extensible toolkit, the notion of "private" APIs is questionable." Clearly there are private interfaces within components, for example the interface between controller manager and controllers, or the way you write a plugin (for our various types of plugins). I'm just suggesting that we also be able to have private interfaces between components, in some limited situations.

@davidopp
Copy link
Member

davidopp commented Sep 2, 2015

@bgrant0607 and I are going to discuss this further offline.

@davidopp
Copy link
Member

davidopp commented Sep 2, 2015

But to clarify one thing I said earlier -- the mechanism I'm describing isn't really for "prototype and iterate." Having alpha API versions and the experimental API prefix already gives you that capability. It's about components that you want to have a private/internal API "forever."

@bgrant0607
Copy link
Member Author

Every time we create such interdependencies, we also make Kubernetes harder to deploy and upgrade. Many people don't use our /cluster implementation.

We also discussed offline that we could do what we've suggested with the scheduler and its proposed extension API: That when we broke compatibility, WE would fork both sides of the API, so that we wouldn't break users that were dependent on one side or the other.

@bgrant0607
Copy link
Member Author

I'll also point out that while our external dependencies (Heapster, InfluxDB, etcd, Docker, cAdvisor) have different API conventions, they maintain backward compatibility.

@smarterclayton
Copy link
Contributor

Closing due to age.
