
Move kubectl client logic into server #12143

Closed
jimmidyson opened this issue Aug 3, 2015 · 67 comments
Labels
area/api, area/ecosystem, area/kubectl, lifecycle/frozen, priority/important-soon, sig/api-machinery, sig/cli

Comments

@jimmidyson
Member

Doing things like kubectl replace requires a lot of logic in kubectl, such as polling to wait for an RC to scale its pods down before deleting the RC itself.

Putting this in kubectl makes it hard to replicate in clients written in other languages and still get consistent behaviour. Moving it to the API server would mean consistency across languages.

For background, we created the Java Kubernetes client in fabric8 (https://github.com/fabric8io/kubernetes-client) and I've also contributed to the Ruby client (https://github.com/abonas/kubeclient). Getting consistent behaviour currently means replicating undocumented functionality across clients, which seems a bit dirty.
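To make the amount of client-side logic concrete, here is a minimal sketch of the scale-down, poll, delete sequence that any non-kubectl client has to reimplement. It assumes current client-go (which postdates this thread), and `deleteRCWithCascade` is an illustrative name, not something kubectl exposes.

```go
// Sketch of the client-side logic behind a cascading replace/delete of a
// ReplicationController: scale to zero, poll until the pods are gone, then
// delete. The exact wait helper available depends on the client-go version.
package clientlogic

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func deleteRCWithCascade(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	rcs := cs.CoreV1().ReplicationControllers(ns)

	// Step 1: scale the RC down to zero.
	rc, err := rcs.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	zero := int32(0)
	rc.Spec.Replicas = &zero
	if _, err := rcs.Update(ctx, rc, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Step 2: poll until the controller reports no remaining replicas.
	err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			cur, err := rcs.Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return cur.Status.Replicas == 0, nil
		})
	if err != nil {
		return err
	}

	// Step 3: only now is it safe to delete the RC itself.
	return rcs.Delete(ctx, name, metav1.DeleteOptions{})
}
```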

@alex-mohr alex-mohr added the area/api and team/friction labels Aug 3, 2015
@nikhiljindal
Contributor

cc @bgrant0607

@nikhiljindal nikhiljindal added the sig/api-machinery label Aug 3, 2015
@jimmidyson
Member Author

Another prime example is rolling-update. The logic is all embedded in the client, which means providing similar functionality in another language is hard, especially if you want rolling updates to behave the same way everywhere so that clients stay interoperable.
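For the same reason, here is a stripped-down sketch of the resize loop that kubectl rolling-update drives from the client side. The real command also manages update annotations, readiness checks, and recovery; `scaleRC` and `rollingUpdate` are illustrative names only, and the code again assumes current client-go.

```go
// Minimal sketch of a client-driven rolling update between two RCs: grow the
// new RC and shrink the old RC one replica at a time.
package rollingupdate

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleRC sets spec.replicas on an RC and returns the updated object.
func scaleRC(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) (*corev1.ReplicationController, error) {
	rcs := cs.CoreV1().ReplicationControllers(ns)
	rc, err := rcs.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	rc.Spec.Replicas = &replicas
	return rcs.Update(ctx, rc, metav1.UpdateOptions{})
}

// rollingUpdate moves `total` replicas from oldName to newName one at a time.
// Every client that wants kubectl-compatible behaviour has to re-implement a
// loop like this, including the readiness polling elided here.
func rollingUpdate(ctx context.Context, cs kubernetes.Interface, ns, oldName, newName string, total int32) error {
	for i := int32(1); i <= total; i++ {
		if _, err := scaleRC(ctx, cs, ns, newName, i); err != nil {
			return err
		}
		// ... poll here until the new RC reports i ready replicas ...
		if _, err := scaleRC(ctx, cs, ns, oldName, total-i); err != nil {
			return err
		}
	}
	// Finally the old, now-empty RC would be deleted (elided).
	return nil
}
```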

@timothysc
Member

@jimmidyson If you were to put them into the apiserver, then it is no longer stateless.

Perhaps a better/cleaner breakdown of the API calls makes more sense here? Ideally I would expect API calls to trend toward being idempotent where possible.

@jimmidyson
Member Author

@timothysc Good point. Yeah, perhaps the best way is to ensure it is well documented; the problem is that things like the RC names used during a rolling update are implementation details that you probably don't want to expose. Or maybe it's a bad idea to try to have consistent behaviour across polyglot clients? It seems to make sense to me, though.

@bgrant0607
Member

I agree with the principle that non-trivial functionality should be moved server-side. There are already issues filed for these.

We're starting on Deployment and /scale. Configuration generation will likely be added as a service layered on top of the system. We currently have no specific plans for the wait API, but it might be done as needed for the configuration layer. I imagine all external orchestrators will need something similar. cc @jackgr

If there are other specific features you'd like to move server-side, please file new issues or send PRs.

cc @lavalamp
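For reference, the /scale subresource mentioned above is the kind of server-side primitive being asked for. Below is a minimal sketch, assuming current client-go, of driving it from a client; it is shown against Deployments, but any scalable resource works the same way.

```go
// Sketch of using the /scale subresource: the client only declares the
// desired replica count and the server-side controllers do the rest, so no
// client has to embed scaling or polling logic.
package scaling

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setReplicas updates only the scale subresource; callers that care about
// progress can watch the object's status separately.
func setReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}
```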

@bgrant0607 bgrant0607 added the kind/support label and removed the team/friction label Aug 3, 2015
@smarterclayton
Contributor

Keep in mind, though, that long-running operations in the apiserver are an antipattern and need to be decoupled from REST-style operations (wait technically gets a pass because it's a watch).

@jimmidyson
Member Author

@smarterclayton So are you saying this shouldn't be in the API server? Where should it live? In the deployment config, if that gets merged? Having this in the client is pretty horrible right now.

@smarterclayton
Contributor

It's not that I don't agree that more of this should be simple API operations; it's just that the design, when we move it to the API server, has to be carefully considered as well.

Today we lack an API endpoint that represents "kill this RC and stop its pods together". I don't disagree that things like that should exist, but when we add them we should be careful to keep our current behaviour of having dumb actions that controllers make eventually consistent.

@bgrant0607
Member

I agree with @smarterclayton. It's not about whether we should do this, but how.

@bgrant0607
Member

@jimmidyson Do you have specific features in mind?

@jimmidyson
Member Author

@bgrant0607 Rolling update is the big one really, but also replace.

@bgrant0607
Member

Replace or scale? Your issue description mentioned replace, but you described scale ("waiting for RC to scale pods down").

@bgrant0607
Member

And do you care about waiting? Because scaling doesn't actually require waiting.

@jimmidyson
Member Author

Replace with cascade scales down first, IIRC, before deleting.

I'm not too bothered about waiting, but I would like to be able to get the progress of the request.

@smarterclayton
Contributor

Scaling an RC down to zero gracefully sounds like graceful termination of an RC to me. However, I don't think graceful termination is flexible enough to do that today, because we just don't have the experience of using it.


@bgrant0607
Member

I'd really like to move to the exist-dependency model and rely on the garbage collector to clean up the pods.

Whether or not there's a way to wait for them to be cleaned up, which certainly needs graceful termination, is a separate issue.
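For context, the exist-dependency/garbage-collector model described here is broadly what later surfaced as owner references with cascading deletion. A minimal sketch, assuming current client-go, of a client delegating the cleanup to the server:

```go
// Sketch of a cascading delete that relies on the server-side garbage
// collector instead of client-side polling.
package cascade

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCCascading asks the garbage collector to clean up the pods: with
// Foreground propagation the RC is only removed once its dependents are
// gone, so the client needs no wait loop at all.
func deleteRCCascading(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```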

@lavalamp
Member

The most obvious (to me) way of implementing these potentially long-running operations is to have, e.g., a RollingUpdateOperation API type; you'd POST one of these to the apiserver, the spec would have the relevant parameters (old RC, new RC, etc.), and the controller that does the work would update the status so you could watch that to see progress.

That's a lot of work to implement an operation, though.
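To make that shape concrete, here is a hypothetical Go definition of such an operation object; the type and field names are invented for illustration and are not an actual Kubernetes API.

```go
// Hypothetical operation object: the client POSTs the spec, a controller
// performs the rollout and records progress in the status, and callers watch
// the object to follow it.
package operations

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type RollingUpdateOperation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   RollingUpdateOperationSpec   `json:"spec,omitempty"`
	Status RollingUpdateOperationStatus `json:"status,omitempty"`
}

// RollingUpdateOperationSpec carries the parameters the client would
// otherwise feed to kubectl rolling-update.
type RollingUpdateOperationSpec struct {
	OldRC               string `json:"oldRC"`
	NewRC               string `json:"newRC"`
	UpdatePeriodSeconds int32  `json:"updatePeriodSeconds,omitempty"`
}

// RollingUpdateOperationStatus is what the controller updates as it works.
type RollingUpdateOperationStatus struct {
	UpdatedReplicas int32  `json:"updatedReplicas"`
	Phase           string `json:"phase,omitempty"` // e.g. Pending, Progressing, Complete
}
```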

@smarterclayton
Contributor

For lightweight operations, as long as the status can be used as an atomic checkpoint by the controller AND your spec can describe the desired state (as with RCs and replicas), it's enough to use a controller. For rolling updates we've said that belongs in a deployment object.

For deletion of RCs we have graceful deletion, where we can potentially describe what a graceful request to delete an RC maps to. We could potentially add "cascade" as a delete option, which each API object could interpret in its own way. So if you delete with cascade, the object is not terminated until all the objects it owns are also terminated. For an RC you could implement that as "set the deletion timestamp and grace period, and also set replicas = 0". An RC in graceful deletion with replicas == 0 probably means "cascade"; an RC in graceful deletion with replicas != 0 probably means "abandon". I could imagine that not providing a grace period for an RC means "abandon", while providing one means "shut down gracefully and always scale to zero".
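A small hypothetical helper makes the proposed convention easier to see; nothing like this exists verbatim, and it only spells out the semantics described above.

```go
// Hypothetical controller-side interpretation of the "cascade via graceful
// deletion" convention sketched in the comment above.
package gracefuldelete

import corev1 "k8s.io/api/core/v1"

// interpretRCDeletion classifies what a deletion request means for an RC: a
// deleting RC that has been scaled to zero is being cascaded; one still
// holding replicas is abandoning the pods it owns.
func interpretRCDeletion(rc *corev1.ReplicationController) string {
	if rc.DeletionTimestamp == nil {
		return "not-deleting"
	}
	if rc.Spec.Replicas != nil && *rc.Spec.Replicas == 0 {
		return "cascade" // shut the pods down gracefully, then remove the RC
	}
	return "abandon" // remove the RC but leave its pods running
}
```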


@0xmichalis
Contributor

kubectl drain issue: #25625

@adohe-zz

@pwittrock @fabianofranz @bgrant0607 As we discussed at the last SIG CLI meeting, I'd like us to move forward a bit on this in 1.7. WDYT?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now, please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Dec 23, 2017
@jfoy
Contributor

jfoy commented Jan 19, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jan 19, 2018
@bgrant0607
Member

/lifecycle frozen

@soltysh
Contributor

soltysh commented Mar 31, 2021

Some bits (where that was possible) were moved to the server, and the rest are exposed through the published https://github.com/kubernetes/kubectl/ repository, where each command can easily be consumed by anyone.

/close
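As an illustration of what consuming those commands looks like, here is a minimal sketch that embeds kubectl's cobra command tree in another Go program. It assumes the current k8s.io/kubectl layout, where cmd.NewDefaultKubectlCommand is the same entry point kubectl's own main uses.

```go
// Sketch of embedding kubectl's command tree in another program.
package main

import (
	"os"

	"k8s.io/kubectl/pkg/cmd"
)

func main() {
	// Build the full kubectl command tree and run it; a caller could also
	// walk its children to embed just one piece, e.g. drain or rollout.
	kubectl := cmd.NewDefaultKubectlCommand()
	if err := kubectl.Execute(); err != nil {
		os.Exit(1)
	}
}
```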

@k8s-ci-robot
Contributor

@soltysh: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
