Move kubectl client logic into server #12143
cc @bgrant0607
Another prime example is
@jimmidyson If you were to put them into the apiserver, then it is no longer stateless. Perhaps a better / cleaner breakdown of the API calls makes a bit more sense here? Ideally I would expect API calls to trend towards being idempotent where possible.
@timothysc Good point. Yeah, perhaps the best way is to ensure it is well documented, the problem being that things like RC names used during rolling update are implementation details that you probably don't want to expose. Or maybe it's a bad idea to try to have consistent behaviour across polyglot clients? It seems to make sense to me, though.
I agree with the principle that non-trivial functionality should be moved server-side. There are already issues filed for these:
We're starting on Deployment and /scale. Configuration generation will likely be added as a service layered on top of the system. We currently have no specific plans for the wait API, but it might be done as needed for the configuration layer. I imagine all external orchestrators will need something similar. cc @jackgr. If there are other specific features you'd like to move server-side, please file new issues or send PRs. cc @lavalamp
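To make the server-side direction concrete, here is a minimal sketch of a client consuming the /scale subresource, written against the modern client-go API (which postdates this thread); the namespace, deployment name, and replica count are placeholders, and this is only an illustration of the idea, not a prescribed implementation.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (illustrative only).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Read the current scale through the /scale subresource, then update it.
	// The server owns the scaling semantics; the client only states the
	// desired replica count.
	scale, err := clientset.AppsV1().Deployments("default").
		GetScale(context.TODO(), "my-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 5
	_, err = clientset.AppsV1().Deployments("default").
		UpdateScale(context.TODO(), "my-deployment", scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("scaled via the /scale subresource")
}
```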
Keep in mind though that long-running server operations in the apiserver are an antipattern, and need to be decoupled from REST-style operations (wait technically gets a pass because it's a watch).
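As a rough sketch of the "wait is a watch" point, the fragment below waits for an RC to scale to zero using a single watch instead of a client-side polling loop. It uses the modern client-go API purely for illustration; the package name, function name, and field selector are assumptions.

```go
package rcwait

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForScaleDown blocks until the named RC reports zero replicas.
// It relies on a watch rather than repeated GETs, so the apiserver is not
// asked to hold any long-running, stateful operation.
func waitForScaleDown(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().ReplicationControllers(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		rc, ok := event.Object.(*corev1.ReplicationController)
		if !ok {
			continue
		}
		if rc.Status.Replicas == 0 {
			return nil
		}
	}
	return fmt.Errorf("watch closed before %s/%s scaled down", ns, name)
}
```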
@smarterclayton So are you saying this shouldn't be in the api server? Where should it live? Deployment config if merged? Being in the client is pretty horrible right now.
Not that I don't agree that more of this should be simple API operations - just that the design when we move it to the API server has to be carefully considered as well. We lack an API endpoint that represents "kill this RC and stop its pods together" today. I don't disagree that things like that should exist - but when we do them, we should be careful to keep our current behavior of having dumb actions that controllers make eventually consistent.
I agree with @smarterclayton. It's not about whether we should do this, but how.
@jimmidyson Do you have specific features in mind?
@bgrant0607 Rolling update is the big one really, but also replace.
Replace or scale? Your PR description mentioned replace, but you described scale ("waiting for RC to scale pods down").
And do you care about waiting? Because scaling doesn't actually require waiting.
Replace with cascade scales down first before deleting, IIRC. I'm not too bothered about waiting, but I would like to be able to get the progress of the request.
Scaling an RC down to zero gracefully sounds like graceful termination of the RC.
I'd really like to move to the exist-dependency model and rely on the garbage collector to clean up the pods. Whether or not there's a way to wait for them to be cleaned up, which certainly needs graceful termination, is a separate issue.
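For readers unfamiliar with the exist-dependency idea, the sketch below shows a pod carrying an ownerReference pointing at its RC, so the garbage collector deletes the pod once the RC is gone and clients no longer have to do it themselves. This uses the ownerReferences mechanism as it later landed in the API; the function, pod name, and image are illustrative only.

```go
package ownerrefs

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podOwnedBy builds a pod whose lifetime depends on the given RC.
// When the RC is deleted, the garbage collector removes the pod.
func podOwnedBy(rc *corev1.ReplicationController) *corev1.Pod {
	controller := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      rc.Name + "-pod",
			Namespace: rc.Namespace,
			// The owner reference is the "exist dependency": the pod exists
			// only while its owning RC exists.
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "v1",
				Kind:       "ReplicationController",
				Name:       rc.Name,
				UID:        rc.UID,
				Controller: &controller,
			}},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
}
```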
The most obvious (to me) way of implementing these potentially long-running operations is to have an (e.g.) RollingUpdateOperation API type; you'd POST one of these to the apiserver, the spec would have the relevant parameters (old RC, new RC, etc.), and the controller that does the work would update the status, so you could watch that to see progress. That's a lot of work to implement an operation, though.
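A rough sketch of what such a type might look like; the RollingUpdateOperation name, group, and fields are hypothetical and taken from the comment above, not an actual Kubernetes API.

```go
// Hypothetical API type sketched from the comment above; none of these
// names exist in Kubernetes, they only illustrate the shape of the idea.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// RollingUpdateOperationSpec carries the parameters a client would POST.
type RollingUpdateOperationSpec struct {
	// OldRC and NewRC name the replication controllers to roll between.
	OldRC string `json:"oldRC"`
	NewRC string `json:"newRC"`
	// UpdatePeriodSeconds is how long to wait between scaling steps.
	UpdatePeriodSeconds int32 `json:"updatePeriodSeconds,omitempty"`
}

// RollingUpdateOperationStatus is updated by a server-side controller;
// clients watch it to observe progress instead of driving the update.
type RollingUpdateOperationStatus struct {
	OldReplicas int32  `json:"oldReplicas"`
	NewReplicas int32  `json:"newReplicas"`
	Phase       string `json:"phase,omitempty"` // e.g. Pending, Rolling, Complete
}

// RollingUpdateOperation is the object a client would POST to the apiserver.
type RollingUpdateOperation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   RollingUpdateOperationSpec   `json:"spec"`
	Status RollingUpdateOperationStatus `json:"status,omitempty"`
}
```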
For lightweight operations, as long as the status can be used as an atomic … For deletion of RCs we have graceful deletion, where we can potentially …
kubectl drain issue: #25625
@pwittrock @fabianofranz @bgrant0607 As we discussed at the last SIG-CLI meeting, I would hope we can move forward a little on this in 1.7. WDYT?
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/lifecycle frozen
Some bits (where that was possible) were moved to the server, and the others are exposed through the published https://github.com/kubernetes/kubectl/ repository, where each command can be easily consumed by anyone.
/close
@soltysh: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Doing things like `kubectl replace` requires a lot of logic in `kubectl`, such as waiting for the RC to scale pods down by polling before deleting the RC itself. Putting this in `kubectl` makes it hard to replicate in clients written in other languages to get consistent behaviour. Moving this to the API server would mean consistency across languages.

For background, we created the Java Kubernetes client in fabric8 (https://github.com/fabric8io/kubernetes-client) and I've also contributed to the Ruby client at https://github.com/abonas/kubeclient. Having consistent behaviour means replicating undocumented functionality across clients, which seems a bit dirty.
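For concreteness, the kind of client-side orchestration described above looks roughly like the sketch below (scale the RC to zero, poll until it reports no replicas, then delete it), written against the modern client-go API, which did not exist when this issue was filed. The namespace, RC name, and timeouts are placeholders; this illustrates the pattern every polyglot client has to reimplement, not kubectl's actual code.

```go
package replacedemo

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// replaceRC sketches the client-side steps: scale the old RC to zero,
// poll until it has no replicas, and only then delete the RC itself.
func replaceRC(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	rcs := cs.CoreV1().ReplicationControllers(ns)

	// Scale the RC down to zero replicas.
	rc, err := rcs.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	zero := int32(0)
	rc.Spec.Replicas = &zero
	if _, err := rcs.Update(ctx, rc, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Poll until the RC reports zero replicas; this is the loop that each
	// client in each language currently has to reimplement.
	err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			cur, err := rcs.Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return cur.Status.Replicas == 0, nil
		})
	if err != nil {
		return err
	}

	// Only now is it safe to delete the RC itself.
	return rcs.Delete(ctx, name, metav1.DeleteOptions{})
}
```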