Multiple schedule policies for different pods. #9920
Comments
Two ideas were suggested in that thread:
We haven't tried either of these, but you are definitely welcome to try them out and let us know if you have problems.
OK, I am going to try them and will let you know if there is any progress or problem.
I have a similar issue. My first thought was multiple schedulers, then adding a field to the podspec/rcspec to allow the pod to specify which scheduler to use. Fall back to the default (with a warning message) if the desired scheduler isn't available. In my case, it may be viable to just allow multiple default scheduler instances, each with its own configuration file and name (label).
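For context, the per-pod scheduler selection described in this comment eventually landed in Kubernetes as the `spec.schedulerName` field. A minimal sketch (the pod and scheduler names here are hypothetical):

```yaml
# Sketch only: assumes a custom scheduler named "my-custom-scheduler"
# has been deployed alongside the default kube-scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
spec:
  schedulerName: my-custom-scheduler  # omit to use the default scheduler
  containers:
  - name: app
    image: nginx
```

Note that in the shipped design there is no automatic fallback: if the named scheduler does not exist, the pod simply stays Pending rather than reverting to the default, unlike the fallback-with-warning behavior proposed above.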
Please see #367, but I'm all for multiple schedulers too ;-)
After some rough consideration, I would prefer to go for a single scheduler with multiple policies for specific pods. I am working on the code and hope to show something soon :)
In production, we need multiple scheduling policies to meet the requirements of different pods. For instance, some pods may prefer a scheduling policy in which ServiceSpreading is considered important, while other pods just want to be deployed on the nodes with the most free resources, regardless of ServiceSpreading. To meet these conflicting requirements, different scheduling policies should be allowed when kube-scheduler is up.
How should we do this?
An initial discussion at https://groups.google.com/forum/#!topic/google-containers/wyn8dNXq6xI.
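For reference, the kind of tuning described here later became possible through the kube-scheduler policy configuration file (since deprecated in favor of scheduler profiles). A hedged sketch assuming the legacy Policy API; the priority function names shown are from early releases and may differ by version:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"}
  ],
  "priorities": [
    {"name": "ServiceSpreadingPriority", "weight": 2},
    {"name": "LeastRequestedPriority", "weight": 1}
  ]
}
```

A second scheduler instance could be started with a different policy file (e.g. weighting LeastRequestedPriority heavily and dropping ServiceSpreadingPriority), giving each class of pods the behavior it wants.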