added spreading by shared labels #2312
Conversation
Force-pushed from 1f04f78 to 419f375.
}

for _, pod := range pods {
    podSet[pod.ObjectMeta.Name] = pod
This is broken when two pods have the same name in different namespaces.
Please key on pod.ObjectMeta.Namespace+pod.ObjectMeta.Name, and leave a //TODO to look at replacing it with pod.ObjectMeta.UID when it's populated, so we don't lose this.
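A minimal, self-contained sketch of the suggested keying scheme; the podKey helper, the "/" separator, and the stand-in types are illustrative, not taken from this PR:

package main

import "fmt"

// Stand-ins for api.ObjectMeta / api.Pod, just enough to keep the sketch self-contained.
type ObjectMeta struct {
    Name      string
    Namespace string
}

type Pod struct {
    ObjectMeta ObjectMeta
}

// podKey is a hypothetical helper: key the set on namespace plus name so pods
// with the same name in different namespaces no longer collide.
// TODO: look at replacing this with pod.ObjectMeta.UID once it is populated.
func podKey(pod Pod) string {
    return pod.ObjectMeta.Namespace + "/" + pod.ObjectMeta.Name
}

func main() {
    pods := []Pod{
        {ObjectMeta: ObjectMeta{Namespace: "ns1", Name: "web"}},
        {ObjectMeta: ObjectMeta{Namespace: "ns2", Name: "web"}}, // same name, different namespace
    }
    podSet := map[string]Pod{}
    for _, pod := range pods {
        podSet[podKey(pod)] = pod
    }
    fmt.Println(len(podSet)) // prints 2; keying on name alone would have printed 1
}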
Good catch. Done.
Please add a test that works with pods with the same name in different namespaces.
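A self-contained sketch of such a test, using a stand-in pod type because the real PodLister interface isn't shown in this diff; the actual test would exercise getMatchingPods with api.Pod objects:

package spreading

import "testing"

// pod is a stand-in for api.Pod; a real test would build api.Pod objects and a
// fake PodLister.
type pod struct {
    namespace, name string
}

func key(p pod) string { return p.namespace + "/" + p.name }

func TestPodsWithSameNameInDifferentNamespaces(t *testing.T) {
    pods := []pod{{"ns1", "web"}, {"ns2", "web"}}
    podSet := map[string]pod{}
    for _, p := range pods {
        podSet[key(p)] = p
    }
    if len(podSet) != 2 {
        t.Errorf("expected 2 distinct pods in the set, got %d", len(podSet))
    }
}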
Force-pushed from ab0f270 to ed5fca8.
Done and rebased. These Travis tests are a little finicky.
// TODO: look at replacing podSet key with pod.ObjectMeta.UID when it's populated
func getMatchingPods(labelSet map[string]string, podLister PodLister) (map[string]api.Pod, error) {
    podSet := map[string]api.Pod{}
    for k, v := range labelSet {
This is going to be pretty expensive. We're probably better off pulling the complete list of pods and then running the different label queries ourselves.
Can you factor this out as a different spreading function (maybe spreading v2)? I think there are problems here: if you have one node with lots of pods, each with one label match, and another node with a single pod that's an exact label match, they're going to get assigned the same priority score, which seems incorrect. I think this is a reasonable addition to our spreading function(s), but I'm not convinced it's a drop-in replacement. I think that @bgrant0607 is asking for a list of Services and then using those service pod sets as the label sets to calculate the spreading function across.
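A rough sketch of the "list once, filter in memory" alternative described above; the function name, the stand-in Pod type, and the key format are assumptions, not the scheduler's actual API:

package spreading

// Pod is a stand-in for api.Pod with only the fields this sketch needs.
type Pod struct {
    Namespace string
    Name      string
    Labels    map[string]string
}

// getMatchingPodsLocally fetches nothing itself: it assumes the complete pod
// list was retrieved once and evaluates every label query in memory instead of
// issuing one lister call per label. A pod is kept if it shares at least one
// label with the given label set.
func getMatchingPodsLocally(labelSet map[string]string, allPods []Pod) map[string]Pod {
    podSet := map[string]Pod{}
    for _, pod := range allPods {
        for k, v := range labelSet {
            if pod.Labels[k] == v {
                podSet[pod.Namespace+"/"+pod.Name] = pod
                break
            }
        }
    }
    return podSet
}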
Force-pushed from 23867dc to 481d4e9.
Fair point. I've pulled it into a second function. I couldn't see a way to access the service selectors without significant refactoring of the scheduler, whereas this addition is relatively isolated.
I agree with @brendandburns that the behavior of this could be bad in common cases. I tried to think of ways to tweak the weighting function so that exact and near-exact matches would count a lot more than partial matches, but didn't come up with anything I thought would work well.

One fundamental problem with this approach is that it doesn't take into account the purpose of the labels. For example, if a user put an env=dev label on all their pods, they'd all be spread even if otherwise unrelated. To spread by service, I'd just annotate the pods with the services targeting them, and then use that information the way you're using raw labels here.

I'm fine with adding this, but I wouldn't make it the default. We need a way to configure the scheduler factory to select the fit predicates and algorithm(s).
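A speculative sketch of the spread-by-service idea, assuming pods carried a hypothetical annotation (here "example.io/services") listing the services that target them; no component writes such an annotation today:

package spreading

import "strings"

// AnnotatedPod is a stand-in for api.Pod with just the fields this sketch needs.
type AnnotatedPod struct {
    Name        string
    Annotations map[string]string
}

// servicesFor reads the hypothetical comma-separated list of services that
// target a pod; the annotation key is an assumption.
func servicesFor(pod AnnotatedPod) []string {
    raw := pod.Annotations["example.io/services"]
    if raw == "" {
        return nil
    }
    return strings.Split(raw, ",")
}

// groupByService buckets pods by the service targeting them; each bucket could
// then be fed to the spreading logic the way raw label sets are used in this PR.
func groupByService(pods []AnnotatedPod) map[string][]AnnotatedPod {
    groups := map[string][]AnnotatedPod{}
    for _, pod := range pods {
        for _, svc := range servicesFor(pod) {
            groups[svc] = append(groups[svc], pod)
        }
    }
    return groups
}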
I'm convinced. I'll close the PR and think about it a bit more.
Could we start simply and just have an explicit label that we spread by?
The code we have already spreads all pods belonging to the same replication controller.
Spread pods that share labels rather than only spreading pods that have exactly equal label sets.
Referencing #1965; although maybe not a full resolution, this may be an improvement over the current implementation.
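A minimal sketch of the idea in this description: count shared labels between the pod being scheduled and existing pods, rather than requiring identical label sets. The names and weighting below are illustrative, not the PR's actual code:

package spreading

// LabeledPod is a stand-in for api.Pod carrying only its labels.
type LabeledPod struct {
    Labels map[string]string
}

// sharedLabelCount counts the key/value pairs two pods have in common, so pods
// that share any labels (not only identical label sets) contribute to spreading.
func sharedLabelCount(a, b LabeledPod) int {
    n := 0
    for k, v := range a.Labels {
        if b.Labels[k] == v {
            n++
        }
    }
    return n
}

// nodeCrowdingScore sums the label overlap between the pod being scheduled and
// the pods already on a node; a scheduler preferring lower scores would spread
// pods that share labels across nodes.
func nodeCrowdingScore(candidate LabeledPod, podsOnNode []LabeledPod) int {
    total := 0
    for _, existing := range podsOnNode {
        total += sharedLabelCount(candidate, existing)
    }
    return total
}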