
add LabelSelectorSpreadPriority for ReplicaSet in scheduler #21074

Merged (1 commit) Feb 16, 2016

Conversation

mqliang
Contributor

@mqliang mqliang commented Feb 11, 2016

Replace ReplicationController with ReplicaSet in the scheduler.

@bgrant0607 @madhusudancs

@k8s-github-robot

Labelling this PR as size/L

@k8s-github-robot k8s-github-robot added the size/L label (denotes a PR that changes 100-499 lines, ignoring generated files) Feb 11, 2016
@k8s-bot

k8s-bot commented Feb 11, 2016

GCE e2e build/test failed for commit 27c77c85f4ff82e4ae5b6fd0b5966ca2b741fafc.

@bgrant0607
Member

@mqliang Most users are going to be using ReplicationController for a while, so we can't remove the RC-based functionality from the scheduler yet. We should add ReplicaSet, but keep ReplicationController.

@bgrant0607 bgrant0607 added this to the v1.2 milestone Feb 11, 2016
@bgrant0607
Member

cc @janetkuo @Kargakis

@davidopp
Member

+1 to @bgrant0607's comment. Thanks for noticing this, @mqliang.

@mqliang
Contributor Author

mqliang commented Feb 12, 2016

I think I got it: we can replace ReplicationController with ReplicaSet in DeploymentController, because k8s v1.0 had no DeploymentController. But the selector-spreading functionality in the scheduler must stay backward compatible with v1.0. Hence, we should _ADD_ ReplicaSet and keep ReplicationController.

@k8s-bot

k8s-bot commented Feb 12, 2016

GCE e2e build/test failed for commit 93b96758d087426dcab0170bd1f3e0697c9bbf48.

@mqliang mqliang force-pushed the scheduler-ReplicaSet branch from 93b9675 to 15f9591 on February 12, 2016 03:37
@mqliang mqliang changed the title migrate ReplicationController to ReplicaSet in scheduler add LabelSelectorSpreadPriority for ReplicaSet in scheduler Feb 12, 2016
@k8s-bot

k8s-bot commented Feb 12, 2016

GCE e2e build/test failed for commit 15f9591f3399c8f0f89337ed78945362879dcca1.

@davidopp
Member

Instead of creating a new LabelSelectorSpread type and associated CalculateSpreadPriority(), it would be better to combine these into the existing SelectorSpread and its CalculateSpreadPriority(). So you would have

type SelectorSpread struct {
    podLister        algorithm.PodLister
    serviceLister    algorithm.ServiceLister
    controllerLister algorithm.ControllerLister
    replicaSetLister algorithm.ReplicaSetLister
}

and in SelectorSpread.CalculateSpreadPriority() you would do

    services, err := s.serviceLister.GetPodServices(pod)
    if err == nil {
        for _, service := range services {
            selectors = append(selectors, labels.SelectorFromSet(service.Spec.Selector))
        }
    }
    controllers, err := s.controllerLister.GetPodControllers(pod)
    if err == nil {
        for _, controller := range controllers {
            selectors = append(selectors, labels.SelectorFromSet(controller.Spec.Selector))
        }
    }
    rss, err := s.replicaSetLister.GetPodReplicaSets(pod)
    if err == nil {
        for _, rs := range rss {
            if selector, err := unversioned.LabelSelectorAsSelector(rs.Spec.Selector); err == nil {
                selectors = append(selectors, selector)
            }
        }
    }

Once you have created the selectors list, there's really no difference between how you handle services, ReplicationControllers, or ReplicaSets. So I don't think there's any need to add a new LabelSelectorSpreadPriority priority function. You can just add ReplicaSet to existing SelectorSpread.

Does that sound reasonable?
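[Editor's note] To make the suggestion above concrete, here is a minimal, self-contained Go sketch, not code from this PR: `Pod`, `matches`, and `spreadScores` are hypothetical, simplified stand-ins for the scheduler's real types and listers. It shows why the source of a selector (service, ReplicationController, or ReplicaSet) stops mattering once everything is collected into one `selectors` list: the spreading logic only counts, per node, how many existing pods any selector matches.

```go
package main

import "fmt"

// Pod is a simplified stand-in for the scheduler's pod type.
type Pod struct {
	Name   string
	Node   string
	Labels map[string]string
}

// matches reports whether a simple set-based selector selects the
// given pod labels (every key/value in the selector must be present).
func matches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

// spreadScores counts, per node, how many existing pods match any of
// the collected selectors. A scheduler spreading priority would then
// prefer nodes with lower counts.
func spreadScores(pods []Pod, selectors []map[string]string) map[string]int {
	counts := map[string]int{}
	for _, p := range pods {
		for _, sel := range selectors {
			if matches(sel, p.Labels) {
				counts[p.Node]++
				break // count each pod at most once
			}
		}
	}
	return counts
}

func main() {
	pods := []Pod{
		{Name: "a", Node: "node1", Labels: map[string]string{"app": "web"}},
		{Name: "b", Node: "node1", Labels: map[string]string{"app": "web"}},
		{Name: "c", Node: "node2", Labels: map[string]string{"app": "web"}},
	}
	// One selectors list, regardless of whether each entry came from a
	// service, an RC, or a ReplicaSet.
	selectors := []map[string]string{{"app": "web"}}
	fmt.Println(spreadScores(pods, selectors)) // prints map[node1:2 node2:1]
}
```

Under this view, adding ReplicaSet support is purely a matter of appending more selectors to the list, which is why no separate priority function is needed.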

@mqliang mqliang force-pushed the scheduler-ReplicaSet branch from 15f9591 to cbde1f3 on February 15, 2016 16:15
@k8s-bot

k8s-bot commented Feb 15, 2016

GCE e2e build/test failed for commit cbde1f3fdb142c74a7def7f9192bab99d5e018ab.

@davidopp
Member

ping @mqliang -- I'd really like to get this into 1.2

@mqliang mqliang force-pushed the scheduler-ReplicaSet branch from cbde1f3 to d7269ab on February 16, 2016 02:54
@k8s-bot

k8s-bot commented Feb 16, 2016

GCE e2e build/test failed for commit d7269ab44a6b578b4651b79e6bf1e9e8a2772e3d.

@mqliang mqliang force-pushed the scheduler-ReplicaSet branch from d7269ab to c888649 on February 16, 2016 03:38
@k8s-bot

k8s-bot commented Feb 16, 2016

GCE e2e test build/test passed for commit c888649882c075f81ca4778a694841c634c6b987.

@mqliang
Contributor Author

mqliang commented Feb 16, 2016

@davidopp Updated. PTAL.

@@ -229,6 +234,11 @@ func (f *ConfigFactory) CreateFromKeys(predicateKeys, priorityKeys sets.String,
// Cache this locally.
cache.NewReflector(f.createControllerLW(), &api.ReplicationController{}, f.ControllerLister.Store, 0).RunUntil(f.StopEverything)

// Watch and cache all ReplicaSet objects. Scheduler needs to find all pods
// created by the same services or ReplicaSets, so that it can spread them correctly.

s/services or ReplicaSets/services or ReplicationControllers or ReplicaSets/

also, please make the same fix on L233 and L228

@davidopp
Member

This looks great -- thanks! I just had one very minor comment.

@mqliang mqliang force-pushed the scheduler-ReplicaSet branch from c888649 to fbf859c on February 16, 2016 06:30
@k8s-bot

k8s-bot commented Feb 16, 2016

GCE e2e test build/test passed for commit fbf859c09d0614320674352e3baae3b7eb6c8da4.

@mqliang mqliang force-pushed the scheduler-ReplicaSet branch from fbf859c to 0aab44a on February 16, 2016 07:27
@mqliang
Contributor Author

mqliang commented Feb 16, 2016

@davidopp Comments addressed. PTAL

@davidopp
Member

LGTM

Thanks!

@davidopp davidopp added the ok-to-merge and lgtm ("Looks good to me", indicates that a PR is ready to be merged) labels Feb 16, 2016
@k8s-bot

k8s-bot commented Feb 16, 2016

GCE e2e test build/test passed for commit 0aab44a.

@k8s-github-robot

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-bot

k8s-bot commented Feb 16, 2016

GCE e2e test build/test passed for commit 0aab44a.

@k8s-github-robot

Automatic merge from submit-queue

k8s-github-robot pushed a commit that referenced this pull request Feb 16, 2016
@k8s-github-robot k8s-github-robot merged commit 8a815b9 into kubernetes:master Feb 16, 2016
@mqliang mqliang deleted the scheduler-ReplicaSet branch February 16, 2016 09:51
Labels: area/app-lifecycle, lgtm, size/L
7 participants