Move Deployments to ReplicaSets and switch the Deployment selector to the new LabelSelector. #19840
Conversation
Labelling this PR as size/XS
Force-pushed from 9b31346 to 81dd187.
GCE e2e test build/test passed for commit 9b31346b2f8ddff923011f67b113e8cf994207f1.
GCE e2e test build/test passed for commit 81dd187f72b52816f380fd0ef3d775b417711162.
Force-pushed from 81dd187 to 7f7eb21.
GCE e2e test build/test passed for commit 7f7eb2187467126eb9eb08afcdc059c90a7cf61c.
Yes, we should describe how to migrate. Simple way: delete all Deployments, their RCs, and their pods, and re-create new ones using selector.matchLabels. Way to not disrupt the running pods:
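To illustrate the selector.matchLabels form mentioned above, here is a minimal, self-contained Go sketch. The struct names are simplified stand-ins for the real API types, not the actual Kubernetes definitions:

```go
package main

import "fmt"

// Old shape: a Deployment selector was a plain label map.
type oldDeploymentSpec struct {
	Selector map[string]string
}

// New shape: the selector is a LabelSelector whose MatchLabels field
// carries the same map-based requirements (matchLabels in YAML).
type labelSelector struct {
	MatchLabels map[string]string
}

type newDeploymentSpec struct {
	Selector *labelSelector
}

func main() {
	old := oldDeploymentSpec{Selector: map[string]string{"app": "nginx"}}

	// Migration: the old selector map moves under selector.matchLabels.
	migrated := newDeploymentSpec{
		Selector: &labelSelector{MatchLabels: old.Selector},
	}
	fmt.Println(migrated.Selector.MatchLabels) // map[app:nginx]
}
```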
Could one of @janetkuo, @nikhiljindal, @Kargakis, or @ironcladlou please review this?
This PR is insanely huge. I would propose breaking it down into smaller self-contained PRs (1. addition of ReplicaSets, 2. move Deployments to use ReplicaSets, 3. change the Deployment label selector).
Ah, I now noticed that only the last two commits need reviewing here. Sorry for the noise.
@Kargakis Yes, all the ReplicaSet PRs seemed to be blocked on the first PR in the submit queue.
Also, just to clarify: it doesn't make sense to move Deployments to ReplicaSets and then switch to the label selector in two separate PRs while keeping the build and all the tests passing in both PRs.
LGTM once comments from previous PRs are addressed. Will wait for previous PRs to merge before applying the LGTM label. This will need to be rebased, code regenerated, etc., anyway.
Force-pushed from 7f7eb21 to 2564f34.
GCE e2e build/test failed for commit 2564f34c8a57386e5774af86c6a9acd472026be1.
Force-pushed from 2564f34 to a2247fe.
Labelling this PR as size/XXL
GCE e2e test build/test passed for commit b929f1d16b58d2297d81335da8bad8117b2f7c78.
@janetkuo PTAL. Ran the e2e tests manually again with --test_args='--ginkgo.focus=Deployment.*' and they all passed.
```diff
@@ -149,9 +149,9 @@ func RunExpose(f *cmdutil.Factory, out io.Writer, cmd *cobra.Command, args []str
 	// For objects that need a pod selector, derive it from the exposed object in case a user
 	// didn't explicitly specify one via --selector
 	if s, found := params["selector"]; found && kubectl.IsZero(s) {
-		s, err := f.PodSelectorForObject(inputObject)
```
Will `expose` support replicasets? If so, we need to add replicasets to `CanBeExposed` in factory.go, and modify the description of `kubectl expose` in this file too. This can be done in a follow-up PR.
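For context on what that would involve, here is a hedged sketch of a CanBeExposed-style gate. This is hypothetical: the real factory method's signature and the exact set of kinds may differ.

```go
package main

import "fmt"

// canBeExposed mimics an allowlist check for `kubectl expose`: only
// kinds that carry a pod selector can back a Service. Supporting
// replicasets would mean adding that kind to the list.
func canBeExposed(kind string) error {
	switch kind {
	case "ReplicationController", "Service", "Pod", "ReplicaSet":
		return nil
	}
	return fmt.Errorf("cannot expose a %s", kind)
}

func main() {
	fmt.Println(canBeExposed("ReplicaSet")) // <nil>
	fmt.Println(canBeExposed("Node"))       // cannot expose a Node
}
```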
Force-pushed from b929f1d to 7f7dda7.
GCE e2e test build/test passed for commit 7f7dda7.
@madhusudancs LGTM other than a minor suggestion. Feel free to add the tag.
Force-pushed from 7f7dda7 to e7a9f30.
@janetkuo Thanks! Addressed the comment. Adding LGTM.
GCE e2e test build/test passed for commit e7a9f30.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue
Auto commit by PR queue bot
```diff
@@ -149,9 +149,9 @@ func RunExpose(f *cmdutil.Factory, out io.Writer, cmd *cobra.Command, args []str
 	// For objects that need a pod selector, derive it from the exposed object in case a user
 	// didn't explicitly specify one via --selector
 	if s, found := params["selector"]; found && kubectl.IsZero(s) {
-		s, err := f.PodSelectorForObject(inputObject)
+		s, err := f.MapBasedSelectorForObject(inputObject)
```
What's the difference between `MapBasedSelectorForObject` and `PodSelectorForObject`? Do we need both?
Just saw the discussion in https://github.com/kubernetes/kubernetes/pull/19840/files#r52379928
@madhusudancs I think you can safely collapse both methods into one as expose is the only place they are used.
@Kargakis Since this is in pkg/kubectl/... and an exported function, we weren't sure who else outside Kubernetes was using this, particularly OpenStack. So we left the existing function as is. If you are certain that OpenStack doesn't use the factory, we can probably eliminate one of the two functions.
cc @bgrant0607
We are using the old method, but the signatures are the same and we can repurpose factory methods however we like. That being said, let's keep it around, and once we rebase and make use of the new method I'll submit a fix to remove the old one.
Sure, thanks. We did not want to cause any surprises.
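To make the distinction between the two factory methods concrete, here is a rough, self-contained sketch (simplified types and signatures, not the real kubectl code) of why a map-based variant must fail on set-based selectors: a Service selector can only hold plain key=value pairs.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type labelSelector struct {
	MatchLabels      map[string]string
	MatchExpressions []string // simplified stand-in for set-based requirements
}

// mapBasedSelector renders a selector in the "k1=v1,k2=v2" form that a
// Service selector can hold. It must fail when set-based expressions
// are present, since those cannot be expressed as a plain map.
func mapBasedSelector(s labelSelector) (string, error) {
	if len(s.MatchExpressions) > 0 {
		return "", fmt.Errorf("couldn't convert expressions to map-based selector format")
	}
	pairs := make([]string, 0, len(s.MatchLabels))
	for k, v := range s.MatchLabels {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs) // deterministic output for tests and diffing
	return strings.Join(pairs, ","), nil
}

func main() {
	ok := labelSelector{MatchLabels: map[string]string{"app": "nginx"}}
	bad := labelSelector{MatchExpressions: []string{"tier in (frontend,cache)"}}

	fmt.Println(mapBasedSelector(ok))  // "app=nginx" <nil>
	fmt.Println(mapBasedSelector(bad)) // "" with a conversion error
}
```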
Release note for v1.2 (copying @bgrant0607's comment from below for better visibility):
Delete all Deployments, their RCs, and their pods, and re-create new ones using selector.matchLabels.
Way to not disrupt the running pods:
End of release note
@bgrant0607 I have added the "release-note-breaking-change" label, but I am not sure if this change qualifies for a release note describing "a way to achieve or approximate previous behavior, or migration that should/could be done prior to upgrade" that you mentioned in your email. Do you think this deserves a description?
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.
Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
Only the last two commits in this PR need reviews.
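Finally, to show what the switch to LabelSelector buys, here is a small self-contained sketch (simplified types, not the real API helpers) of evaluating a selector with set-based matchExpressions, something the old map[string]string selector could not express:

```go
package main

import "fmt"

type requirement struct {
	Key      string
	Operator string // "In" or "NotIn", simplified
	Values   []string
}

type labelSelector struct {
	MatchLabels      map[string]string
	MatchExpressions []requirement
}

// matches reports whether the given labels satisfy the selector,
// mirroring (in spirit) LabelSelector semantics: every matchLabels
// entry and every matchExpressions requirement must hold.
func matches(s labelSelector, labels map[string]string) bool {
	for k, v := range s.MatchLabels {
		if labels[k] != v {
			return false
		}
	}
	for _, r := range s.MatchExpressions {
		in := false
		for _, v := range r.Values {
			if labels[r.Key] == v {
				in = true
				break
			}
		}
		if (r.Operator == "In" && !in) || (r.Operator == "NotIn" && in) {
			return false
		}
	}
	return true
}

func main() {
	sel := labelSelector{
		MatchLabels: map[string]string{"app": "nginx"},
		MatchExpressions: []requirement{
			{Key: "tier", Operator: "In", Values: []string{"frontend", "cache"}},
		},
	}
	fmt.Println(matches(sel, map[string]string{"app": "nginx", "tier": "frontend"})) // true
	fmt.Println(matches(sel, map[string]string{"app": "nginx", "tier": "db"}))       // false
}
```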