Create a new Deployment in kube-system for every version. #23512
Conversation
Labelling this PR as size/XS
GCE e2e build/test passed for commit 4c973ecaaa0740bdc7fdf1e915d3a2ff3b6dd875.
@@ -25,7 +25,6 @@ spec:
    metadata:
      labels:
These need a version label or two deployments will select the same set of replicasets
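For context, a minimal sketch of what that fix looks like (the label key and value here are illustrative, not quoted from this diff). In extensions/v1beta1 a Deployment with no explicit .spec.selector defaults the selector to the pod template labels, so two versions with identical template labels would select each other's ReplicaSets:

  template:
    metadata:
      labels:
        k8s-app: heapster
        kubernetes.io/cluster-service: "true"
        version: v1.1.0   # illustrative value; a per-version label keeps each Deployment's selector distinct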
Ack.
@mikedanese, I noticed your comment "need a version label or two deployments will select the same set of replicasets". This is news to me, but I'm a novice, so I'm probably just confused. Could you spare a moment to explain this concern in more detail, or point me to some relevant docs? Thanks.
I think we should rename deployment names from
Fair point about get-suffix.
Labelling this PR as size/M
Thanks. This looks good to me. @fgrzadkowski I think this fixes #23471. WDYT?
GCE e2e build/test failed for commit aa94464e68c02ee6f53a50ff4bea4b89ebb53402. Please reference the list of currently known flakes when examining this failure. If you request a re-test, you must reference the issue describing the flake.
GCE e2e build/test passed for commit 7f4487b21c5c73964a4a189f20ed3dbce5225471.
  template:
    metadata:
      labels:
        k8s-app: heapster
        kubernetes.io/cluster-service: "true"
        version: v1.1.0.beta
The version labels don't match the controller name. Was that intentional? It's like this in a few spots.
No, I just hadn't tested all 4 configs yet.
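For illustration only (the exact name and label value below are assumptions, not quoted from the PR), the expectation is that the version label tracks the version suffix in the Deployment name:

metadata:
  name: heapster-v1.1.0        # assumed versioned name
spec:
  template:
    metadata:
      labels:
        version: v1.1.0        # should match the version suffix in the name above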
One last comment, then I will apply LGTM.
LGTM
GCE e2e build/test passed for commit 0dcd49d.
Flaky test: #23610
GCE e2e build/test passed for commit 0dcd49d.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 0dcd49d.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 0dcd49d.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 0dcd49d.
Automatic merge from submit-queue
LGTM. Thanks for doing this. My only concern is that there is no benefit to using a Deployment instead of a ReplicationController here, because we delete and recreate the Deployment object during each update. We should use the Deployment upgrade (rolling update) feature instead, but that requires a significant change to the kube-addon upgrade script.
  template:
    metadata:
      labels:
        k8s-app: heapster
        kubernetes.io/cluster-service: "true"
With this label removed from the pods and replica sets, what will delete them when the version is upgraded in the future?
Addon updater uses kubectl stop.
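A hedged sketch of that mechanism (object name assumed): the cluster-service label stays on the Deployment's own metadata, which is what the addon updater selects; kubectl stop on that Deployment is then expected to clean up its ReplicaSets and pods as well, so the pods themselves no longer need the label:

metadata:
  name: heapster-v1.1.0                      # assumed name for illustration
  namespace: kube-system
  labels:
    k8s-app: heapster
    kubernetes.io/cluster-service: "true"    # addon updater finds top-level addon objects by this label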
The addon-resizer doesn't fight with the addon-updater?
@piosz I agree that we should be using Deployment rolling updates between versions of addons. The reason we need Deployments for this addon today is that the resizer directly modifies the Deployment to adjust resource requirements in the pod spec and relies on the rolling update; this wouldn't work if we were using a ReplicationController. @bgrant0607 we only update when the name changes; we don't look for modifications of the spec, so they shouldn't fight.
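As a rough illustration of that interaction (container name, image, and values are assumptions), the addon-resizer rewrites the resource requirements inside the Deployment's pod template, and the Deployment controller then rolls the change out:

    spec:
      containers:
        - name: heapster                                    # assumed container name
          image: gcr.io/google_containers/heapster:v1.1.0   # assumed image and tag
          resources:
            limits:
              cpu: 100m        # values like these are what the resizer patches in place
              memory: 300Mi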
@Q-Lee -- can you create a cherry pick PR for this change on the release-1.2 branch?
We'll need to cherry pick #22893 too
Commit found in the "release-1.2" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error, find help to get your PR picked.
…ck-of-#22893-kubernetes#23512-upstream-release-1.2 Automated cherry pick of kubernetes#22893 kubernetes#23512
It appears that version numbers have already been properly added to these files. This is a small change to delete an old deployment entirely, so we can make a new one per version (as we do with replication controllers).
We'll want to change this back once the kube-addons support deployments in a later version.
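A hypothetical sketch of the resulting convention (names illustrative, not quoted from the PR): each release ships a Deployment whose name embeds the version, so an upgrade deletes the old object outright and creates a new one, mirroring how the versioned replication controllers work today:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-v1.1.0                    # name changes with each release; the old Deployment is deleted
  namespace: kube-system
  labels:
    k8s-app: heapster
    version: v1.1.0
    kubernetes.io/cluster-service: "true"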