
Compute container hash based on API content, not Go type #57741

Merged: 1 commit, Aug 22, 2019

Conversation

@dixudx (Member) commented Jan 2, 2018

What this PR does / why we need it:
Currently, when upgrading the cluster, Docker containers may get restarted if the Container struct has changed.

What we expect is for the containers to keep running, since their specs did not change at all.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #53644

Special notes for your reviewer:
/assign @tallclair @vishh
WARNING: This change will still cause the container spec hash to change (and containers to restart) once, when upgrading to a release that includes it.

But it will keep the hash value consistent for future releases.
Release note:

Omit nil or empty fields when calculating the container hash value to avoid unnecessary hash changes. For a new field with a non-nil default value in the container spec, the hash will still change.

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 2, 2018
@dixudx (Member, Author) commented Jan 2, 2018

v1.9.0 already introduced a new field, VolumeDevices, to Container, which causes the hash value to change.

@dixudx dixudx force-pushed the container_hash branch 2 times, most recently from 9ddc712 to 16dce31 on January 3, 2018 03:00
@dixudx (Member, Author) commented Jan 3, 2018

/cc @yujuhong

@k8s-ci-robot k8s-ci-robot requested a review from yujuhong January 3, 2018 06:51
@yujuhong (Contributor) commented Jan 3, 2018

This wouldn't help much. If you read the old issue #23104 completely, you'd realize that if you introduce a new field in the container spec with a non-nil default value, the hash would still be different.

I do think this behavior (container restarts) should be called out in the release note.

@liggitt (Member) commented Jan 9, 2018

I don't understand how the serialization/deserialization helps the issue

@dixudx (Member, Author) commented Jan 9, 2018

I don't understand how the serialization/deserialization helps the issue

@liggitt The serialization/deserialization round trip removes nil pointers and empty slices from the struct, which keeps the to-be-hashed object unchanged even when new pointer or slice fields are added.

Currently the container hash changes whenever the Container struct gains a new field. This causes containers to restart when upgrading to a new version in which a field was added to the Container struct.

This is why we should omit nil or empty fields when calculating the container hash value.
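
For illustration, here is a minimal sketch of that round trip (simplified, not the PR's exact code; the helper name hashContainerContent is made up for this example):

```go
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"

	v1 "k8s.io/api/core/v1"
)

// hashContainerContent hashes the JSON serialization of a container instead
// of its Go representation. The `omitempty` tags on v1.Container fields drop
// nil pointers and empty slices, so adding a new, unset field to the struct
// does not change the hash.
func hashContainerContent(c *v1.Container) (uint32, error) {
	data, err := json.Marshal(c)
	if err != nil {
		return 0, err
	}
	h := fnv.New32a() // FNV-32a, the same hash family the kubelet uses
	h.Write(data)
	return h.Sum32(), nil
}

func main() {
	c := &v1.Container{Name: "nginx", Image: "nginx:1.15"}
	sum, err := hashContainerContent(c)
	if err != nil {
		panic(err)
	}
	fmt.Printf("container hash: %x\n", sum)
}
```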

@dixudx (Member, Author) commented Jan 9, 2018

you'd realize that if you introduce a new field in the container spec with a non-nil default value, the hash would still be different.

@yujuhong Yes, right. IMO, if we change the default behavior, we should let the container restart; in that case the hash change is desired.

What this PR does is omit nil or empty fields, new or old, when calculating the container hash. For a newly added non-nil or non-empty field, a hash change is inevitable.

@liggitt (Member) commented Jan 9, 2018

Currently the container hash changes whenever the Container struct gains a new field. This causes containers to restart when upgrading to a new version in which a field was added to the Container struct.
This is why we should omit nil or empty fields when calculating the container hash value.

I still don't understand. All the pods the kubelet deals with should be obtained from deserializing API responses or manifests from disk. How does reserializing and deserializing again help change the container the kubelet deals with?

@dixudx (Member, Author) commented Jan 9, 2018

All the pods the kubelet deals with should be obtained from deserializing API responses or manifests from disk.

Right.

How does reserializing and deserializing again help change the container the kubelet deals with?

@liggitt Every time the kubelet restarts, the container hash is recalculated to determine whether the container needs to be restarted. What this PR tries to do is keep the hash value as consistent as possible to avoid unneeded restarts. This PR won't change the pod spec or container spec.

By reserializing and deserializing into a new object (map[string]interface{}), nil or empty fields are omitted when calculating the container hash value. This leaves the hash value unchanged even when a new field with a nil/empty value is appended to the Container struct.

v1.9.0 introduced a new field, VolumeDevices, to Container, which causes the hash value to change when upgrading a cluster from a version older than v1.9.0.
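
To make the VolumeDevices case concrete, a self-contained toy example (the two structs below are hypothetical stand-ins for the pre-1.9 and 1.9 shapes of Container, not the real type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-ins for the Container struct before and after a release adds a
// field; both are hypothetical, for illustration only.
type containerPre19 struct {
	Name string `json:"name"`
}

type containerV19 struct {
	Name          string   `json:"name"`
	VolumeDevices []string `json:"volumeDevices,omitempty"`
}

func main() {
	old := containerPre19{Name: "nginx"}
	cur := containerV19{Name: "nginx"}

	// A hash over the Go value representation differs across versions,
	// because the struct shapes differ even though the new field is unset.
	fmt.Println(fmt.Sprintf("%#v", old) == fmt.Sprintf("%#v", cur)) // false

	// The JSON content is identical while the new field is nil/empty,
	// so a content-based hash stays stable.
	a, _ := json.Marshal(old)
	b, _ := json.Marshal(cur)
	fmt.Println(string(a) == string(b)) // true: both are {"name":"nginx"}
}
```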

@liggitt (Member) commented Jan 9, 2018

By reserializing and deserializing into a new object, nil or empty fields are omitted when calculating the container hash value.

Can you describe how the kubelet loaded this v1 container into memory from an api response or a manifest in a way that would result in different content than if it were roundtripped back to json and then back into memory?

@liggitt (Member) commented Jan 9, 2018

it seems like the issue is less with nil fields and more with the way the hash is being computed. This change will not prevent a new defaulted field from changing the hash, and seems like it would guarantee that hashes would change between 1.9 and 1.10, which isn't great.

Do we not save the serialized container spec we had when we launched the container? If we did, we could load it, apply defaults, and do a semantic deepequal to see if there was a diff that required restarting. Working to detect actual differences seems better than trying to improve the hashing approach which still leaves many unnecessary restart scenarios in place.
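
Roughly, under the assumption that the serialized spec had been stored somewhere at launch time, the comparison could look like this (a hypothetical helper sketching the alternative being discussed, not what the PR implements):

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// needsRestart compares the container spec saved at launch time against the
// desired spec with the API machinery's semantic deep-equal. In real code,
// current defaults would first be applied to the saved spec (omitted here),
// so a newly added defaulted field would get the same value on both sides
// and not force a restart by itself.
func needsRestart(savedSpecJSON []byte, desired *v1.Container) (bool, error) {
	var saved v1.Container
	if err := json.Unmarshal(savedSpecJSON, &saved); err != nil {
		return true, err // unreadable saved spec: fall back to restarting
	}
	return !apiequality.Semantic.DeepEqual(&saved, desired), nil
}

func main() {
	desired := &v1.Container{Name: "nginx", Image: "nginx:1.15"}
	saved, _ := json.Marshal(desired)
	restart, _ := needsRestart(saved, desired)
	fmt.Println("needs restart:", restart) // false: specs are semantically equal
}
```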

@dixudx (Member, Author) commented Jan 9, 2018

This change will not prevent a new defaulted field from changing the hash

Right.

Do we not save the serialized container spec we had when we launched the container?

We do save the hash value in container labels, but not the serialized container spec.

If we did, we could load it, apply defaults, and do a semantic deepequal to see if there was a diff that required restarting.
Working to detect actual differences seems better than trying to improve the hashing approach which still leaves many unnecessary restart scenarios in place.

@liggitt Makes sense. But this is not going to be a small change.

  1. Mark the current hash annotation deprecated; it may take 2 releases to remove it?

  2. Add a new label, such as containerSerializedSpecLabel, to save the serialized container spec; load it and apply defaults for the deep-equal comparison.

  3. Backwards compatibility: for an older container without containerSerializedSpecLabel in its labels, how can we tell whether it should be restarted?

    • Still follow the old behavior, looking up the old containerHashLabel? If the hash value matches, there is no need to restart. But container labels currently cannot be updated without a restart (Proposal: Update labels, moby/moby#21721), so the new containerSerializedSpecLabel would never get set on existing containers.
    • Enforce the containerSerializedSpecLabel label directly. If the label is missing, simply restart the container and attach the new label.

I prefer the second option. @liggitt Any suggestions?

ping @kubernetes/sig-node-api-reviews. We need your opinions. Thanks.

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API labels Jan 9, 2018
@fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 25, 2018
@dixudx (Member, Author) commented May 15, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 15, 2018
@tallclair (Member) commented

/assign @yujuhong

@dixudx (Member, Author) commented Aug 13, 2019

As the PR is tagged for 1.16, is it still planned for this release?

@xmudrii Yes, still targeted to v1.16.

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Aug 16, 2019
@fejta-bot commented

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@dixudx (Member, Author) commented Aug 16, 2019

Ping @liggitt for review. Thanks.

@liggitt (Member) commented Aug 16, 2019

The way I'd probably test this is to take your new HashContainer method and compute a hash for a stubbed v1.Container in 1.15, then compare that to the same stubbed container in 1.16. The prior implementation would differ because new fields were added to the struct. The new implementation should remain the same, since the JSON serializations would not have changed.

Rather than committing the copies of types.go files, I'd just record the computed hash for a given pod spec in 1.14 and 1.15 and test against the same HEAD pod spec.
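
A sketch of that test shape (the recorded constant below is a placeholder to be captured once from a real 1.14/1.15 build, and hashContainerContent refers to the earlier sketch in this thread, assumed to live in the same package):

```go
package main

import (
	"testing"

	v1 "k8s.io/api/core/v1"
)

// TestContainerHashStableAcrossReleases pins the content-based hash of a
// stubbed container to a value recorded from an earlier release. If a newly
// added struct field changes the hash while unset, this test fails at HEAD.
func TestContainerHashStableAcrossReleases(t *testing.T) {
	// Placeholder: record this constant once from a 1.14/1.15 build.
	const recordedHash = uint32(0x0)

	c := &v1.Container{Name: "nginx", Image: "nginx:1.15"}
	got, err := hashContainerContent(c)
	if err != nil {
		t.Fatal(err)
	}
	if got != recordedHash {
		t.Errorf("container hash changed: got %#x, recorded %#x", got, recordedHash)
	}
}
```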

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Aug 19, 2019
@dixudx (Member, Author) commented Aug 22, 2019

@liggitt Please take another review. Thanks.

@k8s-ci-robot (Contributor) commented Aug 22, 2019

@dixudx: The following tests failed, say /retest to rerun them all:

Test name | Commit | Rerun command
pull-kubernetes-local-e2e-containerized | 16dce31162ae1ea11a68e18069a61099aee50b90 | /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-e2e-kops-aws | 16dce31162ae1ea11a68e18069a61099aee50b90 | /test pull-kubernetes-e2e-kops-aws

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@liggitt (Member) commented Aug 22, 2019

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 22, 2019
@k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dixudx, liggitt

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. area/kubelet cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/bug Categorizes issue or PR as related to a bug. lgtm "Looks good to me", indicates that a PR is ready to be merged. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/node Categorizes an issue or PR as relevant to SIG Node. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Container spec hash changed. Container will be killed and recreated.
9 participants