
Rename fields in ContainerImage object #21182

Merged (1 commit, Feb 19, 2016)

Conversation

vishh
Contributor

@vishh vishh commented Feb 12, 2016

Reference: #18248
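For context, the renamed object ends up shaped roughly like this (a sketch of the resulting v1 API; the exact field names and JSON tags here are assumptions, not the authoritative definition):

```go
package main

import "fmt"

// ContainerImage sketches the shape of the renamed object as it
// appears in NodeStatus.Images after this change (field names
// assumed; treat as illustrative, not authoritative).
type ContainerImage struct {
	// Names holds the known names of the image, e.g. repo tags
	// and digest references.
	Names []string `json:"names"`
	// SizeBytes is the size of the image in bytes.
	SizeBytes int64 `json:"sizeBytes,omitempty"`
}

func main() {
	img := ContainerImage{
		Names:     []string{"mysql:latest", "mysql@sha256:deadbeef"},
		SizeBytes: 123456789,
	}
	fmt.Println(len(img.Names), img.SizeBytes)
}
```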

@vishh vishh added this to the v1.2-candidate milestone Feb 12, 2016
@vishh
Contributor Author

vishh commented Feb 12, 2016

cc @bgrant0607 @dchen1107 @yujuhong

@k8s-github-robot

Labelling this PR as size/S

@k8s-github-robot added the kind/api-change (API change) and size/S (10–29 changed lines, ignoring generated files) labels Feb 12, 2016
@k8s-bot

k8s-bot commented Feb 12, 2016

GCE e2e test build/test passed for commit 012ec0f42875df068e29cc6267543c34ffa863b4.

@k8s-github-robot

Labelling this PR as size/M

@k8s-github-robot added the size/M (30–99 changed lines, ignoring generated files) label and removed the size/S label Feb 12, 2016
@k8s-bot

k8s-bot commented Feb 12, 2016

GCE e2e build/test failed for commit d69d1152b81e1feb2c9e8801bcde0ad179a8d15f.

@vishh
Contributor Author

vishh commented Feb 12, 2016

@k8s-bot test this github issue: #21138

@k8s-bot

k8s-bot commented Feb 12, 2016

GCE e2e test build/test passed for commit d69d1152b81e1feb2c9e8801bcde0ad179a8d15f.

@bgrant0607
Member

LGTM

@bgrant0607 bgrant0607 modified the milestones: v1.2, v1.2-candidate Feb 12, 2016
@bgrant0607 added the lgtm (ready to be merged) label Feb 12, 2016
@k8s-github-robot

PR needs rebase

@k8s-github-robot added the needs-rebase (merge conflicts with HEAD) label Feb 13, 2016
…tus.

Signed-off-by: Vishnu kannan <vishnuk@google.com>
@k8s-github-robot

PR changed after LGTM, removing LGTM.

@k8s-github-robot removed the lgtm and needs-rebase labels Feb 16, 2016
@vishh
Contributor Author

vishh commented Feb 17, 2016

@bgrant0607: Need another LGTM due to a rebase.

@k8s-bot

k8s-bot commented Feb 17, 2016

GCE e2e test build/test passed for commit 2623fdd.

@bgrant0607 added the lgtm label Feb 17, 2016
@bgrant0607
Member

LGTM

@k8s-github-robot

@k8s-bot test this

Tests are more than 48 hours old. Re-running tests.

@k8s-bot

k8s-bot commented Feb 19, 2016

GCE e2e test build/test passed for commit 2623fdd.

@bgrant0607 added the priority/important-soon (must be worked on soon, ideally for the next release) label Feb 19, 2016
@bgrant0607
Member

@liggitt @deads2k

Do you have any concerns about putting image names in NodeStatus? It seems like they might contain sensitive information, and it would be unfortunate to have to restrict access to nodes.

@liggitt
Member

liggitt commented Feb 19, 2016

Is this the name that matches the container image name? If we expect someone to have access to the node API object, but not access to the bound pods for the node, then yes, this exposes a lot more information.

I'd also be concerned about the size of the node records listing every image they contain, especially as frequently as they are updated.

@deads2k
Contributor

deads2k commented Feb 19, 2016

Do you have any concerns about putting image names in NodeStatus? It seems like they might contain sensitive information, and it would be unfortunate to have to restrict access to nodes.

We already restrict access to node resources pretty tightly (cluster-admin sort of roles), so I'm not too concerned about adding this information since it's already available on the node itself.

@deads2k
Contributor

deads2k commented Feb 19, 2016

What is the utility in having this information? I'd be more worried about someone trying to use this for scheduling decisions, which is a big security hole if you aren't verifying pull access to local images.

This would allow someone to try to get access to a restricted image: a "smart" scheduler would look for the node that already has that image, schedule it there, and bypass the authorization check, thus making a really bad choice. Hopefully either a) that's not why you're doing this, b) you've considered and solved this problem, or c) you never have the concept of a private image (unlikely).

@k8s-github-robot

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-bot

k8s-bot commented Feb 19, 2016

GCE e2e test build/test passed for commit 2623fdd.

@k8s-github-robot

Automatic merge from submit-queue

k8s-github-robot pushed a commit that referenced this pull request Feb 19, 2016
@k8s-github-robot k8s-github-robot merged commit 057b835 into kubernetes:master Feb 19, 2016
@bgrant0607
Member

@deads2k Yes, the information was intended to be used for scheduling.

ImageLocalityPriority:
https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/scheduler/algorithm/priorities/priorities.go#L156

Filed #21560 to discuss this type of policy/convention.
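The idea behind ImageLocalityPriority can be sketched as follows. This is a simplified stand-in, not the real priority function: the actual implementation also normalizes scores across nodes and applies size thresholds, and the `nodeInfo` type here is hypothetical.

```go
package main

import "fmt"

// nodeInfo is a simplified stand-in for the scheduler's per-node
// state: image name -> size for images already present on the node,
// as reported in NodeStatus.Images.
type nodeInfo struct {
	images map[string]int64
}

// imageLocalityScore favors nodes that already hold (larger) images
// requested by the pod, because the kubelet can skip an expensive
// pull. Larger cached images contribute more to the score.
func imageLocalityScore(podImages []string, node nodeInfo) int64 {
	var total int64
	for _, img := range podImages {
		if size, ok := node.images[img]; ok {
			total += size
		}
	}
	return total
}

func main() {
	node := nodeInfo{images: map[string]int64{"mysql:latest": 400 << 20}}
	fmt.Println(imageLocalityScore([]string{"mysql:latest"}, node) > 0)
}
```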

@bgrant0607
Member

@deads2k Security-wise, I don't see the difference between starting a set of pods spread across all the nodes (and there are many ways to achieve that) vs. choosing the specific node(s) that contain a particular image.

@deads2k
Contributor

deads2k commented Feb 19, 2016

@deads2k Security-wise, I don't see the difference between starting a set of pods spread across all the nodes (and there are many ways to achieve that) vs. choosing the specific node(s) that contain a particular image.

I don't see a reason to help user do it, but I will concede that I can get a similar effect by scaling an RC up and down over and over again. Other than an "always pull policy" (in which case this change does nothing), is anyone looking at a way to secure access to images already on the node?

@bgrant0607
Member

For the scheduling use case, I think a one-way hash of the image names would actually be good enough.
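A minimal sketch of what such obfuscation could look like, purely illustrative; no hashed-name field exists in the API, and `hashImageName` is a hypothetical helper:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashImageName returns a one-way hash of an image name, so a node
// could advertise image presence for scheduling decisions without
// exposing the name itself.
func hashImageName(name string) string {
	sum := sha256.Sum256([]byte(name))
	return hex.EncodeToString(sum[:])
}

func main() {
	// The scheduler would compare hashes of the pod's image names
	// against the hashes reported by each node.
	fmt.Println(len(hashImageName("mysql:latest")))
}
```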

@bgrant0607
Member

@deads2k Re. access to images on nodes: not AFAIK. I'll file an issue.

@bgrant0607
Member

@davidopp @resouer Would obfuscating image names (e.g., using a one-way hash) cause any problems for ImageLocalityPriority?

@vishh
Contributor Author

vishh commented Feb 19, 2016

FYI: the image security issue is currently solved using an admission controller that overrides the image pull policy to be PullAlways.
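Kubernetes ships this behavior as the AlwaysPullImages admission plugin; its core effect can be sketched as follows, with simplified stand-in types rather than the real API machinery:

```go
package main

import "fmt"

// PullPolicy mirrors the API's image pull policy values.
type PullPolicy string

const PullAlways PullPolicy = "Always"

// Container is a simplified stand-in for the API container type.
type Container struct {
	Image           string
	ImagePullPolicy PullPolicy
}

// forcePullAlways sketches what the admission controller does on pod
// admission: rewrite every container's pull policy to Always, so a pod
// scheduled onto a node that already has the image must still prove
// pull access to the registry before running.
func forcePullAlways(containers []Container) {
	for i := range containers {
		containers[i].ImagePullPolicy = PullAlways
	}
}

func main() {
	cs := []Container{{Image: "mysql:latest", ImagePullPolicy: "IfNotPresent"}}
	forcePullAlways(cs)
	fmt.Println(cs[0].ImagePullPolicy)
}
```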

@smarterclayton
Contributor

A one-way hash of image names is actually going to result in inefficient scheduler behavior in a lot of common cases where the same image is present under multiple image repository tags.

Someone who pulls "mysql:latest" and "mysql:5.8" will in all likelihood have the same underlying image digest (mysql@sha256:....). The hash of the name obscures the commonality of that image. For Docker-based images, if the name on the node is a digest, we should return the digest as the hash rather than further hashing it. Anyone using digests directly (like OpenShift, or some of the other automated systems that use digests) is likely not going to see much benefit from the scheduler work if we only hash the naive name.

Docker 1.10 is much more strongly organized around using digest internally - fairly soon we should have that as the actual image ID on disk.
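The digest-preference idea above can be sketched like this; the `imageKey` helper is hypothetical, not part of any Kubernetes API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// imageKey returns a scheduling key for an image name. If the name is
// already a digest reference (repo@sha256:...), use the digest itself
// so that different tags of the same underlying image collapse to one
// key; otherwise fall back to a one-way hash of the name.
func imageKey(name string) string {
	if i := strings.Index(name, "@sha256:"); i >= 0 {
		return name[i+1:] // "sha256:<digest>"
	}
	sum := sha256.Sum256([]byte(name))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Digest references keep their digest; plain tags are hashed
	// opaquely, so commonality between tags is lost.
	fmt.Println(imageKey("mysql@sha256:abc123"))
	fmt.Println(imageKey("mysql:latest") == imageKey("mysql:5.7"))
}
```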

@resouer
Contributor

resouer commented Feb 22, 2016

One way hash of image names is actually going to result in inefficient scheduler behavior in a lot of common cases where the same image is present as multiple image repository tags

+1 for this.

@bgrant0607
Member

I like the digest idea.

@bgrant0607
Member

One challenge with that is that the scheduler doesn't have the digest.

@smarterclayton
Contributor

The scheduler may have the digest - certainly we do in almost all pods (OpenShift transforms tags into digests when the image starts being tracked). It just depends on having a lot of good integration with a registry and the changing tags (to trigger deterministic deployments).

The scheduler would at minimum have to parse the docker image to make that determination - but it's a fairly low-cost opportunistic check.

