Rename fields in ContainerImage object #21182
Conversation
Labelling this PR as size/S
GCE e2e test build/test passed for commit 012ec0f42875df068e29cc6267543c34ffa863b4.
Force-pushed from 012ec0f to d69d115 (Compare)
Labelling this PR as size/M
GCE e2e build/test failed for commit d69d1152b81e1feb2c9e8801bcde0ad179a8d15f.
GCE e2e test build/test passed for commit d69d1152b81e1feb2c9e8801bcde0ad179a8d15f.
LGTM
PR needs rebase
…tus. Signed-off-by: Vishnu kannan <vishnuk@google.com>
Force-pushed from d69d115 to 2623fdd (Compare)
PR changed after LGTM, removing LGTM.
@bgrant0607: Need another LGTM due to a rebase.
GCE e2e test build/test passed for commit 2623fdd.
LGTM
@k8s-bot test this
Tests are more than 48 hours old. Re-running tests.
GCE e2e test build/test passed for commit 2623fdd.
Is this the name that matches the container image name? If we expect someone to have access to the node API object, but not access to the bound pods for the node, then yes... this exposes a lot more information. I'd also be concerned about the size of the node records listing every image they contain, especially as frequently as they are updated.
We already restrict access to node resources pretty tightly (cluster-admin sort of roles), so I'm not too concerned about adding this information since it's already available on the node itself.
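For context, the field rename in this PR leaves the per-node image list looking roughly like the sketch below. The field names reflect my reading of the resulting v1 API and may not match every revision of the PR:

```go
// Sketch only: approximate shape of the ContainerImage entries a node
// reports in its status; names here are an assumption, not the exact
// contents of this PR at every revision.
package v1

type ContainerImage struct {
	// Names holds the repo tags and digests the runtime reports for the
	// image, e.g. "mysql:latest" or "mysql@sha256:...".
	Names []string `json:"names"`
	// SizeBytes is the size of the image as reported by the runtime.
	SizeBytes int64 `json:"sizeBytes,omitempty"`
}

type NodeStatus struct {
	// Images lists the container images present on the node.
	Images []ContainerImage `json:"images,omitempty"`
	// other status fields elided
}
```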
What is the utility in having this information? I'd be more worried about someone trying to use this for scheduling decisions, which is a big security hole if you aren't verifying pull access to local images. This would allow someone to try to get access to a restricted image: a "smart" scheduler would look for a node that already has that image, schedule the pod there, and bypass the authorization check, thus making a really bad choice. Hopefully that's either a) not why you're doing this, b) a problem you've considered and solved, or c) you never have the concept of a private image (unlikely).
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit 2623fdd.
Automatic merge from submit-queue
Auto commit by PR queue bot
@deads2k Yes, the information was intended to be used for scheduling (ImageLocalityPriority). Filed #21560 to discuss this type of policy/convention.
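To make the scheduling use concrete, an image-locality priority could score nodes by how many of the pod's images they already hold. A minimal sketch of that idea, with simplified types and an assumed ":latest" default-tag convention, not the real scheduler code:

```go
package main

import (
	"fmt"
	"strings"
)

// nodeImages is the set of image names a node reports in its status.
type nodeImages map[string]bool

// imageLocalityScore ranks a node higher the more of the pod's images it
// already holds. Purely illustrative of the image-locality idea.
func imageLocalityScore(podImages []string, node nodeImages) int {
	score := 0
	for _, img := range podImages {
		if node[normalize(img)] {
			score++
		}
	}
	return score
}

// normalize assumes untagged images are reported with ":latest"; that
// convention is an assumption made for this sketch.
func normalize(image string) string {
	if !strings.Contains(image, ":") && !strings.Contains(image, "@") {
		return image + ":latest"
	}
	return image
}

func main() {
	node := nodeImages{"mysql:latest": true}
	fmt.Println(imageLocalityScore([]string{"mysql", "redis:3.0"}, node)) // 1
}
```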
@deads2k Security-wise, I don't see the difference between starting a set of pods spread across all the nodes (and there are many ways to achieve that) vs. choosing the specific node(s) that contain a particular image.
I don't see a reason to help users do it, but I will concede that I can get a similar effect by scaling an RC up and down over and over again. Other than an "always pull" policy (in which case this change does nothing), is anyone looking at a way to secure access to images already on the node?
For the scheduling use case, I think a one-way hash of the image names would actually be good enough.
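A minimal sketch of what a one-way hash of the image names could look like, assuming a plain SHA-256 over the name string (the exact hash is not specified in the discussion):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashImageName returns an opaque, one-way hash of an image name, so a node
// could advertise which images it holds without exposing the names.
func hashImageName(name string) string {
	sum := sha256.Sum256([]byte(name))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(hashImageName("mysql:latest"))
}
```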
@deads2k Re. access to images on nodes: not AFAIK. I'll file an issue.
FYI, the image security issue is currently solved using an admission controller that overrides the image pull policy to be
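This sounds like the idea behind the upstream AlwaysPullImages admission controller (an assumption on my part, since the comment is cut off), which forces every container to re-pull so a cached image is never reused without the registry re-checking pull access. A trimmed-down sketch of that override, not the actual plugin code:

```go
package main

import "fmt"

// Container and PodSpec are trimmed stand-ins for the real API types.
type Container struct {
	Image           string
	ImagePullPolicy string
}

type PodSpec struct {
	Containers []Container
}

// forceAlwaysPull rewrites every container to pull its image on each start,
// so access to the image is verified against the registry even when the
// image is already cached on the node.
func forceAlwaysPull(spec *PodSpec) {
	for i := range spec.Containers {
		spec.Containers[i].ImagePullPolicy = "Always"
	}
}

func main() {
	spec := &PodSpec{Containers: []Container{{Image: "mysql:latest"}}}
	forceAlwaysPull(spec)
	fmt.Println(spec.Containers[0].ImagePullPolicy) // Always
}
```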
A one-way hash of image names is actually going to result in inefficient scheduler behavior in a lot of common cases where the same image is present under multiple repository tags. Someone who pulls "mysql:latest" and "mysql:5.8" will in all likelihood have the same underlying image digest (mysql@sha256:....). The hash of the name obscures the commonality of that image.

For Docker-based images, if the name on the node is a digest, we should return the digest as the hash rather than further hashing it. Anyone using digests directly (like OpenShift, or some of the other automated systems that use digests) is likely not going to see much benefit from the scheduler work if we hash only the naive name. Docker 1.10 is much more strongly organized around using digests internally - fairly soon we should have that as the actual image ID on disk.
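To illustrate the digest point, a sketch of a key function that keeps the content digest when the reported name is already digest-qualified and only falls back to an opaque hash otherwise; names and the fallback scheme here are illustrative assumptions:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// imageKey prefers the content digest as the identity of an image: names
// that are already digest-qualified (repo@sha256:...) collapse to the same
// key regardless of tag, and only plain names fall back to an opaque hash.
func imageKey(name string) string {
	if i := strings.Index(name, "@sha256:"); i >= 0 {
		return name[i+1:] // "sha256:<digest>"
	}
	sum := sha256.Sum256([]byte(name))
	return "namehash:" + hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(imageKey("mysql@sha256:0123abcd")) // digest preserved
	fmt.Println(imageKey("mysql:latest"))          // opaque hash
}
```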
+1 for this.
I like the digest idea.
One challenge with that is that the scheduler doesn't have the digest.
The scheduler may have the digest - certainly we do in almost all pods. The scheduler would at minimum have to parse the docker image to make that
Reference: #18248