
downward api volume plugin #5093

Merged
merged 1 commit into kubernetes:master on Sep 2, 2015

Conversation

sdminonne
Contributor

This PR implements #560, especially what is discussed here. Dumping info into files was discussed in #386 as well.

Needs rebasing after #8788 is merged

@erictune
Member

erictune commented Mar 5, 2015

@thockin loves sidecars.

@pmorie
Member

pmorie commented Mar 5, 2015

@erictune This PR is hand-crafted to please @thockin

@sdminonne changed the title from "WIP for https://github.com/GoogleCloudPlatform/kubernetes/issues/560" to "WIP: pod info sidecar" on Mar 5, 2015
}

// generateTmpFile writes input parameter 'values' in a temporary file. File descriptor
// is returned. In case error is returned file descriptor is not meaningfull
Review comment (Member):

Nit: typo: meaningful

@thockin
Member

thockin commented Mar 7, 2015

I do like this! Is this something we think we want to pursue? It seems viable to me...

@pmorie
Member

pmorie commented Mar 9, 2015

@thockin If it does the job, I think we can pursue it. One thing that has come up in discussion is concern about the load on the api server as you scale up usage of this sidecar in pods. There's been some concern over making the pod have to use a sidecar container for this in environments where every pod needs to consume this info. @sdminonne @EricMountain-1A do you want to add to that at all?

All that said, this seems like what we will want to go with if we want to move toward having all user concerns that use resources happen in a user container. Either way you will have the same amount of disk IO, and if you make the platform provide some baked-in, first-class support for this, the complexity of the API required to express it starts to approach the API required to fully express a container (as you've mentioned before). I believe the only thing that would change is that you'd have one pod watch per node (once #4777 is implemented) instead of an additional watch per pod.

Any thoughts, @smarterclayton?

@sdminonne
Contributor Author

I've had a good discussion with @smarterclayton, who clarified some points for me about the re-labeling process (more specifically, the fact that a pod is going to be restarted automatically) and the expected load on the master. The latter was my major concern. As long as the load on the master is under control, I'm OK with this mechanism (which I'll finalize anyway) and we may use it.

@thockin
Member

thockin commented Mar 10, 2015

Assuming this uses watch properly, how would the load on the API server be any different if every pod watches or if the kubelet watches on behalf of every pod? Maybe the kubelet could do a single watch for many pods?

If that is an issue, I bet we could do something like watch the kubelet's API for your own pod, which the kubelet would proxy into a group watch on the API server.


@smarterclayton
Contributor

Since the kubelet has the labels, and already watches, why can't the pod talk to the kubelet to get this, maybe via the metadata service thing that Rocket has proposed.


@thockin
Member

thockin commented Mar 10, 2015

I think that is more or less what I said :) If you squint.


@smarterclayton
Contributor

You've proven I actually don't read what anyone else writes. /hangs-head


@bgrant0607
Member

How is the name of the pod supposed to get set?

This seems pretty heavyweight and opaque to me.

@pmorie
Member

pmorie commented Mar 10, 2015

@thockin @smarterclayton Yep, I had had pod querying kubelet and a single watch from kubelet in mind.

@sdminonne Any opinions on that?

@sdminonne
Contributor Author

@thockin, @smarterclayton @pmorie sorry for the delay on this.
I would be OK with watching the kubelet rather than the api-server, but to be honest I cannot answer @bgrant0607's question.

Clues?

@smarterclayton
Contributor

Kubelet already has watch


@thockin
Member

thockin commented Mar 16, 2015

Kubelet's API should be a reflection of the API server for things that the kubelet knows about. I'm sure it's not there yet.


@sdminonne
Contributor Author

@smarterclayton @thockin the problem I'm facing is how to get the pod name. For the kubelet I can watch or poll.

@thockin
Member

thockin commented Mar 16, 2015

Yeah, we've been punting on exposing that information - Clayton doesn't like the obvious approach (just slap on some env vars) and nobody has properly proposed and implemented the alternative (pre-process some env var values to be "special expansions").

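For context, the "special expansion" idea being debated here is roughly what later shipped as downward API environment variables. A minimal, hedged sketch of that shape (not part of this PR; the valueFrom/fieldRef syntax shown is the later API and is used only for illustration):

```yaml
# Illustration only: exposing the pod name to a container through an env var.
# This valueFrom/fieldRef form is the downward API env syntax Kubernetes
# adopted later; it is not what this PR implements.
apiVersion: v1
kind: Pod
metadata:
  name: env-expansion-example
spec:
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo $POD_NAME && sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
```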

@smarterclayton
Contributor

I'll make it a priority to iterate on the current pull request proposal. The upstream change to Docker to resolve env automatically was the latest wrinkle.


}
}
if strings.HasPrefix(items[0], "..") && len(items[0]) > 2 {
allErrs = append(allErrs, errs.NewFieldForbidden("path", "must not start with \"..\"."))
Review comment (Member):

this should be Invalid

Reply (Contributor Author):

ok, done

@thockin
Member

thockin commented Aug 31, 2015

Sorry, a bit of imprecision on the last go-around.

@k8s-bot

k8s-bot commented Aug 31, 2015

GCE e2e build/test failed for commit b6255d6503c8e3c7e5f8f3a858bf41014908eb62.

@sdminonne
Contributor Author

@thockin many apologies for the misunderstanding.
I've already squashed, but I've only modified validation(_test)?.go.

@k8s-bot

k8s-bot commented Sep 1, 2015

GCE e2e build/test passed for commit fc738559c9560c5e4f11f06ae9863c63227fd486.


## Example

This is an example of a pod that consumes its labels and annotations via the downward API volume, labels and annotations ared dumped rispectively in `/etc/podlabels` and in `/etc/annotations` files:
Review comment (Member):

are dumped in `/etc/podlabels` and `/etc/annotations`, respectively:

Reply (Contributor Author):

@pmorie thanks

Reply (Contributor Author):

Fixed and squashed
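The pod manifest that the doc excerpt above introduces is not included in this excerpt. A minimal sketch of such a pod, assuming the downwardAPI volume syntax added by this PR (the metadata values and image are placeholders), could look like:

```yaml
# Sketch: a pod that consumes its own labels and annotations through a
# downwardAPI volume; they end up in /etc/podlabels and /etc/annotations.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-volume-example
  labels:
    zone: us-east-coast
    cluster: test-cluster1
  annotations:
    build: two
spec:
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podlabels /etc/annotations"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "podlabels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
```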

@k8s-bot

k8s-bot commented Sep 1, 2015

GCE e2e build/test failed for commit 0eae3e364addf8db810b94ad86373b1189d0ee8f.

@sdminonne
Contributor Author

the failure is really strange, I just modified the docs... :)

@k8s-bot

k8s-bot commented Sep 1, 2015

GCE e2e build/test passed for commit f4dc065.

@thockin added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Sep 2, 2015
@thockin
Member

thockin commented Sep 2, 2015

L G T M

@sdminonne
Contributor Author

@thockin thanks :)

@pmorie
Member

pmorie commented Sep 2, 2015

@sdminonne o/\o ✊

@sdminonne
Contributor Author

@pmorie thanks to you (obviously) too

@brendandburns
Contributor

@sdminonne Sorry this has to be rebased...

I promise we will then immediately merge...

@brendandburns
Contributor

nm, it's generated code errors, I'll do it manually.

@brendandburns merged commit f4dc065 into kubernetes:master on Sep 2, 2015
@sdminonne
Contributor Author

@brendandburns many thanks!

Labels
lgtm: "Looks good to me", indicates that a PR is ready to be merged.
release-note: Denotes a PR that will be considered when it comes time to generate release notes.
size/XXL: Denotes a PR that changes 1000+ lines, ignoring generated files.