
Proposal: Template object for in-container configuration files #30716

Closed
jberkus opened this issue Aug 16, 2016 · 110 comments
Labels
area/app-lifecycle area/configmap-api area/declarative-configuration priority/backlog Higher priority than priority/awaiting-more-evidence. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/service-catalog Categorizes an issue or PR as relevant to SIG Service Catalog.

Comments

@jberkus

jberkus commented Aug 16, 2016

Summary

I would like us to get rid of custom entrypoint.sh scripts by supporting templating of in-container files.

The Problem

Despite current ConfigMap features, it remains necessary for many or most container images under Kubernetes to include "entrypoint.sh" scripts and/or customizations to the containerized applications which are particular to the Kubernetes environment. This results in forking of upstream images and limits portability of images between environments. It also results in some very hackish startup scripts (see Kelsey's "init scripts for hipsters").

Based on feedback through numerous issues and PRs, the main shortcoming in ConfigMap which would allow elimination of entrypoint scripts is the inability to include elements into a configuration file which are only available at deployment time. For example, some services, such as clustered databases or load balancers, need to know the Pod IP or the pod name as part of their configuration.

Suggested Solution: ConfigTemplate

Several proposals have been made to enhance ConfigMaps to embrace templating functionality, and they have met with significant opposition.

My proposal is that we create a new object ... named here a "ConfigTemplate" as placeholder until someone suggests a better one. This Template object would produce a file inside the deployed container, and could consume the Downward API, Secrets, and ConfigMaps in order to populate a template. This would be largely the same as PR #30502, but with a new object type instead of overloading ConfigMaps.

At a sketch, a ConfigTemplate for an ini-style config file could be inlined, and might look like this:

apiVersion: extensions/v1beta1
kind: ConfigTemplate
metadata:
  name: postgresNodeConf
spec:
  ConfigMapRefs:
     - name: patroniMap
     - name: productionMap
  Template: 
     data:
         node_ip: ${status.podIP}
         dcs: ${patroniMap.dcsconn}
         cluster_name: ${productionMap.cluster}
         memory: 128MB

Note: what syntax we use for the exact substitutions is unimportant; let's just use whatever is easiest to support/code.

For inline versions, it is likely that only key: $value substitutions would be supported. For more complex behavior, including differently-formatted configuration files, we'd support file-based substitution, à la:

apiVersion: extensions/v1beta1
kind: ConfigTemplate
metadata:
  name: postgresNodeConf
spec:
  ConfigMapRefs:
     - name: patroniMap
     - name: productionMap
  Template: 
      file: templates/pg.conf.prod

The file in question would be some kind of text file, with substitution tags in whatever syntax we decide on. Either way, the ConfigTemplate would then be used inside Pod definitions like this:

apiVersion: v1
kind: Pod
metadata:
  name: pgProdPod
spec:
  containers:
    - name: postgres
      image: jberkus/patroni
      templates:
         - name: postgresNodeConf
           path: /etc/postgres/pgnode.conf

Why not volumes?

You'll notice above that I'm not taking a regular "volume" approach to this. That's because in many cases ... including some cases I have personally ... the rendered config file needs to share directories with files which come from the container image. Doing this in a volume is complicated, and will lead (again) to custom entrypoint.sh scripts.

However, if there are strong reasons to handle this as a volume, that could be worked around.

Templating Engines and Sidecars

There is some discussion about how templating would be handled and what templating engine we'd use. This includes a suggestion by @thockin that this be entirely sidecar functionality.

I will argue that providing config file templates which allow containers to start inside Kubernetes pods without modification is fairly central functionality to what Kubernetes does, and as such the general Template object should be a core Kubernetes object. However, I can certainly see the value of allowing the user to plug in their choice of rendering library for the actual template rendering for file-based templates; if nothing else, it would forestall a lot of arguments about syntax.

Even in that case, however, I would like us to provide a built-in very simple template renderer, one which is capable of just swapping in upstream facts for some specific variable syntax, such as ${fact} or {{fact}} or similar, and nothing else. Such a built-in renderer would satisfy 90% of users, and not push the user into installing extra dependencies.

Alternatives

I cannot personally think of any alternatives which will lead to the elimination of the majority of entrypoint.sh scripts. Suggestions welcome.

References

@ncdc
Member

ncdc commented Aug 17, 2016

cc @smarterclayton @pmorie @bparees @eparis @derekwaynecarr @kubernetes/rh-cluster-infra

@erictune
Member

How far can you get using an init-container?

For example: an init container consumes data from a ConfigMap, substitutes values, and writes the resulting complete config file to a path in an EmptyDir. The main container reads the config file generated by the init container.
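As a hedged sketch of that pattern, something like the following might work (all names here are hypothetical, and the init container uses envsubst as a stand-in for whatever renderer you prefer; at the time of writing init containers are declared via a beta annotation, shown here in spec form for readability):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pg-pod
spec:
  initContainers:
  - name: render-config
    image: example/renderer        # any image that provides envsubst
    env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP  # downward API value fed into the template
    command: ["sh", "-c", "envsubst < /in/pg.conf.tmpl > /out/pg.conf"]
    volumeMounts:
    - name: tmpl
      mountPath: /in
    - name: rendered
      mountPath: /out
  containers:
  - name: postgres
    image: jberkus/patroni
    volumeMounts:
    - name: rendered
      mountPath: /etc/postgres     # main container sees only the rendered file
  volumes:
  - name: tmpl
    configMap:
      name: pg-conf-template       # holds the raw template text
  - name: rendered
    emptyDir: {}
```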

@erictune erictune added sig/apps Categorizes an issue or PR as relevant to SIG Apps. and removed area/kubectl labels Aug 17, 2016
@yoojinl

yoojinl commented Aug 17, 2016

@erictune I've tried to address the init-container suggestion in a separate thread; it works for initial configuration. In the future, if we want to allow proper config-change handling, it would be better to have it natively supported by the Kubernetes ecosystem.

@jberkus
Author

jberkus commented Aug 17, 2016

@erictune two comments on init containers:

  1. we'd need to automate that process via some kind of Kubernetes object; otherwise it would just be a case of moving the burden onto container image authors without decreasing it (in fact, we'd be increasing it). If Template works via an init container under the hood, that's fine with me.
  2. See my note above about the issues with volumes and config files. In many cases, you want to only update 1 config file in a directory which may have several of them, such as for Postgres or Apache HTTPD or network-config. Currently AFAIK there's no way to "share" a volume directory between files supplied by the image and files from the volume. That would put users in the position of needing to re-create all of those config files, even if most of them are identical to the ones in the upstream image. Which would then lead to out-of-sync issues if fixes are added to the config files in the upstream image.
    Possibly merging the directories could be handled by the init container? That is, it would pull files from the template, and the rest from the upstream image? That would solve that issue, at least.

@smarterclayton
Contributor

Image volumes have been requested - in the future that would address #2 as
well. In the short term we've said use an init container to handle that.


@smarterclayton
Contributor

Devil's advocate - why can't you provide a custom Command to the container and apply the transformation in that container? The entrypoint would then be in the pod definition, not the image. While I'm sympathetic to templated config maps, you can already templatize in Command and Args (IIRC), so this is already possible.

@smarterclayton
Contributor

smarterclayton commented Aug 17, 2016

Would be:

containers:
- name: ...
  env:
  - name: YOU
    valueFrom:
       ...
  command: ["/bin/sh"]
  args:
  - "-c"
  - |
    cat <<EOF > /etc/postgresql
      myconfig: $(YOU)
      value: $(CAN) 
      other: $(TEMPLATIZE)
    EOF

If config map was a valid target for env valueFrom (which it should be, because of secrets), then this should allow you to at least craft templates without changing the image.
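As a hedged sketch, that env stanza might look something like this, mirroring the existing secretKeyRef shape (the name and key here are hypothetical):

```yaml
env:
- name: YOU
  valueFrom:
    configMapKeyRef:     # analogous to secretKeyRef; not yet available as of this discussion
      name: my-config    # the ConfigMap to read from
      key: you           # the key within that ConfigMap
```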

@DirectXMan12
Contributor

That strikes me as unsatisfying for a number of reasons:

First, it makes managing the configuration harder -- with a ConfigTemplate, you can just say kubectl create -f template-object.yaml, and mount as a volume (and edit the template when necessary). With this approach, you have to say "go into your pod, convert the entrypoint back into a single string, then put your environment file in a HEREDOC with cat, and then append the existing entrypoint, and hope you didn't screw anything up in there with quoting, because your entrypoint is now a big string blob instead of an array of arguments".

Secondly, it makes sharing the config harder -- you can't just share the config template as a standalone object; you have to share it as part of the pod spec, and then the receiver has to extract the relevant parts from the pod spec.

Also, it makes the entrypoint on an image much harder to read -- you go from just having a command to having a chain of commands that do echo and seds and awks, and then launch the actual image. That seems easy to mistype and misread.

@smarterclayton
Contributor

smarterclayton commented Aug 17, 2016

Templates of config are pretty closely coupled to pods. You can always put the template in the config and combine them in the pod.

The entrypoint is under the control of the pod author. If the pod author doesn't control the image, they can control the entrypoint via the pod definition. I'm fairly sure that we can provide an idiomatic representation of config templating in the pod template, which does allow references to other resources. I am very skeptical of configmaps that point to other configmaps, so it's a much harder argument to say that we want to support references between a config map and another resource AND support references between pods and config maps.

@smarterclayton
Contributor

smarterclayton commented Aug 17, 2016

Generally our answer to this has been "init containers". Given that that gives you access to a Turing-complete space, and everything we provide in a config map can be transformed, you can use any templating language you want, combine any config maps you want, and control it all, without any changes to Kubernetes. The alternative - implementing ConfigTemplate - requires us to bless a templating language and implement additional code. Why is ~200 chars of init-container definition in the pod worse than ~200 chars of a ConfigTemplate definition?

@smarterclayton
Contributor

As a note - my point here is not to dismiss ConfigTemplate out of hand, it is merely to ensure that the mechanisms that we have that should be able to solve this actually solve this in a clear and demonstrable way. If they don't, they absolutely must be fixed.

@jberkus
Author

jberkus commented Aug 17, 2016

@smarterclayton can you give an example of an init container implementation, using current Kubernetes? Because my experience is that they're more complicated than you describe. But maybe I'm doing them badly.

@thockin
Member

thockin commented Aug 18, 2016

A few thoughts.

First, a new object on its own does nothing for you. You still need a new
volume driver to project it into the FS. given that, I don't think you
actually need an API object at all. What you have described looks like a
volume.

Second, there are tricks that can pull files from multiple volumes (secret,
configmap, downwardapi) and project into a single directory, with the
caveat that auto-updates don't work (bind mount semantics bite us). If
this is really a stumbling block, we can consider a way to project from
multiple sources into a single volume. @pmorie and I already sketched it
out.

Third, there are ways to inject a new entrypoint.sh into a container. You
can make a configmap with a key named entrypoint.sh and the value being a
shell script. Mount that on /entry, and make your container's command be
/entry/entrypoint.sh. You don't need to customize upstream images for
this. If this is not satisfying, I would consider a "literal" volume
driver that just copied a string into a file.
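That third trick might be sketched as follows (all names are hypothetical; the entrypoint's render step and the binary path are placeholders, and defaultMode is assumed to be available for marking the mounted script executable):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: entry
data:
  entrypoint.sh: |
    #!/bin/sh
    # append a config line from the downward API, then exec the image's real binary
    echo "node_ip = ${NODE_IP}" >> /etc/postgres/pgnode.conf
    exec /usr/local/bin/start-app
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: upstream/app              # unmodified upstream image
    command: ["/entry/entrypoint.sh"]
    env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    volumeMounts:
    - name: entry
      mountPath: /entry
  volumes:
  - name: entry
    configMap:
      name: entry
      defaultMode: 0755              # make the mounted script executable
```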

Fourth, I didn't grok why a sidecar doesn't work? Declare an emptydir
called /config. Mount it on both the sidecar and the main app
container. The sidecar also mounts N configmaps and secrets and downward
API. The sidecar periodically evaluates a template (maybe a configmap or a
flag) and fills in variables from the other configmaps and secrets. It
writes the results to /config/.config.tmp, compares /config/config.file
with that tempfile and renames the tmp to the real name if needed. The app
container just watches inotify for changes to the /config/config.file.
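The sidecar pattern described above might be sketched like this (the renderer image and its render-config command are hypothetical stand-ins for whatever template engine you choose):

```yaml
containers:
- name: app
  image: upstream/app               # unmodified upstream image; watches /config/config.file
  volumeMounts:
  - name: config
    mountPath: /config
- name: renderer                    # the sidecar
  image: example/renderer
  command:
  - sh
  - -c
  - |
    # periodically re-render; atomically replace the file only when it changed
    while true; do
      render-config /sources > /config/.config.tmp
      cmp -s /config/.config.tmp /config/config.file || \
        mv /config/.config.tmp /config/config.file
      sleep 30
    done
  volumeMounts:
  - name: config
    mountPath: /config
  - name: sources
    mountPath: /sources
volumes:
- name: config
  emptyDir: {}
- name: sources
  configMap:
    name: app-config-sources        # could equally be secrets or downward API volumes
```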

Lastly, nothing stops you from using third-party resource and flex volume
to implement this yourself and prove us wrong. If it turns out to be
overwhelmingly popular we can either adopt it as standard or just endorse
your cleverness :)


@jberkus
Author

jberkus commented Aug 18, 2016

@thockin I'm not looking to make this possible, I'm looking to make it simple. It's possible now, but a large number of our users can't figure out how to do it. And this is a spec issue, so the purpose of this is "let's determine the spec for this".

Projecting configuration files into running containers is something our users need to do for something like 3/4 of running containers. Right now the standard approach for that is hackish entrypoint.sh scripts, which is a terrible state of the art, and causes a lot of folks to legitimately question whether they made a mistake moving away from CMS and towards orchestration.

Something which 80% of our users need to do 75% of the time shouldn't be complicated or require advanced Kubernetes knowledge. It should be simple, obvious, and there should be a recommended mainstream way to do it. Whether that's templates or config volumes or sidecars, I don't really care, provided that it's something which a new Kubernetes user can learn in 20 minutes.

@thockin
Member

thockin commented Aug 18, 2016

No disagreement, but we should first explore refinements on existing
behaviors before piling on new abstractions and API kinds.


@jberkus
Author

jberkus commented Aug 18, 2016

I'm OK with that. Would you be willing to sketch out a doc on how you'd do it, now? Because you seem to find existing approaches easier than I do. And maybe I'm doing them in an unnecessarily complicated way. If we can reduce this to a doc item, that would be a win for everyone.

@thockin
Member

thockin commented Aug 18, 2016

I have zero bandwidth for the next few weeks. We need community folks
like yourself to come up to speed on alternatives and help make educated
designs. As much as I like having an opinion on everything, there aren't
enough hours in the day. The way this community grows and thrives is when
new people step into the light and own problems :)

I appreciate your vociferous advocacy of the user in this issue, so far...


@pwittrock pwittrock added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Aug 18, 2016
@zefciu
Contributor

zefciu commented Aug 18, 2016

I have created a simple PoC for configmap templating here: #30502

My solution, however, uses downwardAPI volumes, and the config files are simply another possible item type that can be used there. Your idea of bare files that can be placed anywhere seems superior; I don't know, however, how the implementation would go, or what the possibilities would be for implementing improved change propagation.

@jberkus
Author

jberkus commented Aug 18, 2016

@thockin I hear you. Well, anything I do on this will need to come after CNCF day ...

@pigmej
Contributor

pigmej commented Aug 19, 2016

Definitely +1 to @jberkus's words. This is a very important topic (that's why @zefciu and @nebril made PoCs); it definitely should be easy and understandable to the end user. I'm pretty sure the assumption that you can store configs in /config is valid only in a perfect world; in reality it's not that easy.

In my opinion any solution which uses init-containers, custom init scripts, init-containers to do templating feels hacky and requires serious k8s knowledge or pollutes pod/container definition with logic that shouldn't be there.

@zefciu
Contributor

zefciu commented Aug 19, 2016

About init scripts: besides "feeling hacky", the problem is that they lack DRY-ness. You need to specify the data twice: first in the pod definition, and second in the script. So your deployments become harder to maintain.

My proposal

Let's add an ability to add config file templates. These templates will be able to consume data from the following sources:

  • Configmaps
  • Secrets
  • Pods

The templates will use syntax based on https://golang.org/pkg/text/template/ package (I believe it is powerful enough for our needs). The template can be:

  • Specified as inline data in the definition file
  • In an external file, referred to by a filename relative to the location of the definition file.

The files can be inserted into the pod in two ways:

  • Mounted separately in any place (at least Docker supports single file mounts)
  • Added to DownwardAPI volumes

The "DownwardAPI" volume could be renamed if it is confusing to newcomers. Generally, however, the original DownwardAPI and ConfigMap volumes should become deprecated, as they are much less powerful than templating.
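Under these assumptions, an inline template using text/template syntax might look something like the following sketch (every field name here is a placeholder, not an agreed API; the data model exposed to the template is likewise hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: ConfigTemplate               # placeholder kind, per the earlier sketch
metadata:
  name: pg-node-conf
spec:
  configMapRefs:
  - name: patroniMap
  secretRefs:
  - name: pgSecrets
  template:
    inline: |
      node_ip = {{ .Pod.Status.PodIP }}
      dcs = {{ index .ConfigMaps.patroniMap "dcsconn" }}
      password = {{ index .Secrets.pgSecrets "password" }}
```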

What we need

The current state of PoC implements a lot for this proposal. What is still missing is:

  • Single file mounts
  • Templates in separate files
  • Secret data consumption
  • Consuming several configmaps directly (you can use configmap links however)

Questions

  • Are there any objects beside Pods, Secrets and ConfigMaps we want to be source of data?
  • Do we need any tools in our templates beside simple interpolation of values and text/template tools?

I would like to ask @nebril and @nhlfr to pay attention to this thread as well. For the next two weeks I will be logging in irregularly, but please comment.

@zefciu
Contributor

zefciu commented Aug 19, 2016

Oh, also @RustyRobot - please comment if these requirements are enough for us.

@yoojinl

yoojinl commented Aug 19, 2016

@zefciu Looks good. Here is one of the examples we would be happy to see covered with templates.

@jberkus
Author

jberkus commented Aug 19, 2016

@zefciu +1

BTW, one of the examples I'm dealing with is PostgreSQL, where the config files need to go into a specific directory, which also contains files we want to inherit from the upstream image. Postgres also has specific permissions requirements for those files. So I think that's a good test case for any scheme; if it can handle Postgres, it can probably cover any service.

@andrewstuart
Contributor

It does seem to be at least tangentially related, though I think the topic of "what happens if the config map changes" is still orthogonal to "can templated config maps change if upstream data sources change?" The latter seems very relevant.

IMO the inline config map vs external config map would also be irrelevant given a good templating solution. If the templating solution is complete, then either option should reasonably accept template placeholders.

Also to address another of @kelseyhightower 's concerns:

The fact that some people want to extract configuration and templates from container images and merge them with user defined configuration and templates will be the source of pain to come.

I think we'd all agree with that (if I'm understanding you correctly). And I think the intent is still that in any case, given k8s objects with template expressions, the final product is k8s objects ready for consumption, e.g. mounting into the volume as a config file, resulting in a container image that does not need to be tailored beforehand to k8s.

@smarterclayton
Contributor

I think it's important to note that we already have solutions in the
ecosystem for complex multi resource templating that we have previously
said belong above the core platform - specifically Deployment Manager (now
the server component of Helm), local templating + apply, Ansible, and
others. To Kelsey's point, we are trying not to build one of those
solutions in the core, because they belong as a properly layered component
on top. Half of the discussion is how to build those (which others are
doing) and half has to be on how we make those tools able to do less by
having good primitives.

I don't think any template language we include will make those tools easier
to use, because they will now have multiple languages to apply. The
argument I usually hear is trying to make Kube fit better into Ansible,
existing config mgmt flows, HEAT, Deployment Manager, etc. Mixing template
languages is really painful, so it doesn't seem like a clear win if we're
meeting the community where they are.

For instance, we don't want to put a template language under apply - we
want it to be above (an input to apply), so that people can lean on apply
to avoid having to do local patch merging.

As another example, iterative config application (inputs to template change
-> template generates new output -> deployment triggered) requires at least
minimal outside orchestration, especially when multiple components are
involved. We aren't eager to make that orchestration a first class concept
in kube because it's likely no one orchestrator is a suitable solution for
all use cases.

A template evaluation environment that is even partially or strongly turing
complete has to run in a pod, just by its nature. Deployment Manager does
this with expandybird - as something like what is proposed here grows more
flexible it would also have to be run in a pod. Given that the output of
that pod is a transformed local file, it's really hard to justify a
completely separate implementation that isn't an init container (at least
right now) because the init container is already isolated and under the
user's control and doesn't have to implement a separate API / transport
channel.

The short term things already discussed in this thread have a lot of
individual merit:

  • Better doc and examples around existing init container use here (and a
    really strong minimal example that tries to be as "api like" as possible)
  • Inline config maps
  • Prototyping (or working with Helm or the AppController guys) an
    orchestrator that can copy config map changes for deployments (and can by
    definition also go run templates) to show your desired flow
  • Projection of volumes into pods more effectively (@pmorie spawned an
    issue on this)
  • A better way to catalog / inject helpful sidecar patterns into containers
    (kubectl add-sidecar deployment/foo --from=mylibrary-of-sidecars)

Did I miss any?


@thockin
Member

thockin commented Sep 13, 2016

On Mon, Sep 12, 2016 at 4:13 PM, Clayton Coleman
notifications@github.com wrote:

You create config. App gets deployed. You're happy. Someone updates the
config.

I assume you mean "without bouncing the pod, and the app does not pick
it up dynamically"

A node is evacuated elsewhere in the cluster and one of your pods
picks up the new config. Everything fails. You try to figure out what the
old values of config are. You cry.

Someone laid a land mine and was sad when they stepped on it. I 100%
agree land mines are bad, but this is not the first such case.
ReplicationController and ReplicaSet have the same problems. Docker
image tags being mutable have the same problems.

Conversely, if you had defined it as an inline volume, RC and RS would
fail the same way, and Deployment would try to roll out and fail right
away.

If you don't want a time bomb, you HAVE to actuate changes right away,
regardless of whether that's an inline or not.

The sharp edges of that "surprise, your app doesn't work" are what I'm most
concerned with. We tell everyone to use configmaps, and a non-trivial
percentage of people don't reason about the interaction of config map with

Is "I updated a config map" really more confusing than "I updated my
replication controller" ? In both cases, it is part of your "app DAG"
and you updated it.

an administrator evacuating a node, and a non-trivial percentage of
applications will fail and fail hard with mismatched config. So trying to

Mismatched config is a fact of life, no matter what - whether that is
a rolling update or an accidental land mine.

find places where the path of least surprise is exactly how you expect the
platform to work, even if you don't think about it, and then you can go use
it in sophisticated and awesome ways later. If configmaps defaulted to
readonly after creation, and you had to go think about allowing them to
mutate, that would be one example of safety. Or if the configmap
automatically preserved old versions of itself that you could roll back to.

This is why we generally tell people not to update configmaps, but
create a new one, update your deployment, and do an rolling update.

The advantage of inline configmap is that it cannot be out of sync with the
cluster and can be propagated through a deployment in a trivially correct
way.

I don't buy it - you can lay almost the same traps with an inline, if
you're not careful. I'm not AGAINST an inline volume, per se, I just
don't see the value that justifies the extra complexity. I'm willing
to be swayed by "but UX", but this is, in the end, just a TINY piece of
what this bug is about.


@jberkus
Copy link
Author

jberkus commented Sep 27, 2016

@smarterclayton, all:

I'm inclined at this point to close this in favor of a proposal of "make init containers easier to use", which seems like the way to go. However, there have been a LOT of other proposals on this issue which aren't mine; is it OK to just close it nevertheless?

@pigmej
Copy link
Contributor

pigmej commented Sep 27, 2016

@jberkus I don't think init containers are the way to go; the proposal submitted by @zefciu looks nice. And I think it's worth solving this at the configmap level.

@andrewstuart
Copy link
Contributor

I wonder if resource templating as a whole would be worth discussing on the community call ( cc @sarahnovotny ) (Or, is there an appropriate SIG?). For me, there are already too many in-motion proposals and implementations (the helm tool and this templating issue spring to mind) to keep track of. Even if it's just a "yes, we are going with 23896 for the foreseeable future" acknowledgement, it'd be helpful in my mind for some sort of consensus and communication about the project's direction on this matter.

@pigmej
Copy link
Contributor

pigmej commented Sep 28, 2016

AFAIR there are (were?) some plans to discuss that in sig-apps.

@pigmej
Copy link
Contributor

pigmej commented Oct 3, 2016

@michelleN @andrewstuart @jberkus @smarterclayton @thockin could we organize a short discussion in sig-apps about that?

@thockin
Copy link
Member

thockin commented Oct 3, 2016

I don't usually attend sig-apps, please let me know if/when this is on the
agenda


@andrewstuart
Copy link
Contributor

andrewstuart commented Oct 3, 2016

@pigmej Ditto what @thockin said; also, nobody's paying me to do k8s work (yet), so my regular work will have to take precedence, but I can definitely try to attend, depending on the time.

@andrewstuart
Copy link
Contributor

Also, you may want to comment on #23896 and/or kubernetes/enhancements#35 if there's going to be a generalized discussion as they will almost certainly have either made progress already or have opinions and suggestions of their own to add.

@hobti01
Copy link

hobti01 commented Apr 7, 2017

@pmorie Is there any chance that there is more documentation beyond the examples you mentioned in #30716 (comment) ?

I desperately want to share the available syntax with colleagues, but cannot be certain how much is implemented (and in which version).

I thought we had doc on this, but I couldn't find any after a brief search (we'll fix that); design doc is: https://github.com/kubernetes/kubernetes/blob/master/docs/design/expansion.md
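For reference, the expansion design doc linked above describes the `$(VAR_NAME)` syntax that is implemented for container `command` and `args`, which references environment variables defined on the container. A minimal sketch (the pod and echo text are illustrative, not from the thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: expansion-demo
spec:
  containers:
  - name: app
    image: busybox
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP   # downward API: the pod's IP at runtime
    # $(POD_IP) is expanded by the kubelet before the command runs;
    # writing $$(POD_IP) instead escapes the expansion and passes the
    # literal text through to the container.
    command: ["sh", "-c", "echo listening on $(POD_IP)"]
```

Note that this expansion applies to `command`, `args`, and env values, not to the contents of files mounted from ConfigMaps, which is the gap this issue is about.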

@raoofm
Copy link

raoofm commented Nov 2, 2017

Has any decision been made on this?

@bgrant0607
Copy link
Member

No templates.

@eyalzek
Copy link

eyalzek commented Nov 7, 2017

Is there a recommended way to inject a configuration file with interpolated variables/secret into a container at the moment?
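One workaround commonly used at the time (an illustration, not an official answer from this thread): render the file in an init container with `envsubst` from a template mounted via ConfigMap. All names below are placeholders, and the init container image is assumed to ship gettext's `envsubst`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: templated-config-demo
spec:
  initContainers:
  - name: render-config
    image: some-image-with-envsubst   # placeholder: any image bundling gettext's envsubst
    env:
    - name: POD_IP
      valueFrom:
        fieldRef: {fieldPath: status.podIP}
    # Substitute ${POD_IP} (and any other exported vars) into the template,
    # writing the rendered file into a shared emptyDir volume.
    command: ["sh", "-c", "envsubst < /tmpl/app.conf.tmpl > /etc/app/app.conf"]
    volumeMounts:
    - name: template
      mountPath: /tmpl
    - name: rendered
      mountPath: /etc/app
  containers:
  - name: app
    image: example/app:1.0   # placeholder: the real workload, reads /etc/app/app.conf
    volumeMounts:
    - name: rendered
      mountPath: /etc/app
  volumes:
  - name: template
    configMap: {name: app-conf-template}   # holds app.conf.tmpl containing ${POD_IP}
  - name: rendered
    emptyDir: {}
```

This keeps the application image untouched (no custom entrypoint.sh), at the cost of an extra container and a shared volume per pod.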

@bgrant0607
Copy link
Member

@eyalzek Please ask on kubernetes-users and/or stackoverflow, or try to build a consensus within SIG Apps.

@jayunit100
Copy link
Member

So, as of now, I'm assuming we don't / can't encrypt data in ConfigMaps.

@nu007a
Copy link

nu007a commented Dec 6, 2021

/mark
