Add new option "env-from" for kubectl run command. #48684
Conversation
Hi @xingzhou. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/unassign
/assign @shiywang
/assign @mengqiy
/assign @alexandercampbell
@alexandercampbell FYI, this should probably wait until your refactor is finished.
pkg/kubectl/run.go (outdated)

		},
	},
}
envFroms = append(envFroms, envFrom)
This line and line 952 are the same; could you move it out of the code block and merge the two into one?
pkg/kubectl/run.go (outdated)

if len(refType) == 0 || len(name) == 0 {
	return nil, fmt.Errorf("invalid envFrom: %v", envFrom)
}
if refType == "ConfigMapRef" {
I would prefer `switch` instead of `if / else if / else`.
OK, let me update the code.
/ok-to-test
Thanks!
pkg/kubectl/cmd/run.go (outdated)

@@ -117,6 +123,7 @@ func addRunFlags(cmd *cobra.Command) {
	cmd.Flags().Bool("rm", false, "If true, delete resources created in this command for attached containers.")
	cmd.Flags().String("overrides", "", i18n.T("An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field."))
	cmd.Flags().StringSlice("env", []string{}, "Environment variables to set in the container")
	cmd.Flags().StringSlice("envFrom", []string{}, "Environment variables to set from ConfigMap or Secret in the container")
This flag should be named "env-from" to fit with our standard.
I can find the specific documentation requiring this if you want.
Yep, let me correct it.
pkg/kubectl/cmd/run_test.go (outdated)

envFromStrings := cmdutil.GetFlagStringSlice(cmd, "envFrom")
if len(envFromStrings) != 2 || !reflect.DeepEqual(envFromStrings, test.expected) {
	t.Errorf("expected: %s, saw: %s", test.expected, envFromStrings)
}
This test doesn't test any of our code.
This is just a test that reads the flag from cmd. I saw there is a similar one for env, so I added this one; I can remove it.
pkg/kubectl/run.go (outdated)

	}
	delete(genericParams, "envFrom")
} else {
	return nil, fmt.Errorf("expected []string, found: %v", envFromStrings)
Why not `%T`?
Will update the code then.
@alexandercampbell and @shiywang, updated the patch according to your comments, PTAL, thx
	continue
}
if !reflect.DeepEqual(envFroms, test.expected) {
	t.Errorf("\nexpected:\n%#v\nsaw:\n%#v (%s)", test.expected, envFroms, test.test)
Nice, I like this test style.
/lgtm
@nurus Thanks for the explanation. Are you using …? We definitely want to avoid supporting everything in the API as a flag, but we want to support the most common and important use cases.
It's not possible to use …
No problem @pwittrock. We use … The applications tend to be monolithic and require the entire codebase to be baked into the image. The codebase contains methods and scripts related to the application, usually for interacting with persistent storage (database, Elasticsearch, etc.). The code itself relies on environment variables that it uses as parameters to connect to the persistent storage. For example, someone may want to run a one-off SQL query wrapped up in a script just to glean some information from the database. We could simply exec into a pod that's part of a deployment and already has all the environment variables, after orphaning it, but we don't want to endanger any work that pod is currently doing. Part of the problem is the nature of the work itself; it's much safer just to spin up a pod to do the one-off task. I definitely see your point about not wanting to clutter up the …
@nurus Thanks for sticking with me. I like to really try to understand our users' use cases as much as possible to make sure we are taking them into account in a holistic manner. Is it correct that the main application is managed through config (…)? How big of a deal is the "create" + "attach" in one command vs a "create" then "attach or exec" workflow? By adding a flag to … I'd like to talk through this more. The deadline for 1.8 code freeze is just a couple of weeks away. How important is it to you that something lands in 1.8 vs 1.9?
Yes, the application and its workers are deployed this way.
Definitely, if we can spin up a pod from the same deployment and strip it of the labels which are used as selectors, this will work. If we're debugging a web worker, we may not want it to be available to the service it would normally belong to.
Not a huge deal. We really like using …
It's not critical that this lands in 1.8. Thanks for your help on this @pwittrock.
Thanks for the insight. I will continue to work with you on this, but won't plan on it landing in 1.8 given the approaching code freeze. One thing we have been talking about is a utility to easily apply basic transformations to a config file and emit the transformed result, e.g. something that can rename and relabel the objects in your config files. We are prototyping this now, and it could be a helpful tool: run the transform to change the labels and name on your deployment config, pipe to apply, profit. This approach would allow you to capture things that might not be in the env piece of your config, e.g. initializers or beta fields driven off of annotations, the command and args, resource limits, etc.
@pwittrock that sounds useful and a more complete solution to replicating the environment of a deployment. I'll stay tuned.
Would these transforms work on …?
Is there a decision on this change? We also ran into this not working, and have a similar use case. We want to run one-off Jobs (like Django's "python manage.py migrate" or "python manage.py shell") with the same environment variables as a running Deployment; since this environment is shared between multiple Deployments (like the web server and background workers), it uses envFrom in the deployment, and we'd like to use just that in the Job created by kubectl run. I don't think "run" in our use case is really just "create/apply" then "attach"; it's: …
So you'd need more than just "set" to replicate the functionality; the "wait conditions" request (#1899) would also be needed, plus some shell scripting around understanding status. I made a half-hearted stab at implementing that before doing the workaround mentioned above (some scripting to transform config maps and secrets into equivalent --env=FOO=bar invocations). This obviously has several drawbacks, like a particularly lengthy command line and embedding secrets into a Job spec directly.
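For reference, a minimal sketch of that workaround, assuming jq is installed; the ConfigMap name ("app-config"), image, and job command are hypothetical, and values containing spaces or quotes would need extra escaping:

```sh
# Flatten a ConfigMap's data into repeated --env flags for kubectl run.
ENV_FLAGS=$(kubectl get configmap app-config -o json \
  | jq -r '.data | to_entries | map("--env=\(.key)=\(.value)") | join(" ")')

# Word splitting on $ENV_FLAGS is intentional here, so it must stay unquoted.
kubectl run migrate --image=myapp:latest --restart=Never --rm --attach \
  $ENV_FLAGS -- python manage.py migrate
```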
I landed here with the same use case as @toddgardner -- running migrations. I'll also probably fall back to scripting to set the environment variables, but it would sure be handy to have envFrom support here.
Same use case/feature request here. We want to run a separate pod with a replicated environment and interactive shell/console access that is removed on finish.
I've gotten envFrom to work with kubectl run by passing the container spec through --overrides:

{
  "spec": {
    "containers": [
      {
        "name": "podname",
        "image": "image",
        "args": [
          "command"
        ],
        "envFrom": [
          {
            "configMapRef": {
              "name": "configmap"
            }
          },
          {
            "secretRef": {
              "name": "secrets"
            }
          }
        ]
      }
    ]
  }
}

The final kubectl command looks like this:

kubectl run -i --rm --tty podname --image image \
  --namespace=namespace --restart=Never \
  --overrides='{"spec": {"containers": [{"image": "image", "args": ["command"], "name": "podname", "envFrom": [{"configMapRef": {"name": "configmap"}}, {"secretRef": {"name": "secrets"}}]}]}}'

There's some redundancy needed to pass validation. This should meet the use case that brought most of us here. I ended up wrapping it up in a script to make it easier to form the command.
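A hypothetical version of such a wrapper script, just to illustrate the shape; the argument handling and names are assumptions, not taken from this thread:

```sh
#!/usr/bin/env bash
# Usage: run-with-env.sh <pod> <image> <configmap> <secret> <command>
POD="$1"; IMAGE="$2"; CONFIGMAP="$3"; SECRET="$4"; CMD="$5"

# Build the container-spec override; repeating the name and image here is
# the redundancy kubectl needs to pass validation, as noted above.
OVERRIDES=$(cat <<EOF
{"spec": {"containers": [{"name": "$POD", "image": "$IMAGE",
  "args": ["$CMD"],
  "envFrom": [{"configMapRef": {"name": "$CONFIGMAP"}},
              {"secretRef": {"name": "$SECRET"}}]}]}}
EOF
)

kubectl run -i --rm --tty "$POD" --image "$IMAGE" --restart=Never \
  --overrides="$OVERRIDES"
```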
@nurus Thank you, that is definitely helpful. I managed to get the environment there; the only problem I have is that I can't open a console. There is no output, kubectl finishes, and the pod is not listed anywhere. Given the …
@kvitajakub it seems anything that isn't explicitly specified in the container spec reverts to the default. For an interactive session, add these fields to the container spec:

"stdin": true,
"stdinOnce": true,
"tty": true
@nurus Thank you, it works!
This PR hasn't been active in 30 days. It will be closed in 59 days (Feb 6, 2018). cc @alexandercampbell @eparis @mengqiy @pwittrock @shiywang @xingzhou. You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This issue seems to have gone stale, but we have the same need where I work. However, rather than adding another option to the run command for each pod option now and in the future, we would be completely satisfied if there were an equivalent of the apply command that waited on the container in the pod just like the run command does. In other words, given a full pod manifest (…).
You can emulate this now by adding a dummy name, a dummy …
This seems a little backwards logically, though, because we want the manifest to be the base definition, where the additional command-line options provide overrides. It's also a mess to have to pass the entire manifest on the command line. Going further, as with the create and apply commands, it would be great if the manifest could be supplied in either JSON or YAML format.
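A sketch of the emulation described above, assuming the manifest has already been converted to JSON; pod.json is a hypothetical file:

```sh
# The name and image flags are dummies; the override supplies the real spec.
kubectl run dummy --image=dummy --restart=Never --rm --attach \
  --overrides="$(cat pod.json)"
```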
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Why is this PR closed when this addition is super useful?
This is a pretty useful addition; odd that it has been neglected and closed. We have env and secret objects that are generated from Terraform, things like RDS connections and such. We have requirements that prevent data from leaving the environment, so in order to interact with and view the data we would like to launch a container inside of the env via kubectl run. This seems like a simple and useful addition, so I am wondering what the resistance is to adding it? @xingzhou, any desire to reopen this and make another attempt at getting it added? Based on the feedback here from the community, it looks like it would be a welcome addition.
Propose to add a new option "env-from" for the kubectl run command so that users can configure ConfigMap or Secret env variables from the run command.
Fixed #48361
Release note: