kubectl run from a manifest file #63214
Comments
@kubernetes/sig-cli-feature-requests
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

/remove-lifecycle stale
Is this feature not available now? I saw a flag named `filename` in the reference, but it doesn't seem to work. The version I am using is 1.12.1.
Any news on this?

Any updates or workarounds?

Any update on this? I'd love to be able to define everything in a manifest file instead of having a mile-long command using overrides.
+1 here!
Passing the spec inline via `--overrides` breaks down when I'm trying to make it part of a CI YAML config file. It would be very nice if I could indeed specify the Pod manifest in a file. The following works, however:
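A sketch of the kind of invocation that works here, assuming the manifest is stored as JSON in `pod.json` (the name, image, and flags below are illustrative, not the commenter's exact command):

```sh
# Read the whole manifest from a file at run time, so the CI config
# only carries a one-line command instead of an embedded spec.
kubectl run ci-task --image=busybox --restart=Never --rm --attach \
  --overrides="$(cat pod.json)"
```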
https://gist.github.com/lkoba/e73dcc13bd8d81907b1f4069a19979ec I use this to run pod manifests; it works with multiline commands, volumes, secrets, labels, etc. You may need to adapt it to your needs. At the moment it only works with Pod manifest files. It depends on `jq` and `y2j` to parse JSON and convert YAML files to JSON.
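A minimal sketch of the core idea in that gist, under the assumption that `y2j` reads YAML on stdin and emits JSON (see the gist itself for the full script):

```sh
# Convert the YAML Pod manifest to JSON and hand the whole thing to --overrides.
# --image is still required by kubectl run, but the override spec supersedes it.
NAME="$(y2j < pod.yaml | jq -r '.metadata.name')"
kubectl run "$NAME" --image=placeholder --restart=Never --rm --attach \
  --overrides="$(y2j < pod.yaml)"
```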
This is great, thanks for sharing! To reduce the number of dependencies, I saved my manifest file as JSON, so there is no need for `y2j`. Thank you @lkoba
Just had a convo with someone today about this: spinning up a container for shell access is a common pattern.
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
We need to run a complex pod definition (with configmaps, secrets, regcred, labels, etc.), and `--overrides` gets unwieldy for that. Please add an `-f` option. Basically `run` should work like `apply -f`, but for a one-off pod. Workaround: currently we use `kubectl apply` instead.
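A sketch of what such an apply-based workaround can look like; the exact sequence and the pod name `my-pod` are assumptions, not the commenter's actual script:

```sh
# Approximate `kubectl run -f`: create the pod from its full manifest,
# wait for it to start, stream its output, then clean up.
kubectl apply -f pod.yaml
kubectl wait --for=condition=Ready pod/my-pod --timeout=120s
kubectl logs -f pod/my-pod
kubectl delete -f pod.yaml
```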
I also confirm that using `kubectl apply` works as a workaround.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
As discussed in #48684, we also have a use case where we would like to spin up a pod, run a command, and throw away the pod. `kubectl run` seems to support this, except that it gets very complicated to bring up a pod matching the environment of an existing deployment, specifically attaching secrets. The commonly cited example is spinning up a pod to run database migrations during a deployment.
That PR has to do with adding an `--env-from` option to `kubectl run`, but on further discussion, the use of `--overrides` seems to have resolved some of the problems. However, that has its own issues, as @javanthropus mentioned:

> I would like to be able to specify the image on the command line and override the spec. Specifically, I'd like to have a static manifest to load from and then add a tag to the image from our script so I can make sure I'm using the right image.
Something like this (which currently ignores the command line):
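A sketch of roughly what that invocation looks like, assuming a static manifest in `pod.json` and a tag injected by the deploy script (the names, registry, and command are illustrative):

```sh
# Image is given on the command line; the rest comes from the static manifest.
# As described above, the command after -- is ignored once --overrides
# supplies the container spec.
kubectl run db-migrate \
  --image="registry.example.com/app:${TAG}" \
  --restart=Never \
  --overrides="$(cat pod.json)" \
  --command -- ./migrate
```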
Agreed, because all our other configuration files are in YAML. I ended up with this so we could store the manifest in YAML like we do everything else:
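A sketch of that shape, assuming the manifest lives in `pod.yaml` and a YAML-to-JSON converter is available (`yq` here is an assumption; the original may have used a different tool):

```sh
# Keep the manifest in YAML and convert it to JSON at invocation time,
# since --overrides only accepts JSON.
kubectl run db-migrate \
  --image="registry.example.com/app:${TAG}" \
  --restart=Never \
  --overrides="$(yq -o=json '.' pod.yaml)"
```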
But I ran into further issues: if `--restart=Never` is on the `run` command, it launches a Pod but ignores the command line and just runs the default command in the Docker image.

What you expected to happen:
It would be great to have a `-f` option like `apply` does:
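A sketch of the requested interface; note that `kubectl run` has no such `-f` flag today, so this is the proposal, not working syntax (the command after `--` is illustrative):

```sh
# Proposed: take the pod spec from a manifest file, the way `kubectl apply -f`
# does, but run it as a one-off attached pod.
kubectl run -f pod.yaml --rm --attach --command -- ./migrate
```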
Anything else we need to know?:
Environment:
Kubernetes version (use `kubectl version`):