Deployment does not update when adding environment variables #26944
Comments
This is the same problem as #25585, and the fix landed in #26418. @erez-rabih PTAL.
@bgrant0607 I am wondering whether we should create a cherry-pick for 1.2; I have seen users run into this bug several times.
@adohe Yes, please create a cherry-pick PR.
Hi! I'm facing the same issue now: adding a new env variable does not touch running pods; the new env variable is not published down to the pods. Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
@danielloczi I think this has already been fixed; could you please show how to reproduce this? I would take a look.
cc @apelisse |
Hi! I have a set of environment variables configured for a deployment. The deployment is applied using kubectl apply -f deployment.yaml. Example (original yaml):
Edited yaml:
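The actual manifests are not shown here, so as a minimal sketch of the kind of change being described (container name, image, and variable names are hypothetical), the env section of the container spec before and after the edit might look like this:

# original deployment.yaml (env section of the container spec)
containers:
- name: app
  image: registry.example.com/app:1.0
  env:
  - name: EXISTING_VAR
    value: "foo"

# edited deployment.yaml: one new variable added
containers:
- name: app
  image: registry.example.com/app:1.0
  env:
  - name: EXISTING_VAR
    value: "foo"
  - name: NEW_VAR
    value: "bar"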
Update: I tried to reproduce it a couple of minutes ago, but I was unable to. I am trying to find the case where it does not work.
Got this now in 1.11 on a DaemonSet.
Same for me.
Also having the issue with DaemonSet.
Why is this issue closed? On the latest version of Kubernetes I am facing the same issue: updating the deployment.yaml by adding a new env var and then running kubectl apply does not update the deployment/pod with the new env var.
@iahmad-khan Would you mind coming up with a minimal example that I would be able to build into a test or something?
I'd need to see:
Any fixes? It seems like I'm running the commands in the wrong order. I updated
Can you show the steps here with a minimal example: the commands that you run and what the yaml looks like after each step?
Hitting this also sometimes; it might be a control plane and/or persistence problem. Once this occurs, it persists for me even if I delete the deployment and service and recreate both in the following order:
OK, let's try to reproduce this:
$ kubectl create deployment --image nginx:latest nginx
deployment.apps/nginx created
$ kubectl set env deployment nginx --containers=nginx MY_ENV=1234
deployment.apps/nginx env updated
$ kubectl set env deployment nginx --containers=nginx MY_ENV_2=5678
deployment.apps/nginx env updated
$ kubectl get -o yaml deployment nginx | grep -A 5 env
- env:
  - name: MY_ENV
    value: "1234"
  - name: MY_ENV_2
    value: "5678"
  image: nginx:latest
$ kubectl get pods nginx-7db6cbd99f-pwkxq -o yaml | grep -A 5 env
- env:
  - name: MY_ENV
    value: "1234"
  - name: MY_ENV_2
    value: "5678"
  image: nginx:latest

I still don't see what I should be looking at.
Hi,
We're using deployments on Kubernetes version 1.2.1.
In order to update deployments, we update the corresponding deployment.yml file and then run
kubectl apply -f deployment.yml
Everything works well most of the time, except for the case in which we add an environment variable to one of the containers in the deployment.yml.
In that case we see that a new replica set is not created and the old pods keep on running.
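For reference, here is a minimal sketch of the kind of deployment.yml being described; the names, image, and variable values are hypothetical, and apps/v1 is used here although 1.2 would have used extensions/v1beta1. Adding the NEW_VAR entry under spec.template.spec.containers[].env changes the pod template, which is the edit that is expected to create a new replica set on apply:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0
        env:
        - name: EXISTING_VAR
          value: "foo"
        - name: NEW_VAR     # newly added; this pod-template change should roll out a new replica set
          value: "bar"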