Environment variables named specific things not getting passed on from deployment to pods #25585
It also won't accept "randomshit" as an environment variable. When I add that as an environment variable to an existing deployment and run "kubectl apply -f deployment.json", nothing happens. This is on version v1.2.4 / 3eed1e3.
Ehh? Can you post a full but minimized yaml that I can run to reproduce it?
We use JSON files, here is the full one. "SLACK_ENDPOINT" does not appear as an environment variable in the pods. Doing "kubectl apply -f .json" does not create new pods, and does not pass these variables to the pods. If I change the number of replicas and add a new pod, that pod will be created, but won't have that variable.
I am not able to reproduce on GKE at 1.2.4 when exec'ing into one of the Pods and printing the environment.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}

$ kubectl exec test-api-deployment-2124663613-029ak -i -t sh
/ # env
...
SLACK_ENDPOINT=https://slack.com
...

This is the yaml I used - note the image is different:

{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "test-api-deployment"
},
"spec": {
"replicas": 3,
"template": {
"metadata":{
"labels": {
"app": "test-api"
}
},
"spec": {
"containers": [{
"image": "gcr.io/google_containers/serve_hostname:v1.4",
"name": "test-api",
"env": [
{
"name": "SLACK_ENDPOINT",
"value": "https://slack.com"
},
{
"name": "API_DB_HOSTS",
"value": "10.0.0.83"
},
{
"name": "CASSANDRA_LOGGING_LEVEL",
"value": "ERROR"
}]
}]
}
}
}
}

What are the symptoms you are seeing? Can you provide yaml using a standard public image along with the specific symptoms you are seeing?
@pwittrock did you create the deployment first without the env vars, then add them via an apply?
Thanks for looking into it @pwittrock, I managed to reproduce it with the yaml you posted. Here's what I did:
$ kubectl apply -f og.yaml
deployment "test-api-deployment" created
$ kubectl exec test-api-deployment-2124663613-9tl0u env |grep SLACK
SLACK_ENDPOINT=https://slack.com

This creates the deployment with all env vars as expected.

$ kubectl apply -f mod.yaml
$ kubectl exec test-api-deployment-2413078913-547h0 env |grep SLACK

This re-creates a new set of pods and they don't have the SLACK_ENDPOINT var, as expected.

$ kubectl apply -f og.yaml ; watch kubectl get pods

This does not re-create the pods, so the env var is not passed along. kubectl does not return errors.
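For reference, mod.yaml itself is not shown in the thread; presumably it is og.yaml with the SLACK_ENDPOINT entry removed, so its env section would look roughly like this (a sketch of the assumed file, not a file posted above):

"env": [
{
"name": "API_DB_HOSTS",
"value": "10.0.0.83"
},
{
"name": "CASSANDRA_LOGGING_LEVEL",
"value": "ERROR"
}]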
@drashkov Thanks, I was able to reproduce the issue. It looks like the config with the ENV does get stored in the last-applied-configuration annotation:

kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"test-api-deployment","creationTimestamp":null},"spec":{"replicas":3,"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"test-api","app2":"test-api2"}},"spec":{"containers":[{"name":"test-api","image":"gcr.io/google_containers/serve_hostname:v1.4","env":[{"name":"SLACK_ENDPOINT","value":"https://slack.com"},{"name":"SLACK_ENDPOINT2","value":"https://slack.com"},{"name":"CASSANDRA_LOGGING_LEVEL","value":"ERROR"}],"resources":{}}]}},"strategy":{}},"status":{}}'

But does not get put into the pods:

containers:
- env:
- name: SLACK_ENDPOINT
value: https://slack.com
- name: CASSANDRA_LOGGING_LEVEL
value: ERROR
image: gcr.io/google_containers/serve_hostname:v1.4
imagePullPolicy: IfNotPresent
name: test-api
resources:
requests:
cpu: 100m
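An aside, not part of the original comment: one way to inspect that annotation yourself is to dump the live object and grep for it, e.g.

$ kubectl get deployment test-api-deployment -o yaml | grep last-applied-configuration

which prints the stored config as one long line.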
After a few tries I found that if you add the …
The generated three-way merge patch wasn't correct (the annotation part is good, but the spec part is not).
It only recognizes the last env as diff.
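To make that concrete (a hypothetical sketch, not the actual patch posted in this thread): containers and env entries both use strategic merge with "name" as the merge key, so a correct patch re-adding the dropped variable should carry an env entry for it in the pod template, roughly:

{"spec":{"template":{"spec":{"containers":[{"name":"test-api","env":[{"name":"SLACK_ENDPOINT","value":"https://slack.com"}]}]}}}}

As described above, the generated diff only picked up the last env entry, so an entry like this never made it into the PATCH sent to the server.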
Opened a PR #26418, ptal.
Thanks everyone.
Automatic merge from submit-queue: fix strategy patch diff list issue. Fixes #25585. @janetkuo @pwittrock ptal.
Deployment is defined in a JSON file with several environment variables. The deployment is deployed using "kubectl apply -f deployment.json". Pods get most of the environment variables defined, but not all.
Specifically, the variables defined in the deployment.json are named:
{ "name": "DB_HOSTS", "value": "10.0.0.1" }, { "name": "SLACK_ENDPOINT", "value": "ERRORNOT" }, { "name": "LACK_ENDPOINT", "value": "ERRORNOT" }, { "name": "SLACK", "value": "ERRORNOT" }, { "name": "CK_ENDPOINT", "value": "ERRORNOT" }, { "name": "ACK_ENDPOINT", "value": "ERRORNOT" }, { "name": "K_ENDPOINT", "value": "ERRORNOT" }, { "name": "CASSANDRA_LOGGING_LEVEL", "value": "ERROR" }, { "name": "ENABLE_GETIP", "value": "https://www.url.com" }
The environment variables that appear in a pod are:
ENABLE_GETIP
CASSANDRA_LOGGING_LEVEL
K_ENDPOINT
ACK_ENDPOINT
CK_ENDPOINT
DB_HOSTS
That is, ENV variables named "SLACK_ENDPOINT", "LACK_ENDPOINT" and "SLACK" never get passed on to the pods.
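An aside that is not suggested anywhere in this thread: since the problem is in the patch that "kubectl apply" computes, replacing the object wholesale avoids that code path, at the cost of losing apply's three-way merge behaviour:

$ kubectl replace -f deployment.json

This does a full update of the deployment spec, so the new pod template (and the resulting rolling update) should carry all of the env vars.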