
Environment variables named specific things not getting passed on from deployment to pods #25585

Closed
drashkov opened this issue May 13, 2016 · 13 comments · Fixed by #26418
Labels
area/app-lifecycle, area/kubectl, kind/bug, priority/important-soon, sig/api-machinery

@drashkov

drashkov commented May 13, 2016

The Deployment is defined in a JSON file with several environment variables and is deployed using "kubectl apply -f deployment.json". The pods get most of the defined environment variables, but not all.

Specifically, the variables defined in the deployment.json are named:
{ "name": "DB_HOSTS", "value": "10.0.0.1" }, { "name": "SLACK_ENDPOINT", "value": "ERRORNOT" }, { "name": "LACK_ENDPOINT", "value": "ERRORNOT" }, { "name": "SLACK", "value": "ERRORNOT" }, { "name": "CK_ENDPOINT", "value": "ERRORNOT" }, { "name": "ACK_ENDPOINT", "value": "ERRORNOT" }, { "name": "K_ENDPOINT", "value": "ERRORNOT" }, { "name": "CASSANDRA_LOGGING_LEVEL", "value": "ERROR" }, { "name": "ENABLE_GETIP", "value": "https://www.url.com" }

The environment variables that appear in a pod are:

```
ENABLE_GETIP
CASSANDRA_LOGGING_LEVEL
K_ENDPOINT
ACK_ENDPOINT
CK_ENDPOINT
DB_HOSTS
```

That is, ENV variables named "SLACK_ENDPOINT", "LACK_ENDPOINT" and "SLACK" never get passed on to the pods.
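
To check which variables a pod actually received, something like the following can be used (the pod name is a placeholder for one of the deployment's pods):

```
kubectl exec <pod-name> env | grep -E 'SLACK|LACK|_ENDPOINT'
```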

@j3ffml j3ffml added the sig/api-machinery and team/ux labels May 14, 2016
@drashkov
Author

It also won't accept "randomshit" as an environment variable. When I add that as an environment variable to an existing deployment and run "kubectl apply -f deployment.json", nothing happens.

This is on version v1.2.4 / 3eed1e3

@thockin
Member

thockin commented May 23, 2016

Ehh? Can you post a full but minimized yaml that I can run to reproduce it?


@drashkov
Author

drashkov commented May 24, 2016

We use JSON files; here is the full one. "SLACK_ENDPOINT" does not appear as an environment variable in the pods. Running "kubectl apply -f .json" does not create new pods and does not pass these variables to the pods. If I change the number of replicas and add a new pod, that pod will be created, but it won't have that variable (see the check sketched after the manifest below).

```json
{
    "apiVersion": "extensions/v1beta1",
    "kind": "Deployment",
    "metadata": {
        "name": "test-api-deployment"
    },
    "spec": {
        "replicas": 3,
        "template": {
            "metadata":{
                "labels": {
                    "app": "test-api"
                }
            },
            "spec": {
                "containers": [{
                    "image": "gcr.io/<repository>/test_api:1.3.5",
                    "name": "test-api",
                    "env": [
                        {
                            "name": "SLACK_ENDPOINT",
                            "value": "https://slack.com"
                        },
                        {
                            "name": "API_DB_HOSTS",
                            "value": "10.0.0.83"
                        },
                        {
                            "name": "CASSANDRA_LOGGING_LEVEL",
                            "value": "ERROR"
                        }]
                }]
            }
        }
    }
}
```
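
One way to confirm whether an apply actually triggered a rollout (a sketch, not part of the original report) is to describe the deployment and look for replica set scaling events:

```
kubectl describe deployment test-api-deployment
```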

@pwittrock
Member

I am not able to reproduce on GKE at 1.2.4 when exec'ing into one of the Pods and printing the environment.

```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
$ kubectl exec test-api-deployment-2124663613-029ak -i -t sh
/ # env
...
SLACK_ENDPOINT=https://slack.com
...
```

This is the yaml I used - note the image is different:

```json
{
    "apiVersion": "extensions/v1beta1",
    "kind": "Deployment",
    "metadata": {
        "name": "test-api-deployment"
    },
    "spec": {
        "replicas": 3,
        "template": {
            "metadata":{
                "labels": {
                    "app": "test-api"
                }
            },
            "spec": {
                "containers": [{
                    "image": "gcr.io/google_containers/serve_hostname:v1.4",
                    "name": "test-api",
                    "env": [
                        {
                            "name": "SLACK_ENDPOINT",
                            "value": "https://slack.com"
                        },
                        {
                            "name": "API_DB_HOSTS",
                            "value": "10.0.0.83"
                        },
                        {
                            "name": "CASSANDRA_LOGGING_LEVEL",
                            "value": "ERROR"
                        }]
                }]
            }
        }
    }
}
```

What are the symptoms you are seeing? Can you provide yaml using a standard public image along with the specific symptoms you are seeing?

@gflarity

gflarity commented May 25, 2016

@pwittrock did you create the deployment first without the env vars, then add them via an apply?

@drashkov
Author

drashkov commented May 25, 2016

Thanks for looking into it @pwittrock, I managed to reproduce it with the yaml you posted.

Here's what I did:

  1. Saved the yaml you provided as 'og.yaml'
  2. Deleted the "SLACK_ENDPOINT" env var from that file and saved that as "mod.yaml" (no other changes)
```
$ kubectl apply -f og.yaml
deployment "test-api-deployment" created
$ kubectl exec test-api-deployment-2124663613-9tl0u env |grep SLACK
SLACK_ENDPOINT=https://slack.com
```

This creates the deployment with all env vars, as expected.

```
$ kubectl apply -f mod.yaml
$ kubectl exec test-api-deployment-2413078913-547h0 env |grep SLACK
```

This re-creates a new set of pods and they don't have the SLACK_ENDPOINT var, as expected.

```
$ kubectl apply -f og.yaml ; watch kubectl get pods
```

This does not re-create the pods, so the env var is not passed along. kubectl does not return errors.
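
An additional check that might help narrow this down (a sketch, not one of the commands I ran above): after re-applying og.yaml, confirm whether the env var made it back into the Deployment spec itself, independent of the pods:

```
kubectl get deployment test-api-deployment -o yaml | grep -B1 -A1 SLACK
```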

@pwittrock pwittrock added the priority/backlog label May 26, 2016
@pwittrock
Member

cc @janetkuo
cc @adohe

@pwittrock
Member

@drashkov Thanks, I was able to reproduce the issue.

It looks like the config with the env vars does get stored in the last-applied-configuration annotation:

```
kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"test-api-deployment","creationTimestamp":null},"spec":{"replicas":3,"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"test-api","app2":"test-api2"}},"spec":{"containers":[{"name":"test-api","image":"gcr.io/google_containers/serve_hostname:v1.4","env":[{"name":"SLACK_ENDPOINT","value":"https://slack.com"},{"name":"SLACK_ENDPOINT2","value":"https://slack.com"},{"name":"CASSANDRA_LOGGING_LEVEL","value":"ERROR"}],"resources":{}}]}},"strategy":{}},"status":{}}'
```

But does not get put into the pods:

```yaml
  containers:
  - env:
    - name: SLACK_ENDPOINT
      value: https://slack.com
    - name: CASSANDRA_LOGGING_LEVEL
      value: ERROR
    image: gcr.io/google_containers/serve_hostname:v1.4
    imagePullPolicy: IfNotPresent
    name: test-api
    resources:
      requests:
        cpu: 100m
```
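
For completeness, the last-applied-configuration annotation can also be dumped directly from the Deployment (a sketch, not the exact command used here; the dots in the annotation key have to be escaped for jsonpath):

```
kubectl get deployment test-api-deployment \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
```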

@pwittrock pwittrock added the kind/bug and 0 - Backlog labels May 26, 2016
@janetkuo
Member

janetkuo commented May 26, 2016

After a few tries, I found that if you add the SLACK_ENDPOINT env var as the last one, it does get added. Definitely a bug.

@janetkuo
Member

janetkuo commented May 26, 2016

The generated three-way merge patch wasn't correct (the annotation is good, but look at the spec part):
patch = {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"Deployment\",\"apiVersion\":\"extensions/v1beta1\",\"metadata\":{\"name\":\"test-api-deployment\",\"creationTimestamp\":null},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"app\":\"test-api\"}},\"spec\":{\"containers\":[{\"name\":\"test-api\",\"image\":\"gcr.io/google_containers/serve_hostname:v1.4\",\"env\":[{\"name\":\"API_DB_HOSTS\",\"value\":\"10.0.0.83\"},{\"name\":\"SLACK_ENDPOINT\",\"value\":\"https://slack.com\"},{\"name\":\"CASSANDRA_LOGGING_LEVEL\",\"value\":\"ERROR\"}],\"resources\":{}}]}},\"strategy\":{}},\"status\":{}}"},"creationTimestamp":null},

"spec":{"template":{"spec":{"containers":[{"env":[{"name":"CASSANDRA_LOGGING_LEVEL","value":"ERROR"}],"name":"test-api"}]}}}}

@janetkuo
Member

janetkuo commented May 26, 2016

It only recognizes the last env var as a diff.

@adohe-zz

Opened PR #26418, ptal.

@drashkov
Author

Thanks everyone.

@bgrant0607 bgrant0607 added this to the v1.3 milestone May 28, 2016
@bgrant0607 bgrant0607 added the priority/important-soon label and removed the priority/backlog label May 28, 2016
k8s-github-robot pushed a commit that referenced this issue May 30, 2016
Automatic merge from submit-queue

fix strategy patch diff list issue

fixes #25585 

@janetkuo @pwittrock ptal.