
Intermittent error when doing kubectl apply -f for a HorizontalPodAutoscaler: 'unable to find api field in struct HorizontalPodAutoscalerSpec for the json field "scaleTargetRef"' #34413

Closed
bes opened this issue Oct 9, 2016 · 32 comments · Fixed by #40260
Labels: area/kubectl, kind/bug, sig/api-machinery, sig/cli


bes commented Oct 9, 2016

What keywords did you search in Kubernetes issues before filing this one?

  • unable to find api field in struct HorizontalPodAutoscalerSpec for the json field "scaleTargetRef"
  • HorizontalPodAutoscaler
  • scaleTargetRef

Also searched Google/Stack Overflow.


Is this a BUG REPORT or FEATURE REQUEST?
Bug report

Kubernetes version (use kubectl version):
v1.4.0

Environment:

  • GKE with kubernetes v1.4.0

What happened:
Run kubectl apply -f my-service.yaml.

Intermittently, since upgrading to 1.4.0, I have been getting:

error: error when applying patch:

to:
&{0xc8203546c0 0xc8202e3340 default my-service-horizpodautoscaler /my-service/build/k8s/my-service.yaml &HorizontalPodAutoscaler{ObjectMeta:k8s_io_kubernetes_pkg_api_v1.ObjectMeta{Name:my-service-horizpodautoscaler,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: ,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:HorizontalPodAutoscalerSpec{ScaleTargetRef:CrossVersionObjectReference{Kind:Deployment,Name:my-service-frontend,APIVersion:extensions/v1beta1,},MinReplicas:*1,MaxReplicas:3,TargetCPUUtilizationPercentage:nil,},Status:HorizontalPodAutoscalerStatus{ObservedGeneration:nil,LastScaleTime:<nil>,CurrentReplicas:0,DesiredReplicas:0,CurrentCPUUtilizationPercentage:nil,},} &TypeMeta{Kind:,APIVersion:,} 1537793 false}
for: "/my-service/build/k8s/my-service.yaml": error when creating patch with:
original:
{"kind":"HorizontalPodAutoscaler","apiVersion":"autoscaling/v1","metadata":{"name":"my-service-horizpodautoscaler","creationTimestamp":null},"spec":{"scaleTargetRef":{"kind":"Deployment","name":"my-service-frontend","apiVersion":"extensions/v1beta1"},"minReplicas":1,"maxReplicas":3},"status":{"currentReplicas":0,"desiredReplicas":0}}
modified:
{"kind":"HorizontalPodAutoscaler","apiVersion":"autoscaling/v1","metadata":{"name":"my-service-horizpodautoscaler","creationTimestamp":null,"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"HorizontalPodAutoscaler\",\"apiVersion\":\"autoscaling/v1\",\"metadata\":{\"name\":\"my-service-horizpodautoscaler\",\"creationTimestamp\":null},\"spec\":{\"scaleTargetRef\":{\"kind\":\"Deployment\",\"name\":\"my-service-frontend\",\"apiVersion\":\"extensions/v1beta1\"},\"minReplicas\":1,\"maxReplicas\":3},\"status\":{\"currentReplicas\":0,\"desiredReplicas\":0}}"}},"spec":{"scaleTargetRef":{"kind":"Deployment","name":"my-service-frontend","apiVersion":"extensions/v1beta1"},"minReplicas":1,"maxReplicas":3},"status":{"currentReplicas":0,"desiredReplicas":0}}
current:
{"kind":"HorizontalPodAutoscaler","apiVersion":"extensions/v1beta1","metadata":{"name":"my-service-horizpodautoscaler","namespace":"default","selfLink":"/apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/my-service-horizpodautoscaler","uid":"REDACTED-REDA-REDA-REDA-REDACTED","resourceVersion":"1537793","creationTimestamp":"2016-09-28T15:51:27Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"HorizontalPodAutoscaler\",\"apiVersion\":\"autoscaling/v1\",\"metadata\":{\"name\":\"my-service-horizpodautoscaler\",\"creationTimestamp\":null},\"spec\":{\"scaleTargetRef\":{\"kind\":\"Deployment\",\"name\":\"my-service-frontend\",\"apiVersion\":\"extensions/v1beta1\"},\"minReplicas\":1,\"maxReplicas\":3},\"status\":{\"currentReplicas\":0,\"desiredReplicas\":0}}"}},"spec":{"scaleRef":{"kind":"Deployment","name":"my-service-frontend","apiVersion":"extensions/v1beta1","subresource":"scale"},"minReplicas":1,"maxReplicas":3,"cpuUtilization":{"targetPercentage":80}},"status":{"lastScaleTime":"2016-10-08T23:19:12Z","currentReplicas":1,"desiredReplicas":1,"currentCPUUtilizationPercentage":0}}

for: "/my-service/build/k8s/my-service.yaml": unable to find api field in struct HorizontalPodAutoscalerSpec for the json field "scaleTargetRef"

As you can see, the "current" object's metadata specifies "apiVersion" as extensions/v1beta1. That might be correct, but it seems suspicious to me...

I have the feeling it works every second time I run apply, but it's hard to say from my small sample size.
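For readers unfamiliar with the mechanics: kubectl apply computes a three-way merge between "original" (the last-applied-configuration annotation), "modified" (the new file), and "current" (the live object read back from the server). In the dump above, the live object came back serialized as extensions/v1beta1, whose spec names the target field scaleRef, while the patch is computed against the autoscaling/v1 struct, which only knows scaleTargetRef; hence the "unable to find api field" error. A hypothetical Go reduction of the two shapes (field sets trimmed to what matters here):

```go
package main

// CrossVersionObjectReference is the autoscaling/v1 target reference.
type CrossVersionObjectReference struct {
	Kind       string `json:"kind"`
	Name       string `json:"name"`
	APIVersion string `json:"apiVersion,omitempty"`
}

// SpecV1 mirrors autoscaling/v1: the JSON field is "scaleTargetRef".
type SpecV1 struct {
	ScaleTargetRef CrossVersionObjectReference `json:"scaleTargetRef"`
}

// SubresourceReference is the extensions/v1beta1 target reference.
type SubresourceReference struct {
	Kind        string `json:"kind"`
	Name        string `json:"name"`
	APIVersion  string `json:"apiVersion,omitempty"`
	Subresource string `json:"subresource,omitempty"`
}

// SpecV1beta1 mirrors extensions/v1beta1: the JSON field is "scaleRef",
// so a patch keyed on "scaleTargetRef" has nowhere to land.
type SpecV1beta1 struct {
	ScaleRef SubresourceReference `json:"scaleRef"`
}

func main() {}
```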

What you expected to happen:
The apply to work every time.

How to reproduce it:

I used this YAML file (cleaned):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
    tier: frontend
spec:
  type: NodePort
  ports:
  - name: http8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: my-service
    tier: frontend

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-service-frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-service
        tier: frontend
    spec:
      containers:
      - name: my-service
        image: eu.gcr.io/my-project/my-service:v0.0.0
        env:
        - name: MY_ENVIRONMENT
          value: dev
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 800Mi

---
apiVersion: autoscaling/v1 #I've tried with extensions/v1beta1 and "scaleRef" below, but that doesn't work properly either (similar error message)
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-horizpodautoscaler
spec:
  scaleTargetRef:
    kind: Deployment
    name: my-service-frontend
    apiVersion: extensions/v1beta1 #I've tried with and without this, makes no difference
  minReplicas: 1
  maxReplicas: 3

Anything else we need to know:

A GET on the cluster using https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters/get reveals:

[...]
 "addonsConfig": {
  "httpLoadBalancing": {
  },
  "horizontalPodAutoscaling": {
  }
 },
[...]
   "initialNodeCount": 1,
   "autoscaling": {
    "enabled": true,
    "minNodeCount": 1,
    "maxNodeCount": 3
   },
[...]

 "initialClusterVersion": "1.3.6",
 "currentMasterVersion": "1.4.0",
 "currentNodeVersion": "1.4.0",

To work around this, I have temporarily moved the HPA section out of the file into a separate file, but I prefer having the complete configuration in one place.

Am I doing something wrong?

Thanks!


bes commented Oct 9, 2016

Tried deleting the HPA using kubectl delete hpa my-service-horizpodautoscaler and then reapplying. It worked the first time (no errors) and the HPA was created. Waited a few minutes, then applied again, and got the same error.

@pwittrock

Does this work consistently if the HPA is in a different file?


bes commented Oct 14, 2016

@pwittrock No, it's the same thing. Every other time it seems to work.

@pwittrock pwittrock added the kind/bug Categorizes issue or PR as related to a bug. label Oct 16, 2016
@pwittrock

@ymqytw Would you look into this one as well?


mengqiy commented Oct 18, 2016

@pwittrock Yeah, I will look into this issue.

@ajohnstone

I'm getting this issue too on 1.4.3:

Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.3", GitCommit:"4957b090e9a4f6a68b4a40375408fdc74a212260", GitTreeState:"clean", BuildDate:"2016-10-16T06:36:33Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.3", GitCommit:"4957b090e9a4f6a68b4a40375408fdc74a212260", GitTreeState:"clean", BuildDate:"2016-10-16T06:20:04Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
11:03:39 INFO:k8s.kubernetes:kubectl --context=test-xxx-xxx.xx-xxx.photobox.com --namespace=development apply -f deploy/kubernetes;
11:03:39 error: error when applying patch:
11:03:39 
11:03:39 to:
11:03:39 &{0xc8200b4e40 0xc82034c460 development smash-website deploy/kubernetes/hpa.yaml &HorizontalPodAutoscaler{ObjectMeta:k8s_io_kubernetes_pkg_api_v1.ObjectMeta{Name:smash-website,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: ,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:HorizontalPodAutoscalerSpec{ScaleTargetRef:CrossVersionObjectReference{Kind:Deployment,Name:smash-website,APIVersion:extensions/v1beta1,},MinReplicas:*1,MaxReplicas:10,TargetCPUUtilizationPercentage:*45,},Status:HorizontalPodAutoscalerStatus{ObservedGeneration:nil,LastScaleTime:<nil>,CurrentReplicas:0,DesiredReplicas:0,CurrentCPUUtilizationPercentage:nil,},} &TypeMeta{Kind:,APIVersion:,} 629103 false}
11:03:39 for: "deploy/kubernetes/hpa.yaml": error when creating patch with:
11:03:39 original:
11:03:39 {"kind":"HorizontalPodAutoscaler","apiVersion":"autoscaling/v1","metadata":{"name":"smash-website","creationTimestamp":null},"spec":{"scaleTargetRef":{"kind":"Deployment","name":"smash-website","apiVersion":"extensions/v1beta1"},"minReplicas":1,"maxReplicas":10,"targetCPUUtilizationPercentage":45},"status":{"currentReplicas":0,"desiredReplicas":0}}
11:03:39 modified:
11:03:39 {"kind":"HorizontalPodAutoscaler","apiVersion":"autoscaling/v1","metadata":{"name":"smash-website","creationTimestamp":null,"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"HorizontalPodAutoscaler\",\"apiVersion\":\"autoscaling/v1\",\"metadata\":{\"name\":\"smash-website\",\"creationTimestamp\":null},\"spec\":{\"scaleTargetRef\":{\"kind\":\"Deployment\",\"name\":\"smash-website\",\"apiVersion\":\"extensions/v1beta1\"},\"minReplicas\":1,\"maxReplicas\":10,\"targetCPUUtilizationPercentage\":45},\"status\":{\"currentReplicas\":0,\"desiredReplicas\":0}}"}},"spec":{"scaleTargetRef":{"kind":"Deployment","name":"smash-website","apiVersion":"extensions/v1beta1"},"minReplicas":1,"maxReplicas":10,"targetCPUUtilizationPercentage":45},"status":{"currentReplicas":0,"desiredReplicas":0}}
11:03:39 current:
11:03:39 {"kind":"HorizontalPodAutoscaler","apiVersion":"extensions/v1beta1","metadata":{"name":"smash-website","namespace":"development","selfLink":"/apis/autoscaling/v1/namespaces/development/horizontalpodautoscalers/smash-website","uid":"a9e65e5a-9539-11e6-bc5e-0645d95d0de3","resourceVersion":"629103","creationTimestamp":"2016-10-18T13:49:17Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"kind\":\"HorizontalPodAutoscaler\",\"apiVersion\":\"autoscaling/v1\",\"metadata\":{\"name\":\"smash-website\",\"creationTimestamp\":null},\"spec\":{\"scaleTargetRef\":{\"kind\":\"Deployment\",\"name\":\"smash-website\",\"apiVersion\":\"extensions/v1beta1\"},\"minReplicas\":1,\"maxReplicas\":10,\"targetCPUUtilizationPercentage\":45},\"status\":{\"currentReplicas\":0,\"desiredReplicas\":0}}","kubernetes.io/change-cause":"kubectl --context=test-xxx-xxx.xx-xxx.photobox.com --namespace=development --record=true apply -f deploy/kubernetes"}},"spec":{"scaleRef":{"kind":"Deployment","name":"smash-website","apiVersion":"extensions/v1beta1","subresource":"scale"},"minReplicas":1,"maxReplicas":10,"cpuUtilization":{"targetPercentage":45}},"status":{"currentReplicas":2,"desiredReplicas":0}}
11:03:39 
11:03:39 for: "deploy/kubernetes/hpa.yaml": unable to find api field in struct HorizontalPodAutoscalerSpec for the json field "scaleTargetRef"
11:03:39 ERROR:k8s.kubernetes:stdout output:
11:03:39 deployment "smash-website" configured
11:03:39 service "smash-website-int" configured
11:03:39 service "smash-website" configured

The YAML:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: smash-website
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: smash-website
  targetCPUUtilizationPercentage: 45


mengqiy commented Oct 19, 2016

It happens on 1.4.0 and HEAD.
I observed that the APIVersion is sometimes wrong when using the encoder to encode an internal object:
sometimes it has "apiVersion":"autoscaling/v1", which is correct, but sometimes it has "apiVersion":"extensions/v1beta1", which is wrong.

I think there is something wrong with the encoder generated by the codec.
cc @smarterclayton

@smarterclayton

It's possible that we are not forcing the stored version into a stable apiVersion/group, so when the HPA controller runs it mutates the object via one group name (extensions?) and when the user runs apply it updates via another (autoscaling), and the user's action fails. There is code in the storage codec (storage_factory.go) that should force objects of either type into a single version when stored in etcd. That's probably the first place to look. If that's not stable, then multiple clients can cause this problem.
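A minimal Go sketch of the invariant being described (hypothetical, not the actual storage_factory.go code): pick one canonical GroupVersion per resource and always encode to it before writing to etcd, so concurrent clients speaking different groups cannot flip the stored version.

```go
package storage

// canonicalStorageVersion maps each resource to the single GroupVersion
// used for persistence. With a stable table like this, it doesn't matter
// whether the HPA controller writes via "extensions" or kubectl writes
// via "autoscaling": etcd always ends up holding the same version.
var canonicalStorageVersion = map[string]string{
	"horizontalpodautoscalers": "autoscaling/v1",
}

// StorageVersionFor returns the one version a resource must be encoded
// to before being written to etcd.
func StorageVersionFor(resource string) (string, bool) {
	v, ok := canonicalStorageVersion[resource]
	return v, ok
}
```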

@bgrant0607

See also #23378, #22866


devth commented Oct 24, 2016

This busted our CD system for any repos that contain HPA manifests. Details:

  • Started happening 10/17 on our GKE cluster.
  • The cluster was upgraded to 1.4.0 on 10/4.
  • Masters were upgraded to 1.4.3 on 10/16.


mengqiy commented Oct 24, 2016

I will prioritize this issue.
cc: @caesarxuchao


mengqiy commented Oct 25, 2016

The root cause is bestMatch().
This function returns the first element of the slice when there is no best match.
In that case, its return value is not deterministic: it depends on the order of the input slice.
Ref: #34010 (comment)
@soltysh @smarterclayton Can you provide a fix?
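To illustrate the failure mode, here is a hypothetical Go reduction (not the actual bestMatch() source): when no candidate GroupVersion exactly matches, the function falls back to the first element of a slice whose order is not guaranteed, so alternating inputs produce alternating results.

```go
package main

import "fmt"

// bestMatch is a hypothetical reduction of the selection logic described
// above: if no candidate exactly matches the target, it falls back to the
// first element, so the result depends entirely on the caller's slice order.
func bestMatch(candidates []string, target string) string {
	for _, c := range candidates {
		if c == target {
			return c
		}
	}
	return candidates[0] // non-deterministic fallback when order varies
}

func main() {
	// The HPA is addressable as both autoscaling/v1 and extensions/v1beta1.
	// With no exact match for the requested version, the chosen version
	// flips with input order, matching the observed every-other-apply failures.
	fmt.Println(bestMatch([]string{"autoscaling/v1", "extensions/v1beta1"}, "autoscaling/v2"))
	fmt.Println(bestMatch([]string{"extensions/v1beta1", "autoscaling/v1"}, "autoscaling/v2"))
}
```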

@pwittrock

@caesarxuchao Is the codec library owned by the api-machinery sig?

@caesarxuchao

Yes. This is an api-machinery issue.


soltysh commented Oct 26, 2016

Lemme try to first reproduce this and I'll see what I can do.


soltysh commented Oct 27, 2016

It looks like the same problem as in #35149; it goes all the way down to the exact same method. Working on it now...

@pwittrock

Thx

@pwittrock

@soltysh any updates on this?

@pwittrock pwittrock added sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Dec 16, 2016
@matchstick matchstick added this to the v1.5 milestone Dec 16, 2016
@matchstick

@pwittrock Giving this P1 priority and targeting 1.5 for now. Should this be backported to 1.4?
Also, I see #35688 has a possible fix. @soltysh Do you have an ETA? Are you waiting on review?

@pwittrock

cc @caesarxuchao @lavalamp


caesarxuchao commented Dec 16, 2016

@smarterclayton @liggitt is this issue fixed by #38406?

@pwittrock

It is possible this is already fixed on master/HEAD. @ymqytw is trying to verify whether that is the case now, and will create a patch for 1.5 if it is.

k8s-github-robot pushed a commit that referenced this issue Dec 21, 2016

Automatic merge from submit-queue

make apply use the correct versioned obj

Cherrypick [part of the changes](https://github.com/kubernetes/kubernetes/pull/38406/files#diff-b3aa1e4377838f45d595f8ecca3c2619) from #38406.
Make `kubectl apply` in `v1.5` work for #34413.

``` release-note
Give apply the versioned struct generated from the type defined in the restmapping.
```

cc: @pwittrock @matchstick @liggitt
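The gist of the cherry-pick, as a hypothetical sketch in Go (names and import paths are illustrative, not the exact kubectl code): convert the object to the specific GroupVersion that the RESTMapping resolved, instead of letting the encoder choose among ambiguous group aliases (autoscaling vs extensions), which is what made the serialized version flip.

```go
package apply

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
)

// versionedObjectFor converts obj to the exact GroupVersion the
// RESTMapping resolved, so the patch is always computed against a
// deterministic versioned struct rather than whichever alias the
// encoder happened to pick.
func versionedObjectFor(obj runtime.Object, mapping *meta.RESTMapping, scheme *runtime.Scheme) (runtime.Object, error) {
	return scheme.ConvertToVersion(obj, mapping.GroupVersionKind.GroupVersion())
}
```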

munnerz commented Jan 4, 2017

If #38982 does in fact fix this issue, does anyone know if it'll make it into the 1.5.2 release? This issue is really frustrating in a CI/CD pipeline!


mengqiy commented Jan 4, 2017

@munnerz I believe #38982 will be in 1.5.2.
@pwittrock Can you confirm?


pwittrock commented Jan 4, 2017 via email

@yunzhu-li

I'm still hitting this on a 1.5.2 cluster, although much less often. Both master and node versions are 1.5.2.


liggitt commented Jan 25, 2017

Do you have any pre-1.5.2 kubectl clients performing apply?

@yunzhu-li

@liggitt Thanks for the reminder! We were running pre-1.5.2 kubectl clients and have now updated. Will continue monitoring.

k8s-github-robot pushed a commit that referenced this issue Jan 27, 2017
Automatic merge from submit-queue (batch tested with PRs 39223, 40260, 40082, 40389)

make kubectl generic commands work with unstructured objects

part of making apply, edit, label, annotate, and patch work with third party resources

fixes #35149
fixes #34413

prereq of:
#35496
#40096

related to:
#39906
#40119

kubectl currently decodes any resource it doesn't have compiled in to a ThirdPartyResourceData struct, which means it computes patches using that struct and would try to send a ThirdPartyResourceData object to the API server when running `apply`.

This PR removes the behavior that decodes unknown objects into ThirdPartyResourceData structs internally, and fixes up the following generic commands to work with unstructured objects (a sketch of unstructured decoding follows the checklists below):

- [x] apply
  - [x] decode into runtime.Unstructured objects
  - [x] successfully use `--record` with unregistered objects 
- [x] patch
  - [x] decode into runtime.Unstructured objects
  - [x] successfully use `--record` with unregistered objects 
- [x] describe
  - [x] decode into runtime.Unstructured objects
  - [x] implement generic describer
- [x] fix other generic kubectl commands to work with unstructured objects
  - [x] label
  - [x] annotate

follow-ups for pre-existing issues:
- [ ] `explain` doesn't work with unregistered resources
- [ ] remove special casing of federation group in clientset lookups, etc
- [ ] `patch`
  - [ ] doesn't honor output formats when persisting to server (`kubectl patch -f svc.json --type merge -p '{}' -o json` doesn't output json)
  - [ ] --local throws exception (`kubectl patch -f svc.json --type merge -p '{}' --local`)
- [ ] `apply`
  - [ ] fall back to generic JSON patch computation if no go struct is registered for the target GVK (e.g. #40096)
  - [ ] ensure subkey deletion works in CreateThreeWayJSONMergePatch
  - [ ] ensure type stomping works in CreateThreeWayJSONMergePatch
  - [ ] lots of tests for generic json patch computation
  - [ ] prevent generic apply patch computation among different versions
  - [ ] reconcile treatment of nulls with #35496
- [ ] `edit`
  - [ ] decode into runtime.Unstructured objects
  - [ ] fall back to generic JSON patch computation if no go struct is registered for the target GVK
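For context, decoding into unstructured objects, as this PR does for the generic commands, means working with the raw JSON as nested maps rather than compiled-in Go structs, so unknown kinds are never forced through a wrong type. A minimal sketch using today's apimachinery packages (these import paths post-date this PR, so treat them as illustrative):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/util/yaml"
)

func main() {
	manifest := []byte(`
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-horizpodautoscaler
spec:
  minReplicas: 1
  maxReplicas: 3
`)
	// Convert the YAML to JSON, then decode into a schema-less,
	// map-backed object instead of a typed struct.
	raw, err := yaml.ToJSON(manifest)
	if err != nil {
		panic(err)
	}
	var obj unstructured.Unstructured
	if err := obj.UnmarshalJSON(raw); err != nil {
		panic(err)
	}
	fmt.Println(obj.GetAPIVersion(), obj.GetKind(), obj.GetName())
}
```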
@nckturner

I have this issue with a 1.5.2 cluster and a 1.5.4 client. Anyone else seeing it?


adambkaplan commented May 26, 2017

I'm seeing the same/similar issue with a 1.5.2 cluster and a 1.6.3 client.

@kishorekumark

Seeing the same issue with v1.8.0:
macbook-pro-2:Federation kkokkiligadda$ kubectl --context=fed replace -f Adminconsole-Full.yaml --record --validate=false
service "adminconsole-app" replaced
deployment "adminconsole" replaced
error: unable to recognize "Adminconsole-Full.yaml": no matches for autoscaling/, Kind=HorizontalPodAutoscaler

How do I overcome this?

@kishorekumark


apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: adminconsole-app
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: adminconsole-app
    namespace: default
  minReplicas: 1
  maxReplicas: 10
  cpuUtilization:
    targetPercentage: 50
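(Note: cpuUtilization/targetPercentage is the extensions/v1beta1 spelling that appears in the "current" JSON dumps earlier in this thread; in autoscaling/v1 the equivalent field is targetCPUUtilizationPercentage, as in ajohnstone's manifest above. The "no matches for autoscaling/" error here, though, likely comes from the federation context not recognizing the HPA kind at all.)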
