HPA: unrecognized type: int32 #126969
/sig autoscaling
/retitle HPA: unrecognized type: int32
What happened?
Applying the manifest fails with:

```
one or more objects failed to apply, reason: "" is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"autoscaling/v2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"agent-productivity-service-dev"},"name":"agent-productivity-service","namespace":"platform"},"spec":{"maxReplicas":"${{ .Values.hpa.tier3.maxReplicas }}","metrics":[{"resource":{"name":"cpu","target":{"averageUtilization":"${{ .Values.hpa.cpu }}","type":"Utilization"}},"type":"Resource"},{"resource":{"name":"memory","target":{"averageUtilization":"${{ .Values.hpa.memory }}","type":"Utilization"}},"type":"Resource"}],"minReplicas":"${{ .Values.hpa.tier3.minReplicas }}","scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"agent-productivity-service"}}}\n]] spec:map[maxReplicas:${{ .Values.hpa.tier3.maxReplicas }} metrics:[map[resource:map[name:cpu target:map[averageUtilization:${{ .Values.hpa.cpu }} type:Utilization]] type:Resource] map[resource:map[name:memory target:map[averageUtilization:${{ .Values.hpa.memory }} type:Utilization]] type:Resource]] minReplicas:${{ .Values.hpa.tier3.minReplicas }}]]": unrecognized type: int32
```
What did you expect to happen?
I am trying to enable HPA for one service. Since the HPA values are parsed from group vars, I get the error above. I tried adding quotes around the values, but then `helm template` fails instead with:

```
Error: failed to parse .agent-productivity-service/values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.hpa.enabled":interface {}(nil)}
```
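For context: in the `autoscaling/v2` API, `minReplicas`, `maxReplicas`, and `averageUtilization` are typed as int32, so the API server rejects the string values that the unrendered `${{ ... }}` placeholders leave behind. A minimal sketch of one way to coerce such values back to integers inside the chart template (assuming the values arrive as strings after rendering; this is not the project's prescribed fix):

```yaml
# Hypothetical fragment of an HPA template. Sprig's `int` function
# casts a string-typed value to an integer before it is emitted,
# so the rendered manifest contains `minReplicas: 1`, not `minReplicas: "1"`.
minReplicas: {{ .Values.application.autoscaling.minReplicas | int }}
maxReplicas: {{ .Values.application.autoscaling.maxReplicas | int }}
```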
How can we reproduce it (as minimally and precisely as possible)?
Deployment file is:

```yaml
application:
  name: agent-productivity-service
  type: java
  port: 10490
  replicaCount: 1
  namespace: platform
  args: ["{{ .Values.heap.xms }}", "{{ .Values.heap.xmx }}", "{{ .Values.heap.xss }}"]
  nodeSelector:
    karpenter.sh/provisioner-name: amd-m6
  healthcheck:
    path: /actuator/health
    interval:
    initialDelaySeconds: 50
  autoscaling:
    enabled: ${{ .Values.hpa.enabled }}
    minReplicas: ${{ .Values.hpa.tier3.minReplicas }}
    maxReplicas: ${{ .Values.hpa.tier3.maxReplicas }}
    metrics:
      cpu: ${{ .Values.hpa.cpu }}
      memory: ${{ .Values.hpa.memory }}
  ingress:
    enabled: true
    hosts:
      - host: "aps.{{ .Values.common.internal_domain }}"
        paths:
          - path: /
            pathType: Prefix
  resources:
    requests:
      cpu: "{{ .Values.resources.tier3.requests.cpu }}"
      memory: "{{ .Values.resources.tier3.requests.memory }}"
    limits:
      cpu: "{{ .Values.resources.tier3.limits.cpu }}"
      memory: "{{ .Values.resources.tier3.limits.memory }}"
  properties:
    enabled: true
```
Group vars file:

```yaml
hpa:
  enabled: true
  tier1:
    minReplicas: 1
    maxReplicas: 2
  tier2:
    minReplicas: 1
    maxReplicas: 2
  tier3:
    minReplicas: 1
    maxReplicas: 2
  tier4:
    minReplicas: 1
    maxReplicas: 2
  cpu: 50
  memory: 250
```
Helm template hpa.yaml file is:

```yaml
{{ if .Values.application.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "helm-chart.name" . }}
  namespace: {{ .Values.application.namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "helm-chart.name" . }}
  minReplicas: {{ .Values.application.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.application.autoscaling.maxReplicas }}
  metrics:
{{- include "helm-chart.autoscalingconfiguration" . | indent 3 }}
{{ end }}
```
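Note that Helm does not evaluate template expressions inside values files, so `${{ .Values.hpa.enabled }}` in the deployment values reaches the template as a literal string. One common workaround, sketched here under the assumption that the `${{ ... }}` placeholders are changed to plain `{{ ... }}` (the Go template syntax that `tpl` actually renders) and quoted in values.yaml, is to render them with `tpl` in the template and cast to the expected type:

```yaml
# Hypothetical variant of the hpa.yaml fields above: `tpl` renders the
# template expression stored in the value against the chart's context,
# and `int` casts the rendered string to an integer.
minReplicas: {{ tpl (toString .Values.application.autoscaling.minReplicas) . | int }}
maxReplicas: {{ tpl (toString .Values.application.autoscaling.maxReplicas) . | int }}
```

Alternatively, since the group vars already hold plain integers, referencing `.Values.hpa.tier3.minReplicas` directly in the template would avoid the string round-trip entirely.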
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)