
HPA: unrecognized type: int32 #126969

Open
Raghu021 opened this issue Aug 28, 2024 · 5 comments
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • sig/autoscaling: Categorizes an issue or PR as relevant to SIG Autoscaling.

Comments

@Raghu021

What happened?

one or more objects failed to apply, reason: "" is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"autoscaling/v2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"agent-productivity-service-dev"},"name":"agent-productivity-service","namespace":"platform"},"spec":{"maxReplicas":"${{ .Values.hpa.tier3.maxReplicas }}","metrics":[{"resource":{"name":"cpu","target":{"averageUtilization":"${{ .Values.hpa.cpu }}","type":"Utilization"}},"type":"Resource"},{"resource":{"name":"memory","target":{"averageUtilization":"${{ .Values.hpa.memory }}","type":"Utilization"}},"type":"Resource"}],"minReplicas":"${{ .Values.hpa.tier3.minReplicas }}","scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"agent-productivity-service"}}}\n]] spec:map[maxReplicas:${{ .Values.hpa.tier3.maxReplicas }} metrics:[map[resource:map[name:cpu target:map[averageUtilization:${{ .Values.hpa.cpu }} type:Utilization]] type:Resource] map[resource:map[name:memory target:map[averageUtilization:${{ .Values.hpa.memory }} type:Utilization]] type:Resource]] minReplicas:${{ .Values.hpa.tier3.minReplicas }}]]": unrecognized type: int32

What did you expect to happen?

I am trying HPA for one service. Since the HPA values are parsed from group vars, I am getting this error. I tried adding quotes, but then Helm shows a template error like "Error: failed to parse .agent-productivity-service/values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.hpa.enabled":interface {}(nil)}"
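
For context, my reading of the two error messages (an interpretation on my part, not confirmed above): the autoscaling/v2 fields minReplicas, maxReplicas and averageUtilization are integer (int32) fields, and the last-applied-configuration in the apply error still contains the literal "${{ ... }}" placeholders, so the API server receives strings where it expects integers. The "invalid map key" message, on the other hand, is what YAML itself reports when an unquoted {{ ... }} placeholder in a plain values.yaml is parsed as a flow mapping and used as a map key. A minimal illustration (hypothetical snippet, not taken from the files below):

autoscaling:
  enabled: {{ .Values.hpa.enabled }}                    # unquoted: YAML parses {{ ... }} as a nested map used as a key
                                                        # -> "invalid map key: map[interface {}]interface {} ..."
  minReplicas: "${{ .Values.hpa.tier3.minReplicas }}"   # quoted: valid YAML, but if the placeholder is never substituted
                                                        # the string reaches the int32 field -> "unrecognized type: int32"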

How can we reproduce it (as minimally and precisely as possible)?

Deployment file:

application:
  name: agent-productivity-service
  type: java
  port: 10490
  replicaCount: 1
  namespace: platform
  args: ["{{ .Values.heap.xms }}", "{{ .Values.heap.xmx }}", "{{ .Values.heap.xss }}"]
  nodeSelector:
    karpenter.sh/provisioner-name: amd-m6
  healthcheck:
    path: /actuator/health
    interval:
      initialDelaySeconds: 50
  autoscaling:
    enabled: ${{ .Values.hpa.enabled }}
    minReplicas: ${{ .Values.hpa.tier3.minReplicas }}
    maxReplicas: ${{ .Values.hpa.tier3.maxReplicas }}
    metrics:
      cpu: ${{ .Values.hpa.cpu }}
      memory: ${{ .Values.hpa.memory }}
  ingress:
    enabled: true
    hosts:
      - host: "aps.{{ .Values.common.internal_domain }}"
        paths:
          - path: /
            pathType: Prefix
  resources:
    requests:
      cpu: "{{ .Values.resources.tier3.requests.cpu }}"
      memory: "{{ .Values.resources.tier3.requests.memory }}"
    limits:
      cpu: "{{ .Values.resources.tier3.limits.cpu }}"
      memory: "{{ .Values.resources.tier3.limits.memory }}"
  properties:
    enabled: true

Group vars file:

hpa:
  enabled: true
  tier1:
    minReplicas: 1
    maxReplicas: 2
  tier2:
    minReplicas: 1
    maxReplicas: 2
  tier3:
    minReplicas: 1
    maxReplicas: 2
  tier4:
    minReplicas: 1
    maxReplicas: 2
  cpu: 50
  memory: 250

Helm template hpa.yaml file:

{{ if .Values.application.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "helm-chart.name" . }}
  namespace: {{ .Values.application.namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "helm-chart.name" . }}
  minReplicas: {{ .Values.application.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.application.autoscaling.maxReplicas }}
  metrics:
{{- include "helm-chart.autoscalingconfiguration" . | indent 3 }}
{{ end }}
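
A possible workaround (my suggestion, not something the chart above already does) is to coerce the autoscaling values to integers in the template with the int function (from Sprig, available in Helm templates), so the rendered manifest always contains numbers rather than quoted strings:

  minReplicas: {{ .Values.application.autoscaling.minReplicas | int }}
  maxReplicas: {{ .Values.application.autoscaling.maxReplicas | int }}

and likewise for the averageUtilization values inside the "helm-chart.autoscalingconfiguration" helper. Note that this only helps once the ${{ ... }} placeholders have actually been substituted with numeric values; an unrendered placeholder would be coerced to 0, which is still not a valid replica count.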

Anything else we need to know?

No response

Kubernetes version

$ kubectl version
Client Version: v1.26.0
Kustomize Version: v4.5.7
Server Version: v1.28.11-eks-db838b0

Cloud provider

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@Raghu021 Raghu021 added the kind/bug label Aug 28, 2024
@k8s-ci-robot k8s-ci-robot added the needs-sig label Aug 28, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage label Aug 28, 2024
@neolit123
Member

/sig autoscaling

@k8s-ci-robot k8s-ci-robot added sig/autoscaling and removed needs-sig labels Aug 29, 2024
@neolit123
Member

/retitle HPA: unrecognized type: int32

@k8s-ci-robot k8s-ci-robot changed the title from "unrecognized type: int32" to "HPA: unrecognized type: int32" Aug 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Nov 27, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Dec 27, 2024