
When 'hostPort' is unset on a pre-existing deployment using hostNetwork, a new replicaset is spawned but the pre-existing replicaset is not deleted #126879

Open
@peterochodo

Description

What happened?

  • I prepared an nginx container listening on port 9033.
  • I deployed it with a Deployment configured with 'hostNetwork: true' and an explicit 'hostPort: 9033'.
  • I edited the deployment manifest, deleting only the 'hostPort' entry, and applied the edited manifest.
  • A second ReplicaSet was created, but its pod got stuck in 'Pending'. The pre-existing ReplicaSet and its pod were not deleted as expected; the old pod kept running.
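As a sketch of why the rollout deadlocks: with replicas: 1, maxSurge: 1, maxUnavailable: 0 (the strategy in the manifest below), the controller may run at most 2 pods and must keep at least 1 Ready, so until the new pod becomes Ready it can neither surge further nor delete the old pod. A minimal illustration of that arithmetic (not the actual controller code; can_scale_up_new and can_scale_down_old are hypothetical helpers):

```python
# Illustrative sketch of the rolling-update bounds implied by this
# manifest's strategy; the real logic lives in kube-controller-manager.

REPLICAS = 1
MAX_SURGE = 1
MAX_UNAVAILABLE = 0

def can_scale_up_new(total_pods: int) -> bool:
    # Total pods (old + new) may not exceed replicas + maxSurge.
    return total_pods + 1 <= REPLICAS + MAX_SURGE

def can_scale_down_old(ready_pods: int) -> bool:
    # Deleting one old pod must not drop Ready pods below
    # replicas - maxUnavailable.
    return ready_pods - 1 >= REPLICAS - MAX_UNAVAILABLE

# State observed in the issue: one old pod Ready, one new pod Pending.
ready, total = 1, 2
print(can_scale_up_new(total))    # False: cannot create more pods
print(can_scale_down_old(ready))  # False: cannot delete the old pod
```

Both checks fail in the observed state, so the controller is stuck until the new pod becomes Ready, which it never does.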
$ diff manifest-original.yaml manifest-tweak.yaml
32d31
<           hostPort: 9033
$ KUBECONFIG=kconf kubectl apply -f manifest-original.yaml
deployment.apps/myapp created
$ KUBECONFIG=kconf kubectl get rs -n my-system
NAME                                               DESIRED   CURRENT   READY   AGE
myapp-55758c985                                    1         1         1       8s

$ KUBECONFIG=kconf kubectl get pods -n my-system
NAME                                                     READY   STATUS    RESTARTS   AGE
myapp-55758c985-dft2b                                    1/1     Running   0          20s

$ KUBECONFIG=kconf kubectl apply -f manifest-tweak.yaml 
deployment.apps/myapp configured

$ KUBECONFIG=kconf kubectl get rs -n my-system
NAME                                               DESIRED   CURRENT   READY   AGE
myapp-5564fdd866                                   1         1         0       7s
myapp-55758c985                                    1         1         1       59s

$ KUBECONFIG=kconf kubectl get pods -n my-system
NAME                                                     READY   STATUS    RESTARTS   AGE
myapp-5564fdd866-2wwbg                                   0/1     Pending   0          18s
myapp-55758c985-dft2b                                    1/1     Running   0          70s

$ cat manifest-original.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: my-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mgr
        image: docker-local.artifactory.eng.yadayada.com/tilt/nginx:v0 
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9033
          hostPort: 9033
          name: api
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: false
          runAsNonRoot: false
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: my-sa
      serviceAccountName: my-sa
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
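A likely reason the new pod still conflicts on port 9033: for pods with hostNetwork: true, the API server defaults an unset hostPort to the containerPort. If so, deleting the hostPort line changes the pod-template hash (triggering a new ReplicaSet) without actually freeing the port. A sketch of what the persisted template for the new ReplicaSet may then contain, assuming that defaulting behavior:

```yaml
ports:
- containerPort: 9033
  hostPort: 9033   # re-added by API-server defaulting, not by the manifest
  name: api
  protocol: TCP
```

The new pod would then still require host port 9033, which the old pod holds on the single control-plane node selected by the nodeSelector.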


What did you expect to happen?

  • I expected the pre-existing ReplicaSet and its pod to be deleted, and a new ReplicaSet created whose pod runs successfully.

  • I did NOT expect the pre-existing ReplicaSet and its pod to persist while the new ReplicaSet's pod stays stuck in 'Pending'.
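Until this is resolved, one possible workaround (an assumption on my part, not verified against this exact cluster) is a strategy that deletes the old pod before creating its replacement, since two hostNetwork pods cannot share port 9033 on the same node:

```yaml
# Hypothetical workaround sketch: Recreate tears down the old pod first,
# avoiding the host-port clash, at the cost of brief downtime.
strategy:
  type: Recreate
```

Setting maxSurge: 0 with maxUnavailable: 1 under RollingUpdate should have the same effect for a single-replica Deployment.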

How can we reproduce it (as minimally and precisely as possible)?

  • Apply manifest-original.yaml, then apply manifest-tweak.yaml (identical except the 'hostPort: 9033' line is removed); see the terminal output in the "What happened?" section above.

Anything else we need to know?

No response

Kubernetes version

$ KUBECONFIG=kconf kubectl version
Client Version: v1.29.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.10

Cloud provider

vsphere

OS version

# On Linux:
$ cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

$ uname -a
Linux bug-repro-grpch-jrrd4 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

Labels

  • kind/bug: Categorizes issue or PR as related to a bug.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • sig/apps: Categorizes an issue or PR as relevant to SIG Apps.
