
Kubectl/API Refusal to delete ReplicaSet #47960

Closed
@mccormd

Description

/kind bug
What happened:
Tried to delete a ReplicaSet whose container was failing (stuck on a volume mount) and got the following error:

kubectl delete rs prometheus-4088412010 -n kube-system 
error: Scaling the resource failed with: ReplicaSet.extensions "prometheus-4088412010" is invalid: metadata.finalizers[0]: Invalid value: "foregroundDeletion": name is neither a standard finalizer name nor is it fully qualified; Current resource version 11276377
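
For reference, the offending finalizer can be confirmed directly. This is a hypothetical check rather than output I captured, assuming jsonpath output is available in this kubectl version:

kubectl get rs prometheus-4088412010 -n kube-system -o jsonpath='{.metadata.finalizers}'
# should print: [foregroundDeletion]

My reading of the error is that kubectl delete first tries to scale the ReplicaSet down, and that update is rejected because the server-side validation does not accept foregroundDeletion as a standard finalizer name.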

The resource itself looks odd; it already carries a deletionTimestamp and the foregroundDeletion finalizer:

# kubectl get rs prometheus-4088412010 -n kube-system -o yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "1"
    deployment.kubernetes.io/max-replicas: "2"
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: 2017-06-23T10:10:25Z
  deletionGracePeriodSeconds: 0
  deletionTimestamp: 2017-06-23T12:00:39Z
  finalizers:
  - foregroundDeletion
  generation: 2
  labels:
    app: prometheus
    pod-template-hash: "4088412010"
  name: prometheus-4088412010
  namespace: kube-system
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: prometheus
    uid: a8870531-5297-11e7-a80e-0a80992c354e
  resourceVersion: "11276377"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/replicasets/prometheus-4088412010
  uid: 2ccd6bf2-57fc-11e7-bc44-0a851b7f13b8
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      pod-template-hash: "4088412010"
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus
        pod-template-hash: "4088412010"
      name: prometheus
    spec:
      containers:
      - args:
        - -storage.local.retention=360h0m0s
        - -storage.local.memory-chunks=500000
        - -config.file=/etc/prometheus/prometheus.yml
        image: quay.io/prometheus/prometheus:v1.7.1
        imagePullPolicy: IfNotPresent
        name: prometheus
        ports:
        - containerPort: 9090
          name: web
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/prometheus
          name: config-volume
        - mountPath: /prometheus
          name: data-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: prometheus
      serviceAccountName: prometheus
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: prometheus
        name: config-volume
      - name: data-volume
        persistentVolumeClaim:
          claimName: k8smetrics-prometheus
status:
  fullyLabeledReplicas: 1
  observedGeneration: 1
  replicas: 1
# kubectl describe rs prometheus-4088412010 -n kube-system        
Name:		prometheus-4088412010
Namespace:	kube-system
Selector:	app=prometheus,pod-template-hash=4088412010
Labels:		app=prometheus
		pod-template-hash=4088412010
Annotations:	deployment.kubernetes.io/desired-replicas=1
		deployment.kubernetes.io/max-replicas=2
		deployment.kubernetes.io/revision=2
Replicas:	1 current / 1 desired
Pods Status:	1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:		app=prometheus
			pod-template-hash=4088412010
  Service Account:	prometheus
  Containers:
   prometheus:
    Image:	quay.io/prometheus/prometheus:v1.7.1
    Port:	9090/TCP
    Args:
      -storage.local.retention=360h0m0s
      -storage.local.memory-chunks=500000
      -config.file=/etc/prometheus/prometheus.yml
    Environment:	<none>
    Mounts:
      /etc/prometheus from config-volume (rw)
      /prometheus from data-volume (rw)
  Volumes:
   config-volume:
    Type:	ConfigMap (a volume populated by a ConfigMap)
    Name:	prometheus
    Optional:	false
   data-volume:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	k8smetrics-prometheus
    ReadOnly:	false
Events:		<none>

What you expected to happen:
I was expecting the ReplicaSet to be deleted.

How to reproduce it (as minimally and precisely as possible):

Created with this manifest (Service plus Deployment):

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  namespace: kube-system
  labels:
    name: prometheus
    addonmanager.kubernetes.io/mode: EnsureExists
  name: prometheus
spec:
  selector:
    app: prometheus
  type: ClusterIP
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      name: prometheus
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: quay.io/prometheus/prometheus:v1.7.1
        args:
          - '-storage.local.retention=360h0m0s'
          - '-storage.local.memory-chunks=500000'
          - '-config.file=/etc/prometheus/prometheus.yml'
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
        - name: data-volume
          mountPath: /prometheus
      volumes:
      - name: config-volume
        configMap:
          name: prometheus
      - name: data-volume
        persistentVolumeClaim:
          claimName: k8smetrics-prometheus
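
My guess (unverified) is that the foregroundDeletion finalizer was added to the ReplicaSet by an earlier cascading delete with foreground propagation, something along these lines (a hypothetical reconstruction via kubectl proxy):

kubectl proxy &
# A DELETE with propagationPolicy=Foreground sets deletionTimestamp and adds
# the foregroundDeletion finalizer, which then trips validation on later updates
curl -X DELETE \
  http://127.0.0.1:8001/apis/extensions/v1beta1/namespaces/kube-system/replicasets/prometheus-4088412010 \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'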

Anything else we need to know?:
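A workaround I have not yet verified, assuming the API server accepts a patch that clears the invalid finalizer:

# clear the finalizer so the pending foreground deletion can complete
kubectl patch rs prometheus-4088412010 -n kube-system --type merge \
  -p '{"metadata":{"finalizers":null}}'

# or delete without cascading, which skips the scale-to-zero update entirely
kubectl delete rs prometheus-4088412010 -n kube-system --cascade=false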

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: AWS

  • OS (e.g. from /etc/os-release):
    NAME="Container Linux by CoreOS"
    ID=coreos
    VERSION=1409.5.0
    VERSION_ID=1409.5.0
    BUILD_ID=2017-06-22-2222
    PRETTY_NAME="Container Linux by CoreOS 1409.5.0 (Ladybug)"
    ANSI_COLOR="38;5;75"
    HOME_URL="https://coreos.com/"
    BUG_REPORT_URL="https://issues.coreos.com"
    COREOS_BOARD="amd64-usr"

  • Kernel (e.g. uname -a):
    Linux ip-10-16-5-136.eu-west-1.compute.internal 4.11.6-coreos-r1 #1 SMP Thu Jun 22 22:04:38 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz GenuineIntel GNU/Linux

  • Install tools: roll-your-own

  • Others:

Labels

kind/bug, sig/api-machinery
