
zfs-localpv and lvm-localpv controller pod requires restart on single node setup after upgrade to latest version #3751

Closed
@abhilashshetty04

Description

Problem Statement:

Setup: Single worker node
In the previous version of lvm-localpv and zfs-localpv, the CSI controller was a StatefulSet with the pod anti-affinity rule below:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - openebs-lvm-controller
      topologyKey: "kubernetes.io/hostname"

In the newer version, the controller manifest was changed to a Deployment. With the default RollingUpdate strategy, the replacement controller pod is created while the old pod is still running, and the anti-affinity rule prevents it from being scheduled on the single node, so the new pod stays Pending.
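A possible chart-side mitigation (a sketch, not a confirmed fix; the Deployment name here is inferred from the pod names below) is to switch the Deployment's update strategy to Recreate, so the old controller pod is terminated before its replacement is scheduled:

# Sketch: clear the RollingUpdate config and use Recreate, so the old pod
# is removed before the new one is created (avoids the anti-affinity deadlock
# on single-node clusters). Deployment name is inferred from the pod list below.
kubectl -n openebs patch deployment openebs-zfslocalpv-zfs-localpv-controller \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'

Setting rollingUpdate to null is needed because the API rejects type Recreate while a rollingUpdate block is still present.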

The relevant pod list and events follow:

root@xxx:~/zfs-localpv# kubectl get pod -n openebs
NAME                                                         READY   STATUS    RESTARTS   AGE
openebs-zfslocalpv-zfs-localpv-controller-6dd6754489-bnrwh   5/5     Running   0          3m16s
openebs-zfslocalpv-zfs-localpv-controller-78c7bb488d-h44p9   0/5     Pending   0          2m51s

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  42s (x2 over 72s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't satisfy existing pods anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't satisfy existing pods anti-affinity rules..
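
As an immediate workaround, deleting the old Running controller pod (name taken from the output above) removes the anti-affinity conflict so the Pending replacement can be scheduled:

# Workaround: delete the old controller pod so the Pending one can schedule.
kubectl -n openebs delete pod openebs-zfslocalpv-zfs-localpv-controller-6dd6754489-bnrwh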
