Is this a request for help?:
No.
What keywords did you search in Kubernetes issues before filing this one?
restart pod on the same node
Is this a BUG REPORT or FEATURE REQUEST?
Feature Request.
Use Case:
When running stateful applications like Kafka using PetSets and local storage (host volumes), it's preferable to restart a pod on the same node if it goes down (assuming the node itself is up and healthy). This avoids replicating a lot of data from the leaders and helps bring the application back into the cluster quickly. This problem doesn't exist with network storage, since the new pod can attach the same network drive from another node.
Idea:
Can we support restarting a pod on the same node where it was running before it went down, either through some kind of rescheduling policy or node/pod affinity? Are there any hooks in the scheduler that the user can use to pick or suggest the node on which the pod should be scheduled next?
Please note that this should only be a recommendation to the scheduler; the pod should be scheduled on a different node if the last node no longer meets the resource requirements or isn't healthy. Thoughts?
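For illustration, a soft preference like this could be expressed today with node affinity using `preferredDuringSchedulingIgnoredDuringExecution`, if something recorded the node the pod last ran on and stamped it into the spec. This is only a sketch of the idea, not an existing mechanism; the node name, image, and label values below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-0
spec:
  affinity:
    nodeAffinity:
      # Soft preference: the scheduler tries the previous node first,
      # but falls back to another node if it is unhealthy or full.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node-1   # hypothetical: the node this pod last ran on
  containers:
  - name: kafka
    image: kafka:latest   # placeholder image
```

The missing piece is whoever fills in that previous node name automatically when the pod is recreated, which is essentially the scheduler hook being asked about here.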