What keywords did you search in Kubernetes issues before filing this one?
restart pod on the same node
Is this a BUG REPORT or FEATURE REQUEST?
Feature Request.
Use Case:
When running stateful applications like Kafka using PetSets and local storage (host volumes), it's preferable to restart a pod on the same node if it goes down (assuming the node itself is up and healthy). This avoids replicating a lot of data from leaders and helps bring the application back into the cluster quickly. This problem doesn't exist with network storage, though, since the new pod can access the same network drive from another node.
Idea:
Can we support restarting a pod on the same node it was running on before going down, either through some kind of rescheduling policy or through node/pod affinity? Are there any hooks in the scheduler through which the user can pick or suggest the node on which the pod should be scheduled next?
Please note that this should only be a recommendation to the scheduler: the pod should be scheduled on a different node if the last node no longer meets the resource requirements or isn't healthy. Thoughts?
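For context, the closest mechanism that exists today (to my knowledge) is the `spec.nodeName` field, which hard-pins a pod to a named node and bypasses the scheduler entirely, which is the opposite of the soft recommendation asked for here. A minimal sketch, where the pod name, node name, and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-0        # placeholder pod name
spec:
  nodeName: node-42    # hard-pins the pod to this node, bypassing the scheduler;
                       # the pod cannot start anywhere else if node-42 is unhealthy
  containers:
  - name: kafka
    image: kafka:latest   # placeholder image
```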
@nebril I have read about node affinity, but that's not enough for the feature I've suggested. Node affinity only helps schedule a pod onto a given set of nodes, but how do you make sure a restarted pod comes up on the same node it was running on prior to going down?
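To illustrate: a required nodeAffinity rule like the sketch below (node label value is a placeholder) can pin a pod to one specific node by hostname, but it is static. Nothing records which node the pod last ran on, and a hard rule also prevents falling back to another node if that one becomes unhealthy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-0          # placeholder pod name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node-42    # placeholder; would have to be filled in manually per pod
  containers:
  - name: kafka
    image: kafka:latest  # placeholder image
```

What the feature request needs is for this node choice to be remembered and applied automatically, and as a preference rather than a requirement.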