PodDeletionCost occasionally doesn't work #126138
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/area controller-manager
@chymy: The label(s) could not be applied. In response to this:
/area controller-manager
/sig apps
/cc @ahg-g
/sig autoscaling
Pod deletion cost does not offer any guarantees on pod deletion order.
For more details, please refer to https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost
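For concreteness, a minimal sketch of applying the annotation described in that doc (the pod name is taken from this issue):

```sh
# Lower the deletion cost of analysis-3-8hplt relative to its siblings.
# Unannotated pods default to a cost of 0; lower-cost pods are preferred
# for deletion on a best-effort basis.
kubectl annotate pod analysis-3-8hplt \
  controller.kubernetes.io/pod-deletion-cost="-1" --overwrite
```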
I can't understand why. Is it because the cache update is slow?
Hi @chymy,
Can you briefly explain why you expect the pod analysis-3-8hplt to be deleted?
Because I set the controller.kubernetes.io/pod-deletion-cost annotation to "-1" on analysis-3-8hplt, and pods with a lower deletion cost should be preferred for deletion.
Taking "best effort basis" here to mean, "support may or may not be added to older controllers like ReplicationController," is quite the stretch. "Best effort" indicates that there may be situations where it is not possible to provide a guarantee. If the RC controller is lacking support for pod deletion costs, then a best effort has not been made. There may indeed be a timing issue, as @chymy posited. The bottom line is that "best effort basis" still requires effort; hiding behind "best effort" instead of engaging with a bug report is not an effective long-term strategy.
@chymy is correct. The default pod deletion cost is 0, so a pod annotated with -1 should be preferred for deletion over its unannotated siblings.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
What happened?
I created a ReplicationController with 1 replica: analysis-3-8hplt. Then I set the replicas of the RC to 2 using the scale command, which created a new pod, analysis-3-fbcdl (2024-07-11T22:38:10.280375Z). Next, I set the annotation controller.kubernetes.io/pod-deletion-cost: "-1" on the pod analysis-3-8hplt. After the annotation was set successfully (2024-07-11T22:38:10.904066Z), I set the replicas of the RC to 1 using the scale command, but analysis-3-fbcdl was the pod that got deleted.
What did you expect to happen?
I expected the analysis-3-8hplt pod to be deleted.
How can we reproduce it (as minimally and precisely as possible)?
Refer to the description above
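A scripted version of those steps might look like the sketch below; the RC name analysis-3 is inferred from the pod names, and the manifest is a hypothetical stand-in for the reporter's workload:

```sh
#!/usr/bin/env sh
# Hypothetical minimal ReplicationController matching the pod names above.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: analysis-3
spec:
  replicas: 1
  selector:
    app: analysis-3
  template:
    metadata:
      labels:
        app: analysis-3
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
EOF

# Step 1: scale up to 2 replicas and note the name of the new pod.
kubectl scale rc analysis-3 --replicas=2
kubectl get pods -l app=analysis-3

# Step 2: annotate the ORIGINAL (oldest) pod as preferred for deletion.
OLD_POD=$(kubectl get pods -l app=analysis-3 \
  --sort-by=.metadata.creationTimestamp \
  -o jsonpath='{.items[0].metadata.name}')
kubectl annotate pod "$OLD_POD" \
  controller.kubernetes.io/pod-deletion-cost="-1" --overwrite

# Step 3: scale back down immediately and check which pod survived.
kubectl scale rc analysis-3 --replicas=1
kubectl get pods -l app=analysis-3
```

If the race discussed above is in play, running step 3 quickly after step 2 should occasionally delete the newer pod instead of the annotated one.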
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)