CPU not fully utilized and shared with multiple Containers in one Pod #126942
Comments
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This KEP should help: kubernetes/enhancements#4678
/sig node
@ffromani thanks a lot. From that KEP, the plan is that it will be GA in 1.34, correct?
Hi, this is the initial plan. The actual GA version depends on how the KEP discussion and implementation go.
/remove-kind bug

Please take a look at the KEP that @ffromani mentioned and bring your use cases to that feature. Your input is valuable.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
What happened?
It is well known that Kubernetes workloads currently only support resource configuration (CPU, memory, ephemeral storage) at the container level, not at the pod level. As a result, each container can only use its own configured resources and cannot share resources configured for another container in the same pod, even when that other container has very low CPU usage. In telecommunication systems, different traffic models can lead to very different CPU usage per container, which in turn forces different dimensioning for each container in a single pod.
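For illustration, here is a minimal sketch of the current container-level configuration described above; the pod name, container names, images, and values are hypothetical. Each container is capped at its own limit, so a busy container cannot borrow the idle CPU of its sibling:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: traffic-handler        # hypothetical pod
spec:
  containers:
  - name: signalling           # hypothetical container, often busy
    image: example.com/signalling:latest
    resources:
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"               # capped at 2 CPUs even if the sibling is idle
        memory: 1Gi
  - name: media                # hypothetical container, often nearly idle
    image: example.com/media:latest
    resources:
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"               # these 2 CPUs cannot be borrowed by "signalling"
        memory: 1Gi
```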
This issue asks whether Kubernetes can support resource configuration at the pod level in addition to the container level. With pod-level resources, all containers could fully share the pod's CPU, memory, and other resources, similar to how a resource quota configured at the namespace level is shared by all pods in that namespace (see the sketch below).

Otherwise, in order to fully use the CPU and reduce dimensioning complexity, all containers would have to be merged into a single container per pod. Frankly, that works against cloud-native principles, for example by reducing reusability and modularity.
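A rough sketch of what pod-level resources might look like, assuming the pod-level `resources` field proposed in kubernetes/enhancements#4678; the exact field names and availability depend on how that KEP lands, and the names and values below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: traffic-handler
spec:
  resources:                   # pod-level budget shared by all containers (per the KEP sketch)
    requests:
      cpu: "4"
      memory: 2Gi
    limits:
      cpu: "4"
      memory: 2Gi
  containers:
  - name: signalling           # no per-container limit: may use up to the pod's 4 CPUs
    image: example.com/signalling:latest
  - name: media                # idle CPU here stays available to "signalling"
    image: example.com/media:latest
```

The dimensioning then happens once, at the pod level, instead of per traffic model and per container.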
What did you expect to happen?
Support resource configuration at the pod level in addition to the container level.
How can we reproduce it (as minimally and precisely as possible)?
always
Anything else we need to know?
No response
Kubernetes version
all
Cloud provider
community
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)