
CPU not fully utilized and shared with multiple Containers in one Pod #126942

Open
ryanlyy opened this issue Aug 27, 2024 · 8 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@ryanlyy

ryanlyy commented Aug 27, 2024

What happened?

It is well known that Kubernetes workloads currently only support resource configuration (CPU, memory, ephemeral storage) at the container level, not at the pod level. As a result, a container can only use the resources configured for it and cannot share the resources configured for another container in the same pod, even when CPU usage in that container is very low. In telecommunication systems, different traffic models have different CPU usage patterns, which leads to different dimensioning for each container in a single pod.

This issue asks whether Kubernetes can also support resource configuration at the pod level, in addition to the container level. With pod-level resources, all containers could fully share the pod's CPU/memory/etc., similar to how a resource quota configured at the namespace level is shared by all pods in that namespace.

Otherwise, in order to fully use the CPU and other resources and to reduce dimensioning complexity, all containers would have to be merged into a single container in the pod. Frankly, that violates cloud-native principles, for example by reducing reusability and modularization.
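
To illustrate the current container-level model, here is a minimal sketch (the pod name, container names, images, and resource values are all hypothetical): each container is capped at its own CPU limit and cannot borrow the unused share of its sibling.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: traffic-pod            # hypothetical example
spec:
  containers:
  - name: signaling            # may be mostly idle under some traffic models
    image: registry.example.com/signaling:1.0
    resources:
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"               # unused CPU here cannot be borrowed by "media"
        memory: 1Gi
  - name: media                # may be CPU-bound under the same traffic model
    image: registry.example.com/media:1.0
    resources:
      requests:
        cpu: "4"
        memory: 2Gi
      limits:
        cpu: "4"               # throttled at 4 CPUs even while "signaling" is idle
        memory: 2Gi
```

Because CPU limits are enforced per container (via the CFS quota), "media" is throttled at its own limit even when "signaling" is idle, which is what forces the per-container over-dimensioning described above.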

What did you expect to happen?

Support resource configuration at the pod level in addition to the container level.
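
For illustration only, a rough sketch of what a pod-level resource budget shared by all containers could look like; the actual field name and API shape would be decided by the relevant enhancement proposal, and all names and values here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: traffic-pod
spec:
  # Hypothetical pod-level budget shared by all containers in the pod.
  resources:
    requests:
      cpu: "6"
      memory: 3Gi
    limits:
      cpu: "6"
      memory: 3Gi
  containers:
  - name: signaling
    image: registry.example.com/signaling:1.0
  - name: media
    image: registry.example.com/media:1.0
```

Either container could then use the pod's full CPU budget while its sibling is idle, instead of each container being dimensioned for its own worst case.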

How can we reproduce it (as minimally and precisely as possible)?

always

Anything else we need to know?

No response

Kubernetes version

all

Cloud provider

community

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@ryanlyy ryanlyy added the kind/bug Categorizes issue or PR as related to a bug. label Aug 27, 2024
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 27, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ffromani
Contributor

this KEP should help: kubernetes/enhancements#4678

@neolit123
Member

/sig node

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Aug 27, 2024
@ryanlyy
Author

ryanlyy commented Aug 28, 2024

@ffromani thanks a lot. From that KEP, the plan is:
Enhancement target (which target equals to which milestone):
Alpha release target (x.y): 1.31
Beta release target (x.y): 1.32
Stable release target (x.y): 1.34

So in 1.34 it will be GA, correct?

@ffromani
Contributor

@ffromani thanks a lot. From that KEP, the plan is: Enhancement target (which target equals to which milestone): Alpha release target (x.y): 1.31 Beta release target (x.y): 1.32 Stable release target (x.y): 1.34

So in 1.34 it will be GA, correct?

Hi, this is the initial plan. The actual GA version depends on how the KEP discussion and implementation go.

@kannon92
Contributor

/remove-kind bug
/kind feature

Please take a look at the KEP that @ffromani mentioned and bring your use cases to that feature. Your input is valuable.

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/bug Categorizes issue or PR as related to a bug. labels Aug 28, 2024
@kannon92 kannon92 moved this from Triage to Done in SIG Node Bugs Aug 28, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 26, 2024