
hugepage proposal #181

Closed
wants to merge 5 commits into from

Conversation

sjenning
Contributor

Proposal for supporting applications that desire pre-allocated huge pages in Kubernetes

@derekwaynecarr @kubernetes/rh-cluster-infra @dchen1107 @vishh @jeremyeder @kubernetes/sig-node

xref old main repo PR kubernetes/kubernetes#33601

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Dec 15, 2016
architecture and making a new resource field for each size doesn't scale. Pods
can do a nodeSelector on this label to land on a system with a particular huge
page size. This is similiar to how the `beta.kubernetes.io/arch` label
operates.
Contributor

It seems like you need the request to also specify the expected node huge page size, right? Otherwise it could request 10 pages and get 10 GB on a machine that has a non-default configuration.

Is there any way to design this so the request is in bytes instead of pages?

Contributor Author

I guess my thought was that a node would only be configured/labeled with one hugepage size. We would need to quantize a value in bytes to a multiple of the hugepage size. However, from a UX perspective I can see where specifying the hugepage quantity as a resource.Quantity would be nice. Thanks!
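The quantization mentioned above can be sketched as shell arithmetic. This is a sketch only: the 100 MiB request and the 2 MiB default page size are assumed figures, not anything specified in the proposal.

```shell
# Round a byte request up to a whole number of huge pages.
# Assumption: the node's default huge page size is 2 MiB.
req_bytes=$((100 * 1024 * 1024))   # hypothetical 100 MiB request
page_bytes=$((2 * 1024 * 1024))    # 2 MiB huge page size
pages=$(( (req_bytes + page_bytes - 1) / page_bytes ))
echo "$pages pages"                # 50 pages
```

A request that is not an exact multiple (say 101 MiB) rounds up to the next whole page, which is the UX cost of accepting bytes instead of pages.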

Member

Does the memory covered by hugepages resource come out of the total memory request, or is the final memory footprint the sum of the two?

Member

I think the request should include the huge page size.

When prototyping this in kubernetes/kubernetes#44817, I used a request syntax that includes the size, similar to how it appears in sysfs:

$ ls /sys/kernel/mm/hugepages
hugepages-1048576kB  hugepages-2048kB

So the pod spec has a request like the following:

alpha.kubernetes.io/hugepages-2048kB: 512
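A minimal pod spec using that prototype syntax might look like the following. Sketch only: the pod name and image are hypothetical, and the `alpha.kubernetes.io/hugepages-2048kB` resource name comes from the kubernetes/kubernetes#44817 prototype and may change.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepage-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: example/app         # hypothetical image
    resources:
      limits:
        alpha.kubernetes.io/hugepages-2048kB: "512"   # 512 x 2048kB pages = 1GiB
```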


On x86_64, there are two huge page sizes: 2MB and 1GB. 1GB huge pages are also
called gigantic pages. 1GB must be enabled on kernel boot line with
`hugepagesz=1g`. Huge pages, especially 1GB ones, should to be allocated
Contributor

should to be -> should be


While a system may support multiple huge pages sizes, it is assumed that nodes
configured with huge pages will only use one huge page size, namely the default
page size in `cat /proc/meminfo | grep Hugepagesize`. In Linux, this is 2MB
Contributor

grep Hugepagesize /proc/meminfo :-)

because there are a variety of huge page sizes across different hardware
architecture and making a new resource field for each size doesn't scale. Pods
can do a nodeSelector on this label to land on a system with a particular huge
page size. This is similiar to how the `beta.kubernetes.io/arch` label
Contributor

similar

cAdvisor will need to be modified to return the number of available huge pages.
This is already supported in [runc/libcontainer](../../vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go)

### Phase 2: Expose huge pages in CRI
Contributor

Can you explain why this is desirable rather than just sticking with the pod-level implementation as above? In the abstract you talked about it as a pod feature and this jump is unclear.

supported: 2MB and 1GB. The design, however, should accommodate additional huge
page sizes available on other architectures.

**NOTE: This design, as currently proposed, requires the use of pod-level
Contributor

cross-reference would be good

- A sensitivity to memory access latency

Example applications include:
- Java applications can back the heap with huge pages using the `-XX:+UseLargePages` option.
Contributor

s/can/which/

limits:
  hugepages: "10"
nodeSelector:
  kubernetes.io/huge-page-size: "2MB"
Contributor

alpha.kubernetes.io ?

For the Java use case the JVM maps the huge pages as a shared memory segment and
memlocks them to prevent the system from moving or swapping them out.

There are several issues here:
Contributor

How about adding something about what Kubernetes users need to do to mitigate these issues? (e.g. special node configuration?). I almost wonder if we'd want to distinguish more clearly in the API between the availability of anonymous vs shared memory, given these additional requirements for the latter case.

Huge page support is needed for many large memory HPC workloads to achieve
acceptable performance levels.

This proposal is part of a larger effort to better support High Performance
Member

HPC is too loaded a term. It's really just performance-sensitive workloads: JVMs with large heaps, stateful applications with large in-memory caches, even memcached, etc.


While a system may support multiple huge pages sizes, it is assumed that nodes
configured with huge pages will only use one huge page size, namely the default
page size in `cat /proc/meminfo | grep Hugepagesize`. In Linux, this is 2MB


Why only a single pagesize per node? As far as I understand hugepages, the dTLB (on x86_64) is able to cache 2 MiB and 1 GiB pages separately on the L1. Given that is true, it is wasteful not to utilize both sizes per node. (It'd be interesting to study how the unified L2 dTLB is affected by mixed pages though.)

This proposal only includes pre-allocated huge pages configured on the node by
the administrator at boot time or by manual dynamic allocation. It does not
discuss the kubelet attempting to allocate huge pages dynamically in an attempt
to accommodate a scheduling pod or the use of Transparent Huge Pages (THP). THP


Would you expect the dynamic allocation to not happen at all or to be added as another proposal? Although not perfectly reliable due to memory fragmentation, it can still serve as a nice to have. The scheduler should prefer the nodes with preallocated pages available, but if there are none it could try to allocate pages on a node with low memory fragmentation.


Also, how are the hugepages going to be allocated? Is that outside of k8s' scope?

Member

I provide a sample daemonset in #837 that can pre-allocate huge pages. If pods cannot schedule due to a lack of available nodes with sufficient pre-allocated huge pages, something similar can run to allocate additional pages (or the daemonset configuration could be tweaked for a pool of nodes to increase the size). Either way, that management piece is considered out of scope.
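For reference, outside of boot-time configuration, the kernel's sysfs interface is how such a daemonset (or an administrator) would pre-allocate pages dynamically. A sketch, assuming 2048kB pages; the write requires root, and the kernel may allocate fewer pages than requested if memory is fragmented:

```shell
# Request 512 x 2048kB huge pages from the kernel (root only):
#   echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
#
# The result can be inspected without privileges; HugePages_Total shows
# how many pages the kernel actually managed to allocate:
grep -i '^hugepage' /proc/meminfo
```

Comparing the value written with the resulting HugePages_Total is how callers detect a partial allocation.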

pages. For this reason, some applications may be designed to (or recommend) use
pre-allocated huge pages instead of THP.

The proposal is also limited to x86_64 support where two huge page sizes are


Limiting the proposal to single arch is unnecessary as long as it's generic enough, which it is in this state.

@derekwaynecarr
Member

closed in favor of #837

@cblecker cblecker deleted the hugepage-proposal branch August 18, 2017 18:59