RFC: dynamic resource allocation prototype #1

Closed
wants to merge 58 commits
Changes from 1 commit
58 commits
e389b27
Validate dry-run and force flags can not be used same time in replace
ardaguclu Jun 1, 2022
b722056
add unit test
zhoumingcheng Jun 21, 2022
466c4d2
pkg/kubelet: skip long test on short mode
rata Jun 24, 2022
a4f966a
Graduate SeccompDefault feature to beta
saschagrunert Jun 27, 2022
7a95525
Create ControllerRevision lifecycle e2e test
heyste May 9, 2022
93701ce
cleanup: Removes duplicate utils code
claudiubelu Jun 29, 2022
14708f2
agnhost: Check symlink target's permissions for Windows
claudiubelu Aug 27, 2021
61ebfdb
Add cases for when --timeout=0 and tests
mpuckett159 Jul 1, 2022
d3092cd
scheduler: do not update sched.nextStartNodeIndex when evaluate nomin…
SataQiu Jul 4, 2022
88c6deb
Update `godoc.org` to `pkg.go.dev` in kubeadm
Jul 7, 2022
8aeee52
kubeadm: De-dup the confirmation on the interactive cmds
chendave Jul 6, 2022
3581e30
build: update to klog v2.70.1
pohly Jul 7, 2022
e41f2a1
Change error messages
ardaguclu Jul 7, 2022
8d2c737
Merge pull request #110997 from mengjiao-liu/update-godoc-link
k8s-ci-robot Jul 7, 2022
9d68640
Merge pull request #110923 from mpuckett159/fix/add-wait-timers
k8s-ci-robot Jul 7, 2022
6adee9d
Merge pull request #110947 from SataQiu/scheduler-20220704
k8s-ci-robot Jul 7, 2022
d2a94fe
Remove SIG Scheduling approvers from reviewers
alculquicondor Jul 7, 2022
e9b96b1
Merge pull request #111004 from alculquicondor/patch-4
k8s-ci-robot Jul 7, 2022
34b9f0d
Merge pull request #110998 from chendave/de_dup_kubeadm
k8s-ci-robot Jul 7, 2022
f218f7b
Computation of the StorageVersionHash use overridden storage versions…
249043822 Jul 1, 2022
6d5cccf
Merge pull request #110122 from ii/create-controller-revision-test
k8s-ci-robot Jul 8, 2022
2b657a0
Merge pull request #110805 from saschagrunert/seccomp-default-beta
k8s-ci-robot Jul 8, 2022
8e62fd2
Merge pull request #111001 from pohly/klog-update
k8s-ci-robot Jul 8, 2022
6732550
Merge pull request #110877 from claudiubelu/agnhost-windows-file-perm…
k8s-ci-robot Jul 8, 2022
9509211
Merge pull request #110904 from 249043822/storageversion
k8s-ci-robot Jul 8, 2022
857458c
update ginkgo from v1 to v2 and gomega to 1.19.0
chendave Mar 29, 2022
05c0f4a
Define the const of `GINKGO_PANIC` directly
chendave Mar 29, 2022
2eb8e9e
`ginkgo.It` doesn't have a `timeout` arg anymore
chendave Mar 29, 2022
ece0bb3
Adapt to new type of `GinkgoWriter`
chendave Mar 29, 2022
375b2a5
Build `Ginkgo` binary
chendave Mar 29, 2022
f792256
e2e: adapt output tests to Ginkgo v2 and Gomega 1.19
chendave Mar 30, 2022
b57bade
Switch to use `dry-run` option to generate test spec
chendave Apr 15, 2022
dd58016
Implement `DetailsReporter` report within `ReportAfterSuite`
chendave Apr 15, 2022
20498fd
Generate conformance test spec with `offset` decorator
chendave Apr 15, 2022
2084f3c
Drop all stacktrace related validtion
chendave Apr 15, 2022
2f3028c
Define the `timeout` to `24h` for Ginkgo V2
chendave Apr 20, 2022
fd4b5b6
Stop using the deprecated method `CurrentGinkgoTestDescription`
chendave Apr 24, 2022
46a3954
Migrate `ProgressReporter` to `Ginkgo` V2
chendave May 2, 2022
5ac8105
Set Ginkgo config by the method of `GinkgoConfiguration()`
chendave May 2, 2022
82ac6be
Custom reporter of Junit report is no longer needed
chendave May 2, 2022
3833695
Redirect `klog` out to `GinkgoWriter`
chendave Apr 29, 2022
05c513d
`ginkgo.By` can only be used inside a runnable node
chendave May 12, 2022
f7427d0
build: add ginkgo aliases for WHAT
pohly Jun 8, 2022
50d1b6c
Add Ginkgo v1 to the list of unwanted dependencies
chendave Jul 4, 2022
ebcc583
Merge pull request #110326 from ardaguclu/add-validation-replace
k8s-ci-robot Jul 8, 2022
c05d185
Merge pull request #110683 from zhoumingcheng/master-v2
k8s-ci-robot Jul 8, 2022
0dc32b1
Merge pull request #110774 from kinvolk/rata/kubelet-short-tests
k8s-ci-robot Jul 8, 2022
80b2848
Merge pull request #110860 from claudiubelu/utils-cleanup
k8s-ci-robot Jul 8, 2022
4569e64
Merge pull request #109111 from chendave/ginkgo_upstream
k8s-ci-robot Jul 8, 2022
eccf7c6
kube features: add DynamicResourceAllocation
pohly Mar 22, 2022
4a5f531
initial dynamic resource allocation API types
pohly Feb 26, 2022
997a22d
api: generated files for dynamic resource allocation
pohly Jul 4, 2022
7cd8d96
dynamic resource allocation: implement printers
pohly Mar 1, 2022
a331efc
cdi: example driver
pohly Mar 18, 2022
bc8c200
component-helpers: add ResourceClaim support code
pohly Mar 22, 2022
de44076
ResourceClaim controller: clone from pkg/controller/volume/ephemeral
pohly Mar 22, 2022
32f6d7a
kube-controller-manager: add ResourceClaim controller
pohly Mar 22, 2022
3d85228
scheduler: add dynamic resource allocation plugin
pohly Apr 12, 2022
api: generated files for dynamic resource allocation
Created with "make generated_files update".
pohly committed Jul 8, 2022
commit 997a22d20072d1c33e2a3724f9391641302bd259
9,102 changes: 6,502 additions & 2,600 deletions api/openapi-spec/swagger.json

Large diffs are not rendered by default.

10,989 changes: 8,037 additions & 2,952 deletions api/openapi-spec/v3/api__v1_openapi.json

Large diffs are not rendered by default.

143 changes: 143 additions & 0 deletions api/openapi-spec/v3/apis__apps__v1_openapi.json
Original file line number Diff line number Diff line change
Expand Up @@ -1551,6 +1551,24 @@
],
"type": "object"
},
"io.k8s.api.core.v1.ClaimSource": {
"description": "ClaimSource either references one separate ResourceClaim by name or embeds a template for a ResourceClaim, but never both.\n\nAdditional options might get added in the future, so code using this struct must error out when none of the options that it supports are set.",
"properties": {
"resourceClaimName": {
"description": "The resource is independent of the Pod and defined by a separate ResourceClaim in the same namespace as the Pod. Either this or Template must be set, but not both.",
"type": "string"
},
"template": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ResourceClaimTemplate"
}
],
"description": "Will be used to create a stand-alone ResourceClaim to allocate the resource. The pod in which this PodResource is embedded will be the owner of the ResourceClaim, i.e. the ResourceClaim will be deleted together with the pod. The name of the ResourceClaim will be `<pod name>-<resource name>` where `<resource name>` is the name PodResource.Name Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (for example, too long).\n\nAn existing ResourceClaim with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated resource by mistake. Scheduling is then blocked until the unrelated ResourceClaim is removed. If such a pre-created ResourceClaim is meant to be used by the pod, the ResourceClaim has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.\n\nRunning the pod also gets blocked by a wrong ownership. This should be even less likely because of the prior scheduling check, but could happen if a user force-deletes or modifies the ResourceClaim.\n\nThis field is read-only and no changes will be made by Kubernetes to the ResourceClaim after it has been created. Either this or ResourceClaimName must be set, but not both."
}
},
"type": "object"
},
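The ClaimSource schema above is mutually exclusive by convention rather than by type: exactly one of `resourceClaimName` or `template` should be set. A hypothetical pod-level fragment illustrating both forms of this prototype API might look like the following (the claim names, the pre-created ResourceClaim, and the `example.com-fast-scratch` class are all made up for illustration):

```yaml
# Illustrative only: one PodResourceClaim referencing an existing claim by
# name, and one embedding a template. A single ClaimSource never sets both.
resourceClaims:
- name: gpu                        # referenced by containers via this name
  claim:
    resourceClaimName: shared-gpu  # pre-created ResourceClaim in the same namespace
- name: scratch
  claim:
    template:                      # controller creates "<pod name>-scratch"
      spec:
        resourceClassName: example.com-fast-scratch
```

Per the description above, a template-generated claim is owned by the pod and deleted with it, while a named claim outlives the pod.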
"io.k8s.api.core.v1.ConfigMapEnvSource": {
"description": "ConfigMapEnvSource selects a ConfigMap to populate the environment variables with.\n\nThe contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.",
"properties": {
Expand Down Expand Up @@ -3291,6 +3309,25 @@
],
"type": "object"
},
"io.k8s.api.core.v1.PodResourceClaim": {
"description": "PodResourceClaim references exactly one ResourceClaim, either by name or by embedding a template for a ResourceClaim that will get created by the resource claim controller in kube-controller-manager.",
"properties": {
"claim": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ClaimSource"
}
],
"default": {},
"description": "Claim determines where to find the claim."
},
"name": {
"description": "A name under which this resource can be referenced by the containers.",
"type": "string"
}
},
"type": "object"
},
"io.k8s.api.core.v1.PodSecurityContext": {
"description": "PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.",
"properties": {
Expand Down Expand Up @@ -3545,6 +3582,24 @@
},
"type": "array"
},
"resourceClaims": {
"description": "ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which reference them by name.",
"items": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.PodResourceClaim"
}
],
"default": {}
},
"type": "array",
"x-kubernetes-list-map-keys": [
"name"
],
"x-kubernetes-list-type": "map",
"x-kubernetes-patch-merge-key": "name",
"x-kubernetes-patch-strategy": "merge,retainKeys"
},
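The `resourceClaims` list in PodSpec only declares claims; a container must opt in by listing a claim's name under its resource requirements. A minimal sketch of a complete Pod under this prototype schema (pod and claim names are invented; only the field layout follows the schemas in this diff):

```yaml
# Sketch: the pod declares one claim, and the container references it by
# name in resources.claims, which this prototype models as a string set.
apiVersion: v1
kind: Pod
metadata:
  name: claim-demo
spec:
  resourceClaims:
  - name: accel
    claim:
      resourceClaimName: my-accelerator
  containers:
  - name: worker
    image: registry.k8s.io/pause:3.7
    resources:
      claims:
      - accel   # plain string entry matching PodSpec.ResourceClaims[].name
```

Because `resourceClaims` is an `x-kubernetes-list-type: map` keyed on `name`, strategic-merge patches merge entries by claim name rather than replacing the whole list.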
"restartPolicy": {
"description": "Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy\n\n",
"type": "string"
Expand Down Expand Up @@ -3890,6 +3945,85 @@
],
"type": "object"
},
"io.k8s.api.core.v1.ResourceClaimParametersReference": {
"description": "ResourceClaimParametersReference contains enough information to let you locate the parameters for a ResourceClaim. The object must be in the same namespace as the ResourceClaim.",
"properties": {
"apiVersion": {
"default": "",
"description": "APIVersion is the group and version for the resource being referenced or just the version for the core API.",
"type": "string"
},
"kind": {
"default": "",
"description": "Kind is the type of resource being referenced. This is the same value as in the parameter object's metadata, for example \"ConfigMap\".",
"type": "string"
},
"name": {
"default": "",
"description": "Name is the name of resource being referenced.",
"type": "string"
}
},
"required": [
"apiVersion",
"kind",
"name"
],
"type": "object"
},
"io.k8s.api.core.v1.ResourceClaimSpec": {
"description": "ResourceClaimSpec defines how a resource is to be allocated.",
"properties": {
"allocationMode": {
"description": "Allocation can start immediately or when a Pod wants to use the resource. Waiting for a Pod is the default.",
"type": "string"
},
"parameters": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ResourceClaimParametersReference"
}
],
"description": "Parameters references a separate object with arbitrary parameters that will be used by the driver when allocating a resource for the claim.\n\nThe object must be in the same namespace as the ResourceClaim."
},
"resourceClassName": {
"default": "",
"description": "ResourceClassName references the driver and additional parameters via the name of a ResourceClass that was created as part of the driver deployment.\n\nThe apiserver does not check that the referenced class exists, but a driver-specific admission webhook may require that and is allowed to reject claims where the class is missing.",
"type": "string"
}
},
"required": [
"resourceClassName"
],
"type": "object"
},
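A ResourceClaim can also be created directly rather than through a pod template. The sketch below assumes the prototype serves these types in the core `v1` group (consistent with the `io.k8s.api.core.v1.*` schema names above); the class and parameter names are invented, and the `Immediate` mode name mirrors storage binding modes and is an assumption, since the schema only documents "immediately or when a Pod wants to use the resource":

```yaml
# Sketch of a standalone ResourceClaim under this prototype API.
apiVersion: v1
kind: ResourceClaim
metadata:
  name: my-accelerator
spec:
  resourceClassName: example.com-gpu  # existence is not checked by the apiserver
  allocationMode: Immediate           # assumed name; waiting for a pod is the default
  parameters:                         # optional driver-specific parameters
    apiVersion: v1
    kind: ConfigMap
    name: gpu-parameters
```

Only `resourceClassName` is required; `parameters` must live in the same namespace as the claim.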
"io.k8s.api.core.v1.ResourceClaimTemplate": {
"description": "ResourceClaimTemplate is used to produce ResourceClaim objects by embedding such a template in the ResourceRequirements of a Pod.",
"properties": {
"metadata": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"
}
],
"default": {},
"description": "May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation."
},
"spec": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ResourceClaimSpec"
}
],
"default": {},
"description": "The specification for the ResourceClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a ResourceClaim are also valid here."
}
},
"required": [
"spec"
],
"type": "object"
},
"io.k8s.api.core.v1.ResourceFieldSelector": {
"description": "ResourceFieldSelector represents container resources (cpu, memory) and their output format",
"properties": {
Expand Down Expand Up @@ -3921,6 +4055,15 @@
"io.k8s.api.core.v1.ResourceRequirements": {
"description": "ResourceRequirements describes the compute resource requirements.",
"properties": {
"claims": {
"description": "The entries are the names of resources in PodSpec.ResourceClaims that are used by the container.",
"items": {
"default": "",
"type": "string"
},
"type": "array",
"x-kubernetes-list-type": "set"
},
"limits": {
"additionalProperties": {
"allOf": [
Expand Down
143 changes: 143 additions & 0 deletions api/openapi-spec/v3/apis__batch__v1_openapi.json
Original file line number Diff line number Diff line change
Expand Up @@ -747,6 +747,24 @@
],
"type": "object"
},
"io.k8s.api.core.v1.ClaimSource": {
"description": "ClaimSource either references one separate ResourceClaim by name or embeds a template for a ResourceClaim, but never both.\n\nAdditional options might get added in the future, so code using this struct must error out when none of the options that it supports are set.",
"properties": {
"resourceClaimName": {
"description": "The resource is independent of the Pod and defined by a separate ResourceClaim in the same namespace as the Pod. Either this or Template must be set, but not both.",
"type": "string"
},
"template": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ResourceClaimTemplate"
}
],
"description": "Will be used to create a stand-alone ResourceClaim to allocate the resource. The pod in which this PodResource is embedded will be the owner of the ResourceClaim, i.e. the ResourceClaim will be deleted together with the pod. The name of the ResourceClaim will be `<pod name>-<resource name>` where `<resource name>` is the name PodResource.Name Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (for example, too long).\n\nAn existing ResourceClaim with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated resource by mistake. Scheduling is then blocked until the unrelated ResourceClaim is removed. If such a pre-created ResourceClaim is meant to be used by the pod, the ResourceClaim has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.\n\nRunning the pod also gets blocked by a wrong ownership. This should be even less likely because of the prior scheduling check, but could happen if a user force-deletes or modifies the ResourceClaim.\n\nThis field is read-only and no changes will be made by Kubernetes to the ResourceClaim after it has been created. Either this or ResourceClaimName must be set, but not both."
}
},
"type": "object"
},
"io.k8s.api.core.v1.ConfigMapEnvSource": {
"description": "ConfigMapEnvSource selects a ConfigMap to populate the environment variables with.\n\nThe contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.",
"properties": {
Expand Down Expand Up @@ -2370,6 +2388,25 @@
],
"type": "object"
},
"io.k8s.api.core.v1.PodResourceClaim": {
"description": "PodResourceClaim references exactly one ResourceClaim, either by name or by embedding a template for a ResourceClaim that will get created by the resource claim controller in kube-controller-manager.",
"properties": {
"claim": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ClaimSource"
}
],
"default": {},
"description": "Claim determines where to find the claim."
},
"name": {
"description": "A name under which this resource can be referenced by the containers.",
"type": "string"
}
},
"type": "object"
},
"io.k8s.api.core.v1.PodSecurityContext": {
"description": "PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.",
"properties": {
Expand Down Expand Up @@ -2624,6 +2661,24 @@
},
"type": "array"
},
"resourceClaims": {
"description": "ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which reference them by name.",
"items": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.PodResourceClaim"
}
],
"default": {}
},
"type": "array",
"x-kubernetes-list-map-keys": [
"name"
],
"x-kubernetes-list-type": "map",
"x-kubernetes-patch-merge-key": "name",
"x-kubernetes-patch-strategy": "merge,retainKeys"
},
"restartPolicy": {
"description": "Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy\n\n",
"type": "string"
Expand Down Expand Up @@ -2969,6 +3024,85 @@
],
"type": "object"
},
"io.k8s.api.core.v1.ResourceClaimParametersReference": {
"description": "ResourceClaimParametersReference contains enough information to let you locate the parameters for a ResourceClaim. The object must be in the same namespace as the ResourceClaim.",
"properties": {
"apiVersion": {
"default": "",
"description": "APIVersion is the group and version for the resource being referenced or just the version for the core API.",
"type": "string"
},
"kind": {
"default": "",
"description": "Kind is the type of resource being referenced. This is the same value as in the parameter object's metadata, for example \"ConfigMap\".",
"type": "string"
},
"name": {
"default": "",
"description": "Name is the name of resource being referenced.",
"type": "string"
}
},
"required": [
"apiVersion",
"kind",
"name"
],
"type": "object"
},
"io.k8s.api.core.v1.ResourceClaimSpec": {
"description": "ResourceClaimSpec defines how a resource is to be allocated.",
"properties": {
"allocationMode": {
"description": "Allocation can start immediately or when a Pod wants to use the resource. Waiting for a Pod is the default.",
"type": "string"
},
"parameters": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ResourceClaimParametersReference"
}
],
"description": "Parameters references a separate object with arbitrary parameters that will be used by the driver when allocating a resource for the claim.\n\nThe object must be in the same namespace as the ResourceClaim."
},
"resourceClassName": {
"default": "",
"description": "ResourceClassName references the driver and additional parameters via the name of a ResourceClass that was created as part of the driver deployment.\n\nThe apiserver does not check that the referenced class exists, but a driver-specific admission webhook may require that and is allowed to reject claims where the class is missing.",
"type": "string"
}
},
"required": [
"resourceClassName"
],
"type": "object"
},
"io.k8s.api.core.v1.ResourceClaimTemplate": {
"description": "ResourceClaimTemplate is used to produce ResourceClaim objects by embedding such a template in the ResourceRequirements of a Pod.",
"properties": {
"metadata": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"
}
],
"default": {},
"description": "May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation."
},
"spec": {
"allOf": [
{
"$ref": "#/components/schemas/io.k8s.api.core.v1.ResourceClaimSpec"
}
],
"default": {},
"description": "The specification for the ResourceClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a ResourceClaim are also valid here."
}
},
"required": [
"spec"
],
"type": "object"
},
"io.k8s.api.core.v1.ResourceFieldSelector": {
"description": "ResourceFieldSelector represents container resources (cpu, memory) and their output format",
"properties": {
Expand Down Expand Up @@ -3000,6 +3134,15 @@
"io.k8s.api.core.v1.ResourceRequirements": {
"description": "ResourceRequirements describes the compute resource requirements.",
"properties": {
"claims": {
"description": "The entries are the names of resources in PodSpec.ResourceClaims that are used by the container.",
"items": {
"default": "",
"type": "string"
},
"type": "array",
"x-kubernetes-list-type": "set"
},
"limits": {
"additionalProperties": {
"allOf": [
Expand Down