
Add ability to check whether PVC storage requirements are fulfilled by the node #123

Open

maunavarro opened this issue Nov 15, 2021 · 8 comments

Labels: enhancement (New feature or request), project/community, wontfix (This will not be worked on)

maunavarro commented Nov 15, 2021

Describe the problem/challenge you have

It seems that a PVC can claim, and be provisioned with, more storage than the disk capacity of any particular node.
For example, the following PVC requesting 5G should be denied if the available capacity of the nodes in the cluster is only 1G:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc
  namespace: openebs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5G

As it currently stands, the above PVC will be provisioned and bound to a PV.

Describe the solution you'd like

We would like the provisioner to limit provisioning to the free storage that actually exists on the node.
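
For illustration, the naïve form of such a check would simply compare the requested size against the free space on the hostpath filesystem of the candidate node before provisioning. The sketch below is only the idea, not the provisioner's actual logic; /var/openebs/local is the usual default hostpath, and 5G matches the PVC above:

```sh
# Hypothetical pre-provisioning check, run on the candidate node.
BASE_DIR=/var/openebs/local                    # assumed default hostpath
REQUESTED=$((5 * 1000 * 1000 * 1000))          # 5G in bytes, as in the PVC above
AVAILABLE=$(df --output=avail -B1 "$BASE_DIR" | tail -n 1)
if [ "$AVAILABLE" -lt "$REQUESTED" ]; then
  echo "deny: requested ${REQUESTED} B, only ${AVAILABLE} B free on ${BASE_DIR}" >&2
  exit 1
fi
echo "ok: enough free space to provision"
```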

Environment:

  • OpenEBS version:
$ kubectl get po -n openebs --show-labels
NAME                                           READY   STATUS    RESTARTS   AGE    LABELS
openebs-localpv-provisioner-b559bc6c4-k87hl    1/1     Running   0          4h4m   name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.0.0,pod-template-hash=b559bc6c4
openebs-ndm-cluster-exporter-b48f4c59d-f24d9   1/1     Running   0          4h4m   name=openebs-ndm-cluster-exporter,openebs.io/component-name=ndm-cluster-exporter,openebs.io/version=3.0.0,pod-template-hash=b48f4c59d
openebs-ndm-node-exporter-22mv4                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-2g78z                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-4h526                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-9p2xv                1/1     Running   3          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-c999w                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-d5xsq                1/1     Running   3          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-fnl8m                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-h2q88                1/1     Running   4          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-kcrs7                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-vclcz                1/1     Running   2          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-x6ww5                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-node-exporter-xczq2                1/1     Running   1          14d    controller-revision-hash=5fc6c5df65,name=openebs-ndm-node-exporter,openebs.io/component-name=ndm-node-exporter,openebs.io/version=3.0.0,pod-template-generation=1
openebs-ndm-operator-77d54c5c69-pt85n          1/1     Running   0          89m    name=openebs-ndm-operator,openebs.io/component-name=ndm-operator,openebs.io/version=3.0.0,pod-template-hash=77d54c5c69
  • Kubernetes version :
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-4+2076e5554ee7d2", GitCommit:"2076e5554ee7d2e0ac57857d161db7fd6fad915a", GitTreeState:"clean", BuildDate:"2021-01-05T22:46:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g: cat /etc/os-release):
~# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • kernel:
    linux
@maunavarro changed the title from "Add ability to check where PVC storage requirements are fulfilled by the node" to "Add ability to check whether PVC storage requirements are fulfilled by the node" on Nov 15, 2021
@niladrih self-assigned this on Nov 16, 2021
@trathborne

As mentioned in a Slack #openebs thread:

There are a few dimensions to the problem (commands to inspect each one are sketched after the list):

  • (any filesystem) Current available space ← naïve solution to this would be a great start!
  • (XFS) Quotas & their usage (and maybe an overcommit percentage)
  • (LVM) Space available in VG
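
Each of these can be inspected on the node with standard tools; the mount point /var/openebs/local and the VG name lvmvg below are placeholders:

```sh
# (any filesystem) current available space on the hostpath mount
df -h /var/openebs/local

# (XFS) project quotas and their usage (needs the prjquota mount option)
xfs_quota -x -c 'report -pbih' /var/openebs/local

# (LVM) free space in the volume group
vgs -o vg_name,vg_size,vg_free lvmvg
```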

Affinity-by-IOPS might be another way to schedule. As I understand it, affinities can have a ranking metric, so I could say 'allocate this PV where it just fits', 'where there is the most space', or 'where other local PVs are generating fewer IOPS'.


niladrih commented Feb 9, 2022

The hostpath provisioner's volumes are thin-provisioned (essentially, the filesystem handles allocation).
Taking an example...
Let's say there's a filesystem with 10 GiB of storage capacity, and a hostpath volume with a 7 GiB capacity request is created on it. It succeeds. The application process then consumes a small amount of space, say 1 GiB.
If another volume request for, say, 5 GiB comes in after that, it will also succeed, because the available capacity of the filesystem will be observed to be (10-1) GiB = 9 GiB!
The hostpath provisioner does not pre-allocate the requested capacity.
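
The same behaviour can be reproduced at the filesystem level with a sparse file; this is not provisioner-specific, it just illustrates thin allocation:

```sh
# A 7 GiB "volume" that consumes almost no space until it is written to.
truncate -s 7G /tmp/demo-volume
ls -lh /tmp/demo-volume   # apparent size: 7 GiB
du -h /tmp/demo-volume    # actual usage: ~0
df -h /tmp                # free space is unchanged until real writes happen
```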

@maunavarro @trathborne -- Does the above hostpath provisioner behaviour suffice for your use case? Thick provisioning is beyond the scope of the hostpath provisioner's design.

@trathborne

This is working fine for me because I am using XFS quotas and I am ok with overcommitting.

I am more concerned about balance. Consider this case: I have 3 nodes, each with 10 GiB of storage. 4 PVCs for 6 GiB each come in: 2 nodes get 1 PVC each, and one node gets 2 PVCs. Now another PVC for 6 GiB comes in. What guarantees that it does not end up on the node that is already overcommitted?


niladrih commented Feb 12, 2022

@trathborne -- The LVM-based LocalPV CSI driver is better suited for capacity-aware scheduling use cases. Here's a link: github.com/openebs/lvm-localpv
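
For reference, a StorageClass along the lines of the lvm-localpv README looks like this (the volume group name lvmvg is a placeholder for whatever VG exists on the nodes):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"   # placeholder: the VG that exists on the nodes
provisioner: local.csi.openebs.io
```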

@trathborne

@niladrih I was going to say that LVM doesn't let me overcommit, but then I found https://github.com/openebs/lvm-localpv/blob/2d5196996e97edb82d1840900416835b0b729328/design/monitoring/capacity_monitor.md#thin-logical-volume ... thanks!
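
Roughly, the thin-LV route in that document comes down to something like this (VG name lvmvg assumed):

```sh
# Thin pools allow overcommit: a thin volume can be larger than its pool.
lvcreate -L 5G -T lvmvg/thinpool          # create a 5 GiB thin pool in VG lvmvg
lvcreate -V 8G -T lvmvg/thinpool -n vol1  # 8 GiB thin volume inside the 5 GiB pool
lvs lvmvg                                 # Data% shows how full the pool really is
```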


niladrih commented Jun 5, 2024

Hi @maunavarro, this issue has been open for a while. Are you still facing this issue?

This feature is typically found in CSI-driver-based storage plugins (e.g. LocalPV ZFS or LocalPV LVM). It is unfortunately beyond the scope of the out-of-tree provisioner library that this hostpath provisioner uses.

That being said, my viewpoint might be pigeonholed. Please feel free to open a design proposal or continue the discussion here. Marking this as 'wontfix' for now.

@niladrih added the wontfix (This will not be worked on) and enhancement (New feature or request) labels on Jun 5, 2024

avishnu commented Oct 3, 2024

@niladrih if there is no possible way to achieve this in the external provisioner, should this be closed?


niladrih commented Oct 3, 2024

A LocalPV Rawfile-style pre-allocated volume comes to mind.

Another alternative could be a filesystem quota.
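
Roughly, on an XFS-backed hostpath the two alternatives look like this (the pvc-example directory and project ID 42 are made up for the sketch; project quotas need the prjquota mount option):

```sh
# 1) Rawfile-style: pre-allocate the full capacity up front, so the request
#    fails immediately if the node does not have the space.
fallocate -l 5G /var/openebs/local/pvc-example/disk.img

# 2) Filesystem quota: cap the volume directory instead of pre-allocating.
xfs_quota -x -c 'project -s -p /var/openebs/local/pvc-example 42' /var/openebs/local
xfs_quota -x -c 'limit -p bhard=5g 42' /var/openebs/local
```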
