Cosi v1alpha2 changes #4599
base: master
Conversation
BlaineEXE
commented
Apr 25, 2024
- One-line PR description: modify doc to reflect v1alpha2 changes discussed in the COSI working group.
- Issue link: Object Storage Support (COSI) #1979
- Other comments:
Force-pushed from d22d1bd to 511d6cc
<!-- TODO: make sure there is a mechanism to ensure that the user creating the BucketClaim has
access to the bucketclass -->
@xing-yang I am curious how other projects like VolumeSnapshotter handle situations like this.
How can the COSI controller check that the user (or namespace, or serviceaccount) has proper permissions to read/use the BucketClass?
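One hedged sketch of an answer: the controller could create a `SubjectAccessReview` against the API server, asking whether the claiming identity can read the class (the user and resource names below are illustrative, not from this KEP):

```yaml
# Sketch only: a SubjectAccessReview the COSI controller could submit to ask
# whether the BucketClaim creator may "get" the referenced BucketClass.
# The user and object names here are hypothetical.
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:ns1:app-sa   # hypothetical claim creator
  resourceAttributes:
    group: objectstorage.k8s.io
    resource: bucketclasses
    verb: get
    name: bc1
# The API server fills in status.allowed when this object is created.
```

This mirrors how admission-adjacent controllers typically delegate authorization decisions instead of re-implementing RBAC themselves.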
```
BucketClaim - bcl1                BucketClass - bc1
|------------------------------|  |--------------------------------|
| metadata:                    |  | deletionPolicy: delete         |
|   namespace: ns1             |  | driverName: s3.amazonaws.com   |
| spec:                        |  | parameters:                    |
|   bucketClassName: bc1      |  |   key: value                   |
|   protocols:                 |  |--------------------------------|
|   - s3                       |
|------------------------------|
```
These block-style things seem like they must be created with a markdown tool that I don't know about. Since this is a non-standard tool, I am also going to replace all instances of these with simpler built-in markdown blocks like this:
```yaml
apiVersion: objectstorage.k8s.io/v1alpha2
kind: BucketClass
metadata:
name: silver
spec:
deletionPolicy: delete
driverName: s3.amazonaws.com
parameters:
key: value
```
###### Multi-protocol example with keys
<!-- TODO: what if one protocol supports IAM-style but not the other?
do we need auth type individually per-protocol?
do we advertise KEY if any don't support IAM? -->
<!-- TODO: is IAM an object-storage-wide term, or is it S3-specific?
Can we agree on a generic terminology, like type=ServiceAccount? -->
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <BucketAccess.spec.credentialsSecretName>
  namespace: <BucketAccess.metadata.namespace>
data:
  COSI_BUCKET_NAME: bc-$uuid
  COSI_AUTHENTICATION_TYPE: KEY
  COSI_PROTOCOLS: Azure,S3
  COSI_AZURE_ENDPOINT: https://blob.mysite.com
  COSI_AZURE_ACCESS_TOKEN: AZURETOKENEXAMPLE
  COSI_AZURE_EXPIRY_TIMESTAMP: 2006-01-02T15:04:05Z07:00
  COSI_S3_ENDPOINT: https://s3.mysite.com
  COSI_S3_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
  COSI_S3_ACCESS_SECRET_KEY: wJalrXUtnFEMI/K...
  COSI_S3_REGION: us-west-1
```
@shanduur here's the more complex version of what new secret might look like. Lmk if you have feedback.
I like it! A little bit more data, and it still has a clear naming convention.
One more thing we should consider is Google Cloud Storage support. It is a valid value for `Protocols`, yet there is no place for adding any GCS credentials to the secret.
- Do we want to continue to support GCS in the COSI spec?
- Do we need to consider additional fields for GCS, or do we leave it as is?
I'm having some trouble with this as well. I have access to plenty of S3 experts via the Ceph project, and we have Akash involved from Azure, but we don't have any GCP experts. I wonder if we could somehow introduce a more plugin-like method that would allow different/new object storage interfaces to be defined but in a way that can't be abused. I'll try to think more about that.
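Purely as a hypothetical illustration (none of these keys exist in any agreed spec), GCS could follow the same flat naming convention as the S3 and Azure fields above, e.g. by carrying a service-account key blob:

```yaml
# Hypothetical GCS fields mirroring the COSI_S3_* / COSI_AZURE_* convention.
# These key names are invented for illustration and are not part of the spec.
COSI_PROTOCOLS: GCS
COSI_GCS_PROJECT_ID: my-project
COSI_GCS_SERVICE_ACCOUNT_JSON: <base64-encoded service account key file>
```

A plugin-like scheme might reserve the `COSI_<PROTOCOL>_*` prefix per protocol so new backends can define their own fields without colliding with existing ones.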
<!-- TODO: this example shows a problem that might arise from our current plan changes ...
what if a user mounts multiple buckets to the same pod?
they can refer to the secret fields individually, of course, and define unique names for each bucket's fields
but is that too much work? -->
In the `PodSpec` below, I am seeing what could be a complication for our plans to change the secret format.
If users have multiple sets of credentials, they will have to do manual work to make sure their config vars (mounted either as env vars, files, or whatever) don't overlap with each other.
At the end of the day, this is still something that k8s users have to be somewhat mindful of, so maybe it doesn't matter too much. But it's something to consider.
@shanduur I'm curious your thoughts
In my opinion it is no different from the situation that happens now. If there are 2 secrets to be mounted, those will still have to be either renamed or mounted in different directories. I am in favour of the proposed design change with separate fields, as this collision can be easily avoided at the manifest level, without the need for init containers or changes to the application.
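To illustrate the manifest-level disambiguation (secret and pod names below are hypothetical): two credentials secrets can be exposed to one container with distinct env-var prefixes via `envFrom`, so the flat `COSI_*` keys never collide:

```yaml
# Sketch: one pod consuming two COSI credentials secrets without key
# collisions. Secret names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
      envFrom:
        - prefix: BUCKET1_          # yields e.g. BUCKET1_COSI_S3_ENDPOINT
          secretRef:
            name: bucket1-creds
        - prefix: BUCKET2_          # yields e.g. BUCKET2_COSI_S3_ENDPOINT
          secretRef:
            name: bucket2-creds
```

Mounting each secret as a volume at a distinct path would achieve the same separation for file-based consumption.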
Hello, I'm trying to use a COSI S3 bucket within a Tekton pipeline, and this is the awscli example they provide: https://github.com/tektoncd/catalog/tree/main/task/aws-cli/0.2
I could totally work with this enhancement as well, because awscli takes env vars that I can set in my containers from the ones exposed by the new design. The old design is troublesome, as it's JSON in a secret property.
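For example, a hedged sketch of what that consumption could look like in a Tekton-style step (the secret name and image tag are hypothetical, and the `COSI_*` keys are from the proposed format above):

```yaml
# Sketch: mapping the proposed flat secret keys onto the env vars awscli
# expects. The secret name "my-bucket-creds" is hypothetical.
steps:
  - name: aws-cli
    image: amazon/aws-cli   # hypothetical image reference
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: my-bucket-creds
            key: COSI_S3_ACCESS_KEY_ID
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: my-bucket-creds
            key: COSI_S3_ACCESS_SECRET_KEY
      - name: AWS_DEFAULT_REGION
        valueFrom:
          secretKeyRef:
            name: my-bucket-creds
            key: COSI_S3_REGION
```

With the old JSON-in-a-property format, an init container or wrapper script would be needed to parse the blob before the tool could run.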
Force-pushed from 511d6cc to 5aa494b
@BlaineEXE: The following test failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
COSI has a few design changes to make which may involve some API breakages. Update the current KEP with new design details, and update the design spec to v1alpha2. Signed-off-by: Blaine Gardner <blaine.gardner@ibm.com>
Force-pushed from 5aa494b to 52275b7
// BucketClaimName is the name of the BucketClaim.
BucketClaimName string
Some future thinking, regarding a single `BucketAccess` for multiple `BucketClaims`:
```go
// BucketClaimName is the name of the BucketClaim.
BucketClaimName string
```
suggested change:
```go
// BucketClaimReferences holds references to multiple BucketClaims,
// indicating which claims share this BucketAccess. This relationship allows
// for efficient management of access permissions across related BucketClaims.
BucketClaimReferences []*corev1.ObjectReference
```
Hmm. This is a good idea. Before we started on v1alpha2, I was trying to see how we could accomplish this without API changes. But if we can modify the API, maybe it would be best. I'll think about this also, and probably include this suggestion. Thanks!
We can also use validation on the API side, to ensure that the array has at least one item.
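For instance, a sketch of what that could look like in the CRD's OpenAPI v3 schema (field name taken from the suggestion above; this is not generated output):

```yaml
# Sketch: CRD OpenAPI validation requiring at least one claim reference.
bucketClaimReferences:
  type: array
  minItems: 1
  items:
    type: object
```

With kubebuilder-based codegen, the equivalent would be a `+kubebuilder:validation:MinItems=1` marker on the Go field.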
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: BlaineEXE The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.