Add values to Helm chart for using a shared credentials file #341
Hi @juanrh. Thanks for your PR. I'm waiting for an aws-controllers-k8s member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is really great, @juanrh. I'll approve, and give others a chance to check it out and they can merge it.
/ok-to-test
@juanrh: The following test failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This is awesome! Thanks @juanrh
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: juanrh, RedbackThomson, vijtrip2

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
…trollers-k8s#341)

Issue #, if available: [1330](aws-controllers-k8s/community#1330)

Description of changes:

In the [documentation on using a shared credentials file](https://aws-controllers-k8s.github.io/community/docs/user-docs/authentication/#use-a-shared-credentials-file) there are instructions for mounting a secret containing AWS credentials on the pod of an ACK controller so that it can use those credentials. However, for this controller there is no way to specify which secret to use when following the [installation instructions with Helm](https://aws-controllers-k8s.github.io/community/docs/tutorials/rds-example/#install-the-ack-service-controller-for-rds). This change adds new values settings that can be used to specify that secret.

## Manual testing

- Render the template:

```bash
# Generate helm chart
cd aws-controllers-k8s/code-generator
./scripts/build-controller-release.sh rds

# Render template
cd aws-controllers-k8s/rds-controller
# new volume, volume mount, and env vars are visible
helm -n testns template rds-ack helm/ --debug \
  --set aws.credentials.secretName=aws-creds \
  --set aws.credentials.profile=ack > render_with_secret.yaml
# no new stuff is visible
helm -n testns template rds-ack helm/ --debug > render_without_secret.yaml
```

- Deploy to minikube:

```bash
# Create secret with AWS credentials as described in
# https://aws-controllers-k8s.github.io/community/docs/user-docs/authentication/#use-a-shared-credentials-file
## Assuming AWS_SHARED_CREDENTIALS_FILE is properly set
CREDS_CONTENT=$(cat ${AWS_SHARED_CREDENTIALS_FILE} | sed 's/^/ /')
kubectl create namespace testns
kubectl -n testns apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
stringData:
  credentials: |
$CREDS_CONTENT
EOF

# Install chart as in
# https://aws-controllers-k8s.github.io/community/docs/tutorials/rds-example/#install-the-ack-service-controller-for-rds
# Without the new values settings, the controller fails to find the credentials
helm -n testns install rds-ack helm/ --set=aws.region=us-east-1
$ kubectl -n testns logs -f -l "app.kubernetes.io/instance=rds-ack"
1.6552236370473711e+09  ERROR   setup   Unable to create controller manager     {"aws.service": "rds", "error": "unable to determine account ID: unable to get caller identity: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"}

# Using the additional values settings it is able to get the credentials
helm -n testns upgrade rds-ack helm/ \
  --set=aws.region=us-east-1 \
  --set aws.credentials.secretName=aws-creds \
  --set aws.credentials.profile=ack
$ kubectl -n testns logs -f -l "app.kubernetes.io/instance=rds-ack"
1.655223683774505e+09   INFO    controller.globalcluster        Starting EventSource    {"reconciler group": "rds.services.k8s.aws", "reconciler kind": "GlobalCluster", "source": "kind source: *v1alpha1.GlobalCluster"}
...
```

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
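The `sed` step in the deploy snippet above prefixes every line of the credentials file with spaces so that, when substituted under `credentials: |` in the Secret manifest, it forms a valid YAML block scalar. A standalone sketch of just that step, using a dummy credentials file rather than real keys (the four-space indent width is an assumption; it must match the nesting of `credentials: |` in your manifest):

```shell
# Create a throwaway dummy credentials file (fake values, [ack] profile
# matching the aws.credentials.profile setting used in the testing steps).
creds_file=$(mktemp)
cat > "$creds_file" <<'EOF'
[ack]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecretkey
EOF

# Indent each line by four spaces so it can sit under "credentials: |".
CREDS_CONTENT=$(sed 's/^/    /' "$creds_file")

# Every output line now starts with four spaces.
printf '%s\n' "$CREDS_CONTENT"

rm -f "$creds_file"
```

If the indentation does not match the manifest's nesting, `kubectl apply` rejects the generated YAML, so it is worth eyeballing the substituted heredoc (e.g. with `kubectl apply --dry-run=client -o yaml`) before applying it.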
Fixes aws-controllers-k8s/community#1330
Issue #, if available:
1330
Description of changes:
In the documentation on using a shared credentials file there are instructions for mounting a secret containing AWS credentials on the pod of an ACK controller so that it can use those credentials. However, for this controller there is no way to specify which secret to use when following the installation instructions with Helm. This change adds new values settings that can be used to specify that secret.
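Based on the `--set` flags used in the manual testing steps, the new settings would look roughly like this in a values file (the key names are taken from those flags, but the surrounding structure is an assumption; check the chart's `values.yaml` for the authoritative layout):

```yaml
aws:
  region: us-east-1          # existing setting
  credentials:
    secretName: aws-creds    # name of the Secret holding the shared credentials file
    profile: ack             # profile within that file for the controller to use
```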
Manual testing
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.