[WIP] Add ECR credential provider to nodepool controller #4377
Conversation
This reverts commit 1927260. Signed-off-by: Enrique Llorente <ellorent@redhat.com>
…-old-scripts HOSTEDCP-1446: hack: remove old arguments and scripts
BUCKET_NAME is required to be exported to be accessed by the process that creates policy.json
…p-job HOSTEDCP-1402: cmd/infra/aws/destroy: allow using component credentials
The goal of this refactor is to reduce the complexity in the command-line tooling. Overall, the changes here remove duplicative structures that copied data around and co-locate the logic with the data, instead of first aggregating all data into one uber-structure and then conditionally acting on that structure. These refactors have a number of benefits:

- locality of behavior: in the past, it was very difficult if not impossible to determine where a value was used, as it would be bound to a flag in one package, copied around between container structs a couple of times, then have some generic logic act on the presence or absence of the value to e.g. change a field on the HostedCluster. Simply reading the generic logic was often not enough to understand what was going on, as many of the conditional branches in the example fixture code could only ever trigger for one specific platform, and you'd never know unless you traced how the example options uber-struct had its fields set in every provider.
- clear go-to-definition: as a knock-on effect of the above, there is now *one* structure that holds a command-line flag, and it's trivial to use the LSP to determine where that flag is used and how.
- composability: as exemplified in the KubeVirt NodePool code, we are able to compose commands as necessary. When commands re-use the same arguments with the same flags and the same validation logic, there's no need to copy things around and re-implement anything; by localizing flag binding, validation, and option completion, we gain small, composable parts that we can use to build larger commands.

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
We only bind flags in one routine now, explicitly breaking out the set of flags that should only be exposed to developers in the `hypershift` CLI. The net effect of this change is to expose `--base-domain-prefix` and `--external-dns-domain` to users of `hcp`. This change also shows how to change the defaults in an option set for a command: the `hcp create cluster` command has a unique default for the control plane availability policy, and it's now evident that this is the case since it has to be done explicitly after building the default set of options. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
…te-resource-creation HOSTEDCP-1542: cmd/cluster: refactor to remove example fixtures
This enables data plane -> kubernetes.svc traffic and any external DNS traffic to use a common router for all HCs in Azure. The initial goal is to start transitioning the code structure (reconcilers, APIs, helpers...) towards a shared-ingress-oriented solution using haproxy, as presented, for simplicity. The main changes are:

- Change the existing data plane haproxy to forward requests to the management KAS SVC IP.
- Add a new haproxy in the data plane that listens on the management KAS SVC IP and forwards requests to the remote shared ingress.
- Introduce a new common shared ingress on the management side for the whole HC fleet, which uses the proxy protocol to discriminate traffic originating via data plane kubernetes.svc by the destination IP of the KAS SVC in the management cluster.
- Update endpoint routes to use the new shared ingress router and stop deploying the per-HC router.

Follow-ups: PDBs, network policies, evaluate alternative or ad hoc solutions to haproxy, explore how to remove data plane hops, proxy integration, private/public support...
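The routing idea above can be sketched in a minimal haproxy fragment (hypothetical addresses and backend names, not the actual HyperShift manifests): because the shared frontend accepts the PROXY protocol, it still sees the original destination address, i.e. the per-HC KAS service IP, and can route on it.

```
# Fleet-wide shared ingress (sketch). With `accept-proxy`, `dst` below is
# the destination carried in the PROXY protocol header -- the KAS SVC IP
# that the data-plane haproxy originally targeted.
frontend shared-ingress
    mode tcp
    bind :8443 accept-proxy
    acl to_hc_a dst 172.30.0.10   # KAS SVC IP of hosted cluster A (example)
    acl to_hc_b dst 172.30.0.11   # KAS SVC IP of hosted cluster B (example)
    use_backend hc_a_kas if to_hc_a
    use_backend hc_b_kas if to_hc_b

backend hc_a_kas
    mode tcp
    server kas 10.128.0.10:6443   # example KAS endpoint for cluster A

backend hc_b_kas
    mode tcp
    server kas 10.128.0.20:6443   # example KAS endpoint for cluster B
```

One shared frontend per management cluster thus replaces the per-HC router, with the PROXY header preserving enough information to demultiplex the fleet's traffic.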
HOSTEDCP-1721: Enable shared ingress for Azure
- fixed platform-specific validation not being executed
- ResourceGroupName, VNetID and SubnetID
Signed-off-by: David Vossel <davidvossel@gmail.com>
Signed-off-by: Juan Manuel Parrilla Madrid <jparrill@redhat.com>
Humans make mistakes and the compiler is here to help. The pattern in this commit is esoteric, but widespread in the Kubernetes ecosystem for creating options structs that *must* be validated and completed before being used. The type gymnastics are not ideal, but the end experience is not degraded since we're embedding everything. In total, the pattern has a couple of benefits:

- enforcement of validation and completion
- clear distinction between user input (via flags) and computed input
- obvious flow for re-use in code consumers

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
We need to seed the random readers we use in these tests or the output will never be testable. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
We need the `--render` output to be deterministic in order to easily test what we're doing. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
This seems to be an oversight, as we never validated the options in the past. Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
HOSTEDCP-1542: Fixed infra-id not being defaulted first
* CLI options for cluster create & destroy. More options will come with Node pools and, in the future, with more use cases.
* Set a default Machine Network when creating the cluster with the CLI.
* Add the CLI options to e2e.
OSASINFRA-3538: openstack: cluster CLI
OSASINFRA-3539: Add ipam to cluster-api assets
OSASINFRA-3312: Implements OpenStack Node pools
Force-pushed from 9de8ee3 to d1891ce
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 4f9484c to a5b7d17
@hectorakemp: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting `/remove-lifecycle stale`. If this issue is safe to close now please do so with `/close`. /lifecycle stale

Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting `/remove-lifecycle rotten`. If this issue is safe to close now please do so with `/close`. /lifecycle rotten

Rotten issues close after 30d of inactivity. Reopen the issue by commenting `/reopen`. /close
@openshift-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Resolves https://issues.redhat.com/browse/OSD-24468
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, use `fixes #<issue_number>(, fixes #<issue_number>, ...)` format, where issue_number might be a GitHub issue, or a Jira story): Fixes #
Checklist