ghproxy: add instructions on how to use #10538
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: nikhita. If they are not already assigned, you can assign the PR to them by writing. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
To use ghProxy for your prow instance,

- Create a `StorageClass` using [`gce-ssd-retain_storageclass.yaml`](/prow/cluster/gce-ssd-retain_storageclass.yaml)
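For reference, a `StorageClass` along these lines is presumably what the referenced file defines (a minimal sketch, assuming GCE persistent disks; the `metadata.name` is hypothetical and the exact fields may differ from the actual `gce-ssd-retain_storageclass.yaml`):

```yaml
# Sketch of a GCE SSD StorageClass with a Retain reclaim policy.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-retain                  # hypothetical name; check the real file
provisioner: kubernetes.io/gce-pd   # GCE persistent disk provisioner
parameters:
  type: pd-ssd                      # use SSD-backed disks
reclaimPolicy: Retain               # keep the PV (and disk) when the PVC is deleted
```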
What is the benefit of the storageclass here?
It's used in the PVC mentioned below.
Okay, sorry, let me rephrase my question: what is the advantage of using a `StorageClass` with `reclaimPolicy: Retain` here?
If you redeploy ghproxy and a new deployment ends up reclaiming the data, we want that data to not have been deleted.
Doesn't that require manual intervention to produce that effect? If the PVC was deleted, at least the `PersistentVolume.Spec.ClaimRef.UID` field needs to be cleared to make the PV available again to be bound to the same claim.

AFAIK, `reclaimPolicy: Retain` only means that the PV is not deleted when the PVC is deleted. If the PVC is recreated, then a new PV is provisioned (and a new persistent disk on GCP). If the PVC is not deleted and only the deployment is recreated, then the PVC stays bound to the same PV regardless of the `reclaimPolicy` set.
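For illustration, rebinding a retained volume does take a manual step. One common approach (a sketch, not taken from this PR; the PV name is a placeholder) is to clear the claim reference so the released PV becomes available again:

```sh
# After its PVC is deleted, a retained PV sits in the "Released" phase and
# will not be bound by a new claim until its claimRef is cleared.
kubectl get pv                                                   # find the released PV
kubectl patch pv <pv-name> --type merge -p '{"spec":{"claimRef":null}}'
# The PV then returns to "Available" and a recreated PVC can bind to it,
# subject to the usual capacity/StorageClass/selector matching rules.
```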
Interesting. My understanding of PV binding is definitely shaky. @cjwagner, thoughts?
Yeah, that makes a lot of sense. I guess we could remove that and just keep the `StorageClass` for the SSD type?
[here](https://github.com/kubernetes/test-infra/blob/6cd6c2332d493b4d141822cbaa3ffe5806f64825/prow/cluster/hook_deployment.yaml#L43).
To make sure that the client is able to bypass cache if it is temporarily unavailable,
you can additionally use `--github-endpoint=https://api.github.com`. The endpoints are used in the order
What about "[...] in the order they are mentioned; the first endpoint that does not return a connection error will be used"?
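To make the ordering concrete, the flag is simply repeated, and the proxy endpoint is listed before the direct API endpoint so it is tried first (a sketch of container args, assuming a Service named `ghproxy`; not copied from this PR):

```yaml
# Repeated --github-endpoint flags are tried in the order listed:
# the ghproxy cache first, then api.github.com as a fallback if the
# proxy is unreachable.
args:
- --github-endpoint=http://ghproxy
- --github-endpoint=https://api.github.com
```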
## How to use

To use ghProxy for your prow instance,
Instead of words here, why not:
kubectl apply -f prow/cluster/gce-ssd-retain_storageclass.yaml
kubectl apply -f prow/cluster/ghproxy_deployment.yaml
kubectl apply -f prow/cluster/ghproxy_service.yaml
- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
- Finally, create a [`Service`](/prow/cluster/ghproxy_service.yaml).
- In the deployments for your prow components, use `--github-endpoint=http://ghproxy` as shown
This depends on the Service name, but it should be fine to have this value here.
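For context, `http://ghproxy` only resolves because the Service is named `ghproxy`. A sketch of what that Service presumably looks like (the port, targetPort, and selector are assumptions, not copied from `ghproxy_service.yaml`):

```yaml
# A Service named "ghproxy" gives prow components a stable DNS name,
# so --github-endpoint=http://ghproxy reaches the cache on port 80.
apiVersion: v1
kind: Service
metadata:
  name: ghproxy
spec:
  selector:
    app: ghproxy
  ports:
  - port: 80          # what clients connect to (http://ghproxy)
    targetPort: 8888  # assumed container port of the ghproxy cache
```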
[here](https://github.com/kubernetes/test-infra/blob/6cd6c2332d493b4d141822cbaa3ffe5806f64825/prow/cluster/hook_deployment.yaml#L43).
To make sure that the client is able to bypass cache if it is temporarily unavailable,
you can additionally use `--github-endpoint=https://api.github.com`. The endpoints are used in the order
Instead of "additionally use" maybe "also add the --github-...
flag" or something
- Create a `StorageClass` using [`gce-ssd-retain_storageclass.yaml`](/prow/cluster/gce-ssd-retain_storageclass.yaml)
  if on GCE or specify your own storage class.
- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
Bad link
Suggested change:
- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](/prow/cluster/ghproxy_deployment.yaml).
- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
- Finally, create a [`Service`](/prow/cluster/ghproxy_service.yaml).
- In the deployments for your prow components, use `--github-endpoint=http://ghproxy` as shown
We should explicitly mention that this should be done for all prow components that use the main Prow bot token (i.e. deployments that use the `oauth-token` secret).
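To illustrate which deployments that covers: any deployment whose pod spec references the bot token secret should also get the proxy endpoint flags. A hedged sketch, where the container name and volume name are hypothetical and only the `oauth-token` secret name comes from the comment above:

```yaml
# Any deployment that mounts the main bot token like this should also list
# the ghproxy endpoint (with api.github.com as a fallback) in its args.
containers:
- name: hook                         # hypothetical component name
  args:
  - --github-endpoint=http://ghproxy
  - --github-endpoint=https://api.github.com
  volumeMounts:
  - name: oauth                      # assumed volume name
    mountPath: /etc/github
    readOnly: true
volumes:
- name: oauth
  secret:
    secretName: oauth-token          # the main Prow bot token secret
```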
/uncc
This sounds cool. Any chance you'll get back to it?
@nikhita: The following test failed.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think the wording can definitely be improved. Suggestions welcome.