
ghproxy: add instructions on how to use #10538

Closed
wants to merge 1 commit

Conversation

@nikhita (Member) commented Dec 21, 2018

I think the wording can definitely be improved. Suggestions welcome.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. area/ghproxy Issues or PRs related to code in /ghproxy labels Dec 21, 2018
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nikhita
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: cjwagner

If they are not already assigned, you can assign the PR to them by writing /assign @cjwagner in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

To use ghProxy for your prow instance,

- Create a `StorageClass` using [`gce-ssd-retain_storageclass.yaml`](/prow/cluster/gce-ssd-retain_storageclass.yaml)
Member:

What is the benefit of the storageclass here?

@ibrasho (Contributor) commented Dec 21, 2018:

It's used in the PVC mentioned below.

Member:

Okay, sorry, let me rephrase my question: what is the advantage of using a StorageClass with reclaimPolicy: Retain here?

Contributor:

If you redeploy ghproxy and the new deployment reclaims the data, we want the data not to have been deleted in the meantime.

@ibrasho (Contributor) commented Dec 21, 2018:

Doesn't that require manual intervention to produce that effect? If the PVC was deleted, at least the PersistentVolume.Spec.ClaimRef.UID field needs to be cleared to make it available again to be bound to the same claim.

AFAIK, reclaimPolicy: Retain only means that the PV is not deleted when the PVC is deleted. If the PVC is recreated, then a new PV is provisioned (and a new persistent disk on GCP). If the PVC is not deleted and only the deployment is recreated, then the PVC should be bound to the same PV regardless of the reclaimPolicy set.
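
To make the binding concrete, here is a minimal PVC sketch (the name, class, and size are illustrative, not taken from the repo): the claim names the StorageClass, and the Retain policy only governs what happens to the bound PV after the claim is deleted.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghproxy                  # illustrative name
spec:
  storageClassName: ssd-retain   # illustrative; must match the StorageClass
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi              # illustrative size
```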

Contributor:

Interesting. My understanding of PV binding is definitely shaky. @cjwagner thoughts?

Member:

Yeah that makes a lot of sense. I guess we could remove that and just keep the storage class for the SSD type?
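
For reference, a minimal sketch of what gce-ssd-retain_storageclass.yaml presumably defines, assuming the GCE PD provisioner (values are illustrative, not necessarily the file's exact contents):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-retain        # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd            # SSD-backed persistent disk
reclaimPolicy: Retain     # keep the PV (and its disk) when the PVC is deleted
```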

[here](https://github.com/kubernetes/test-infra/blob/6cd6c2332d493b4d141822cbaa3ffe5806f64825/prow/cluster/hook_deployment.yaml#L43).
To make sure that the client is able to bypass cache if it is temporarily unavailable,
you can additionally use `--github-endpoint=https://api.github.com`. The endpoints are used in the order
Member:

What about [...] in the order they are mentioned, the first endpoint that does not return a connection error will be used?
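
Concretely, the resulting container args might look like this (a sketch; both flag values appear in the PR text, and the ordering semantics are as suggested above):

```yaml
# Endpoints are tried in the order listed: ghproxy first,
# falling back to api.github.com if the cache is unreachable.
args:
- --github-endpoint=http://ghproxy
- --github-endpoint=https://api.github.com
```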


## How to use

To use ghProxy for your prow instance,
Contributor:

Instead of words here, why not

kubectl apply -f prow/cluster/gce-ssd-retain_storageclass.yaml
kubectl apply -f prow/cluster/ghproxy_deployment.yaml
kubectl apply -f prow/cluster/ghproxy_service.yaml

- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
- Finally, create a [`Service`](/prow/cluster/ghproxy_service.yaml).
- In the deployments for your prow components, use `--github-endpoint=http://ghproxy` as shown
Contributor:

This depends on the service name, but it should be fine to have this value here.
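
That is, http://ghproxy resolves through the Service name. A minimal sketch of such a Service (name and ports are illustrative, not necessarily the repo's exact ghproxy_service.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ghproxy       # this name is what makes http://ghproxy resolvable
spec:
  selector:
    app: ghproxy
  ports:
  - port: 80          # serve on 80 so the URL needs no port suffix
    targetPort: 8888  # illustrative container port
```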

[here](https://github.com/kubernetes/test-infra/blob/6cd6c2332d493b4d141822cbaa3ffe5806f64825/prow/cluster/hook_deployment.yaml#L43).
To make sure that the client is able to bypass cache if it is temporarily unavailable,
you can additionally use `--github-endpoint=https://api.github.com`. The endpoints are used in the order
Contributor:

Instead of "additionally use" maybe "also add the --github-... flag" or something

- Create a `StorageClass` using [`gce-ssd-retain_storageclass.yaml`](/prow/cluster/gce-ssd-retain_storageclass.yaml)
if on GCE or specify your own storage class.
- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
Member:

Bad link: the path is missing its leading slash.

Suggested change:
- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
+ Create a `PersistentVolumeClaim` and `Deployment` using the config [here](/prow/cluster/ghproxy_deployment.yaml).

- Create a `PersistentVolumeClaim` and `Deployment` using the config [here](prow/cluster/ghproxy_deployment.yaml).
- Finally, create a [`Service`](/prow/cluster/ghproxy_service.yaml).
- In the deployments for your prow components, use `--github-endpoint=http://ghproxy` as shown
Member:

We should explicitly mention that this should be done for all prow components that use the main Prow bot token. (i.e. deployments that use the oauth-token secret).
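
For example, in any deployment that mounts the oauth-token secret, the endpoint flags would sit alongside the token flag (a sketch; the flag and mount layout is modeled on typical prow component deployments, not copied from this PR):

```yaml
containers:
- name: hook                                 # illustrative component
  args:
  - --github-token-path=/etc/github/oauth    # assumes the token is mounted here
  - --github-endpoint=http://ghproxy
  - --github-endpoint=https://api.github.com
  volumeMounts:
  - name: oauth
    mountPath: /etc/github
    readOnly: true
volumes:
- name: oauth
  secret:
    secretName: oauth-token
```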

@ixdy (Member) commented Jan 30, 2019:

/uncc

@k8s-ci-robot k8s-ci-robot removed the request for review from ixdy January 30, 2019 18:47
@fejta (Contributor) commented Mar 26, 2019:

This sounds cool. Any chance you'll get back to it?

@k8s-ci-robot (Contributor) commented Apr 30, 2019:

@nikhita: The following test failed, say /retest to rerun them all:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| pull-test-infra-lint | 441375b | link | /test pull-test-infra-lint |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 1, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 31, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor):

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
