Fix the instruction for setting up cluster to run test (kubeflow#152)
* Fix some text format in README
* checkin the vendor for test-infra app.
* set service_type for jupyterHubLoadBalancer in nfs-jupyter.jsonnet
lluunn authored and jlewi committed Jan 27, 2018
1 parent cb87276 commit 15ffefc
Showing 9 changed files with 1,225 additions and 23 deletions.
3 changes: 3 additions & 0 deletions .gitignore
examples/tf_sample/tf_sample.egg-info/
examples/.ipynb_checkpoints/

**/.ipynb_checkpoints


!testing/test-infra/vendor
44 changes: 26 additions & 18 deletions testing/README.md
gcloud --project=${PROJECT} container clusters create \

### Create a GCP service account

* The tests need a GCP service account to upload data to GCS for Gubernator

```
SERVICE_ACCOUNT=kubeflow-testing
gcloud iam service-accounts --project=mlkube-testing create ${SERVICE_ACCOUNT} --display-name "Kubeflow testing account"
gcloud projects add-iam-policy-binding ${PROJECT} \
    --member serviceAccount:${SERVICE_ACCOUNT}@${PROJECT}.iam.gserviceaccount.com --role roles/container.developer
```
* The service account needs to be able to create K8s resources as part of the test.


Create a secret key for the service account

```
gcloud iam service-accounts keys create ~/tmp/key.json \
    --iam-account ${SERVICE_ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
kubectl create secret generic kubeflow-testing-credentials \
    --namespace=kubeflow-test-infra --from-file=`echo ~/tmp/key.json`
rm ~/tmp/key.json
```

Make the service account a cluster admin

```
kubectl create clusterrolebinding ${SERVICE_ACCOUNT}-admin --clusterrole=cluster-admin \
    --user=${SERVICE_ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
```

* The service account is used to deploy Kubeflow, which entails creating various roles, so it needs sufficient RBAC permission to do so.

### Create a GitHub Token

the test runs.

The ksonnet app `test-infra` contains ksonnet configs to deploy the test infrastructure.

First, install the kubeflow package

```
ks pkg install kubeflow/core
```

Then change the server ip in `test-infra/environments/prow/spec.json` to
point to your cluster.
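
As a sketch, the environment's `spec.json` typically holds just the API server address and namespace (field names assume the ksonnet 0.8 environment layout; the endpoint is a placeholder you must replace):

```json
{
  "server": "https://<your-cluster-master-endpoint>",
  "namespace": "kubeflow-test-infra"
}
```

You can look up the master endpoint with `gcloud container clusters describe <cluster-name> --format='value(endpoint)'`.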

You can deploy Argo as follows (you don't need to use Argo's CLI):

```
ks apply prow -c argo
```

Deploy NFS & Jupyter

```
ks apply prow -c nfs-jupyter
```

* This creates the NFS share
* We use JupyterHub as a convenient way to access the NFS share for manual inspection of the file contents.

#### Troubleshooting

10 changes: 5 additions & 5 deletions testing/test-infra/components/nfs-jupyter.jsonnet
local tfJobImage = params.tfJobImage;

// Create a list of the resources needed for a particular disk
local diskToList = function(diskName) [
nfs.parts(namespace, name).diskResources(diskName).storageClass,
nfs.parts(namespace, name).diskResources(diskName).volumeClaim,
nfs.parts(namespace, name).diskResources(diskName).service,
nfs.parts(namespace, name).diskResources(diskName).provisioner];

local allDisks = std.flattenArrays(std.map(diskToList, diskNames));
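
To make the shape of this transform concrete, here is a Python sketch of what `std.map` followed by `std.flattenArrays` computes: each disk name expands to four resources, and the per-disk lists are concatenated into one flat list. Disk names and resource labels below are illustrative, not taken from the actual config.

```python
# Python sketch of the jsonnet diskToList + std.flattenArrays(std.map(...)) pipeline.
def disk_to_list(disk_name):
    # Each disk contributes four K8s resources, mirroring the jsonnet list above.
    return [
        (disk_name, "storageClass"),
        (disk_name, "volumeClaim"),
        (disk_name, "service"),
        (disk_name, "provisioner"),
    ]

disk_names = ["disk-a", "disk-b"]  # illustrative names

# Equivalent of std.flattenArrays(std.map(diskToList, diskNames)):
all_disks = [resource for name in disk_names for resource in disk_to_list(name)]
print(len(all_disks))  # 2 disks x 4 resources = 8
```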

std.prune(k.core.v1.list.new([
// jupyterHub components
jupyterConfigMap,
jupyter.parts(namespace).jupyterHubService,
jupyter.parts(namespace).jupyterHubLoadBalancer("ClusterIP"),
jupyter.parts(namespace).jupyterHub(jupyterHubImage),
jupyter.parts(namespace).jupyterHubRole,
jupyter.parts(namespace).jupyterHubServiceAccount,
59 changes: 59 additions & 0 deletions testing/test-infra/vendor/kubeflow/core/README.md
# core

> Core components of Kubeflow.

* [Quickstart](#quickstart)
* [Using Prototypes](#using-prototypes)
* [io.ksonnet.pkg.kubeflow-core](#io.ksonnet.pkg.kubeflow-core)

## Quickstart

*The following commands use the `io.ksonnet.pkg.kubeflow-core` prototype to generate Kubernetes YAML for core, and then deploy it to your Kubernetes cluster.*

First, create a cluster and install the ksonnet CLI (see root-level [README.md](rootReadme)).

If you haven't yet created a [ksonnet application](linkToSomewhere), do so using `ks init <app-name>`.

Finally, in the ksonnet application directory, run the following:

```shell
# Expand prototype as a Jsonnet file, place in a file in the
# `components/` directory. (YAML and JSON are also available.)
$ ks prototype use io.ksonnet.pkg.kubeflow-core \
--name core \
--namespace default \
--disks

# Apply to server.
$ ks apply -f core.jsonnet
```

## Using the library

The library files for core define a set of relevant *parts* (_e.g._, deployments, services, secrets, and so on) that can be combined to configure core for a wide variety of scenarios. For example, a database like Redis may need a secret to hold the user password, or it may have no password if it's acting as a cache.

This library provides a set of pre-fabricated "flavors" (or "distributions") of core, each of which is configured for a different use case. These are captured as ksonnet *prototypes*, which allow users to interactively customize these distributions for their specific needs.

These prototypes, as well as how to use them, are enumerated below.

### io.ksonnet.pkg.kubeflow-core

Kubeflow core components
#### Example

```shell
# Expand prototype as a Jsonnet file, place in a file in the
# `components/` directory. (YAML and JSON are also available.)
$ ks prototype use io.ksonnet.pkg.kubeflow-core core \
--name YOUR_NAME_HERE
```

#### Parameters

The available options to pass to the prototype are:

* `--name=<name>`: Name to give to each of the components [string]


[rootReadme]: https://github.com/ksonnet/mixins