Make docs links be relative so we can version them
thockin committed Jul 7, 2015 · 1 parent 530bff3 · commit 0a23c06
Showing 14 changed files with 34 additions and 34 deletions.
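The change is mechanical: every absolute `http://docs.k8s.io/...` link becomes a path relative to the linking file, as the hunks below show. A rough reconstruction of the rewrite, assuming one sed pass per directory depth (the commit does not record the actual commands used):

```bash
# Hypothetical reconstruction of this commit's rewrite -- the real command
# history is not recorded here. The replacement prefix depends on the
# linking file's depth, so each level gets its own substitution. This
# yields equivalent (not always byte-identical) relative paths; same-dir
# links such as ./service_accounts.md were evidently normalized further.
sed -i 's|http://docs.k8s.io/|./|g'           docs/*.md       # docs/ itself
sed -i 's|http://docs.k8s.io/|../|g'          docs/*/*.md     # one level down
sed -i 's|http://docs.k8s.io/|../../|g'       docs/*/*/*.md   # two levels down
sed -i 's|http://docs.k8s.io/|../../docs/|g'  examples/*/*.md # outside docs/
```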
2 changes: 1 addition & 1 deletion docs/availability.md
@@ -120,7 +120,7 @@ then you need `R + U` clusters. If it is not (e.g you want to ensure low latenc
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.

Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
-you may need even more clusters. Our [roadmap](http://docs.k8s.io/roadmap.md)
+you may need even more clusters. Our [roadmap](./roadmap.md)
calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in the middle of 2015.

## Working with multiple clusters
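The `R + U` versus `R * U` distinction quoted in the hunk above is easy to sanity-check with concrete numbers (example values chosen arbitrarily):

```bash
# R regions, U clusters' worth of failover capacity (illustrative values).
R=3; U=2
echo "load may fail over across regions: $((R + U)) clusters"  # 5
echo "failover must stay within region:  $((R * U)) clusters"  # 6, U per region
```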
6 changes: 3 additions & 3 deletions docs/design/secrets.md
@@ -71,7 +71,7 @@ service would also consume the secrets associated with the MySQL service.

### Use-Case: Secrets associated with service accounts

-[Service Accounts](http://docs.k8s.io/design/service_accounts.md) are proposed as a
+[Service Accounts](./service_accounts.md) are proposed as a
mechanism to decouple capabilities and security contexts from individual human users. A
`ServiceAccount` contains references to some number of secrets. A `Pod` can specify that it is
associated with a `ServiceAccount`. Secrets should have a `Type` field to allow the Kubelet and
@@ -241,7 +241,7 @@ memory overcommit on the node.

#### Secret data on the node: isolation

-Every pod will have a [security context](http://docs.k8s.io/design/security_context.md).
+Every pod will have a [security context](./security_context.md).
Secret data on the node should be isolated according to the security context of the container. The
Kubelet volume plugin API will be changed so that a volume plugin receives the security context of
a volume along with the volume spec. This will allow volume plugins to implement setting the
@@ -253,7 +253,7 @@ Several proposals / upstream patches are notable as background for this proposal

1. [Docker vault proposal](https://github.com/docker/docker/issues/10310)
2. [Specification for image/container standardization based on volumes](https://github.com/docker/docker/issues/9277)
-3. [Kubernetes service account proposal](http://docs.k8s.io/design/service_accounts.md)
+3. [Kubernetes service account proposal](./service_accounts.md)
4. [Secrets proposal for docker (1)](https://github.com/docker/docker/pull/6075)
5. [Secrets proposal for docker (2)](https://github.com/docker/docker/pull/6697)

20 changes: 10 additions & 10 deletions docs/design/security.md
@@ -63,14 +63,14 @@ Automated process users fall into the following categories:
A pod runs in a *security context* under a *service account* that is defined by an administrator or project administrator, and the *secrets* a pod has access to are limited by that *service account*.


-1. The API should authenticate and authorize user actions [authn and authz](http://docs.k8s.io/design/access.md)
+1. The API should authenticate and authorize user actions [authn and authz](./access.md)
2. All infrastructure components (kubelets, kube-proxies, controllers, scheduler) should have an infrastructure user that they can authenticate with and be authorized to perform only the functions they require against the API.
3. Most infrastructure components should use the API as a way of exchanging data and changing the system, and only the API should have access to the underlying data store (etcd)
-4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](http://docs.k8s.io/design/service_accounts.md)
+4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](./service_accounts.md)
1. If the user who started a long-lived process is removed from access to the cluster, the process should be able to continue without interruption
2. If the users who started processes are removed from the cluster, administrators may wish to terminate their processes in bulk
3. When containers run with a service account, the user that created / triggered the service account behavior must be associated with the container's action
-5. When container processes run on the cluster, they should run in a [security context](http://docs.k8s.io/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
+5. When container processes run on the cluster, they should run in a [security context](./security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
1. Administrators should be able to configure the cluster to automatically confine all container processes as a non-root, randomly assigned UID
2. Administrators should be able to ensure that container processes within the same namespace are all assigned the same unix user UID
3. Administrators should be able to limit which developers and project administrators have access to higher privilege actions
@@ -79,7 +79,7 @@ A pod runs in a *security context* under a *service account* that is defined by
6. Developers may need to ensure their images work within higher security requirements specified by administrators
7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met.
8. When application developers want to share filesystem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes
-6. Developers should be able to define [secrets](http://docs.k8s.io/design/secrets.md) that are automatically added to the containers when pods are run
+6. Developers should be able to define [secrets](./secrets.md) that are automatically added to the containers when pods are run
1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples:
1. An SSH private key for git cloning remote data
2. A client certificate for accessing a remote system
@@ -93,12 +93,12 @@ A pod runs in a *security context* under a *service account* that is defined by

### Related design discussion

-* Authorization and authentication http://docs.k8s.io/design/access.md
-* Secret distribution via files https://github.com/GoogleCloudPlatform/kubernetes/pull/2030
-* Docker secrets https://github.com/docker/docker/pull/6697
-* Docker vault https://github.com/docker/docker/issues/10310
-* Service Accounts: http://docs.k8s.io/design/service_accounts.md
-* Secret volumes https://github.com/GoogleCloudPlatform/kubernetes/4126
+* [Authorization and authentication](./access.md)
+* [Secret distribution via files](https://github.com/GoogleCloudPlatform/kubernetes/pull/2030)
+* [Docker secrets](https://github.com/docker/docker/pull/6697)
+* [Docker vault](https://github.com/docker/docker/issues/10310)
+* [Service Accounts](./service_accounts.md)
+* [Secret volumes](https://github.com/GoogleCloudPlatform/kubernetes/pull/4126)

## Specific Design Points

2 changes: 1 addition & 1 deletion docs/getting-started-guides/centos/centos_manual_config.md
@@ -11,7 +11,7 @@ You need two machines with CentOS installed on them.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](../../networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the minion and run kubelet, proxy, cadvisor and docker.
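Given that split, bringing the cluster up largely reduces to enabling a different systemd unit list on each host. A sketch only, using the unit names from the paragraph above (the guide's exact steps are collapsed in this diff; cadvisor omitted):

```bash
# Sketch, not the guide's verbatim commands -- unit names come from the
# paragraph above.
# On centos-master:
for unit in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl enable "$unit" && systemctl start "$unit"
done
# On centos-minion:
for unit in kube-proxy kubelet docker; do
  systemctl enable "$unit" && systemctl start "$unit"
done
```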

2 changes: 1 addition & 1 deletion docs/getting-started-guides/cloudstack.md
@@ -15,7 +15,7 @@ CloudStack is a software to build public and private clouds based on hardware vi
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.

This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
-This is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](http://docs.k8s.io/getting-started-guides/coreos/coreos_multinode_cluster.md).
+This is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](./coreos/coreos_multinode_cluster.md).


This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an ssh key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
2 changes: 1 addition & 1 deletion docs/getting-started-guides/coreos/bare_metal_offline.md
@@ -213,7 +213,7 @@ Now for the good stuff!
## Cloud Configs
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.

-These are based on the work found here: [master.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/node.yaml)
+These are based on the work found here: [master.yml](./cloud-configs/master.yaml), [node.yml](./cloud-configs/node.yaml)

To make the setup work, you need to replace a few placeholders:

2 changes: 1 addition & 1 deletion docs/getting-started-guides/docker.md
@@ -33,7 +33,7 @@ docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```

-This actually runs the kubelet, which in turn runs a [pod](http://docs.k8s.io/pods.md) that contains the other master components.
+This actually runs the kubelet, which in turn runs a [pod](../pods.md) that contains the other master components.

### Step Three: Run the service proxy
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
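The proxy command itself is collapsed in this diff; a representative invocation consistent with the kubelet line above (same hyperkube image and flag style, but treat it as an assumption rather than the guide's verbatim text):

```bash
# Representative only -- the guide's exact command is hidden by the fold.
# --privileged is needed for the iptables manipulation noted above.
docker run -d --net=host --privileged \
  gcr.io/google_containers/hyperkube:v0.18.2 \
  /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```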
2 changes: 1 addition & 1 deletion docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -13,7 +13,7 @@ Getting started on [Fedora](http://fedoraproject.org)

This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.

4 changes: 2 additions & 2 deletions docs/getting-started-guides/locally.md
@@ -85,8 +85,8 @@ cluster/kubectl.sh get replicationcontrollers

### Running a user defined pod

-Note the difference between a [container](http://docs.k8s.io/containers.md)
-and a [pod](http://docs.k8s.io/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
+Note the difference between a [container](../containers.md)
+and a [pod](../pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).
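For example (the container-ID lookup and the presence of `curl` inside the image are assumptions):

```bash
# Grab the nginx container's ID, then curl the start page from inside it.
# Assumes the image ships curl; if not, `wget -qO-` is a common fallback.
CID=$(docker ps | grep nginx | awk '{print $1}' | head -n1)
docker exec "$CID" curl -s http://localhost/ | head -n 4
```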

You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
6 changes: 3 additions & 3 deletions docs/proposals/autoscaling.md
@@ -21,7 +21,7 @@ done automatically based on statistical analysis and thresholds.

* This proposal is for horizontal scaling only. Vertical scaling will be handled in [issue 2072](https://github.com/GoogleCloudPlatform/kubernetes/issues/2072)
* `ReplicationControllers` will not know about the auto-scaler; they are the target of the auto-scaler. The `ReplicationController` responsibilities are
-constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](http://docs.k8s.io/replication-controller.md#responsibilities-of-the-replication-controller)
+constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](../replication-controller.md#responsibilities-of-the-replication-controller)
* Auto-scalers will be loosely coupled with data gathering components in order to allow a wide variety of input sources
* Auto-scalable resources will support a scale verb ([1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629))
such that the auto-scaler does not directly manipulate the underlying resource.
@@ -42,7 +42,7 @@ applications will expose one or more network endpoints for clients to connect to
balanced or situated behind a proxy - the data from those proxies and load balancers can be used to estimate client to
server traffic for applications. This is the primary, but not sole, source of data for making decisions.

-Within Kubernetes a [kube proxy](http://docs.k8s.io/services.md#ips-and-vips)
+Within Kubernetes a [kube proxy](../services.md#ips-and-vips)
running on each node directs service requests to the underlying implementation.

While the proxy provides internal inter-pod connections, there will be L3 and L7 proxies and load balancers that manage
@@ -225,7 +225,7 @@ or down as appropriate. In the future this may be more configurable.

### Interactions with a deployment

-In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](http://docs.k8s.io/replication-controller.md#rolling-updates)
+In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](../replication-controller.md#rolling-updates)
there will be multiple replication controllers, with one scaling up and another scaling down. This means that an
auto-scaler must be aware of the entire set of capacity that backs a service so it does not fight with the deployer. `AutoScalerSpec.MonitorSelector`
is what provides this ability. By using a selector that spans the entire service the auto-scaler can monitor capacity
2 changes: 1 addition & 1 deletion docs/sharing-clusters.md
@@ -102,7 +102,7 @@ scp host2:/path/to/home2/.kube/config path/to/other/.kube/config

export KUBECONFIG=path/to/other/.kube/config
```
-Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](http://docs.k8s.io/kubeconfig-file.md).
+Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](./kubeconfig-file.md).
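Those merging rules also apply when `KUBECONFIG` names several files at once; a minimal sketch, assuming the colon-delimited list support described in kubeconfig-file.md:

```bash
# Merge two kubeconfig files without copying either; on conflicting keys,
# the file listed first wins (per the loading/merging rules).
export KUBECONFIG=$HOME/.kube/config:path/to/other/.kube/config
kubectl config view   # prints the merged view
```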



10 changes: 5 additions & 5 deletions examples/mysql-wordpress-pd/README.md
@@ -4,7 +4,7 @@ This example describes how to run a persistent installation of [Wordpress](https

We'll use the [mysql](https://registry.hub.docker.com/_/mysql/) and [wordpress](https://registry.hub.docker.com/_/wordpress/) official [Docker](https://www.docker.com/) images for this installation. (The wordpress image includes an Apache server).

-We'll create two Kubernetes [pods](http://docs.k8s.io/pods.md) to run mysql and wordpress, both with associated persistent disks, then set up a Kubernetes [service](http://docs.k8s.io/services.md) to front each pod.
+We'll create two Kubernetes [pods](../../docs/pods.md) to run mysql and wordpress, both with associated persistent disks, then set up a Kubernetes [service](../../docs/services.md) to front each pod.

This example demonstrates several useful things, including: how to set up and use persistent disks with Kubernetes pods; how to define Kubernetes services to leverage docker-links-compatible service environment variables; and use of an external load balancer to expose the wordpress service externally and make it transparent to the user if the wordpress pod moves to a different cluster node.

@@ -30,11 +30,11 @@ Next, start up a Kubernetes cluster:
wget -q -O - https://get.k8s.io | bash
```

-Please see the [GCE getting started guide](http://docs.k8s.io/getting-started-guides/gce.md) for full details and other options for starting a cluster.
+Please see the [GCE getting started guide](../../docs/getting-started-guides/gce.md) for full details and other options for starting a cluster.

## Create two persistent disks

-For this WordPress installation, we're going to configure our Kubernetes [pods](http://docs.k8s.io/pods.md) to use [persistent disks](https://cloud.google.com/compute/docs/disks). This means that we can preserve installation state across pod shutdown and re-startup.
+For this WordPress installation, we're going to configure our Kubernetes [pods](../../docs/pods.md) to use [persistent disks](https://cloud.google.com/compute/docs/disks). This means that we can preserve installation state across pod shutdown and re-startup.

You will need to create the disks in the same [GCE zone](https://cloud.google.com/compute/docs/zones) as the Kubernetes cluster. The default setup script will create the cluster in the `us-central1-b` zone, as seen in the [config-default.sh](/cluster/gce/config-default.sh) file. Replace `$ZONE` below with the appropriate zone.
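A typical pair of disk-creation commands, with illustrative names and size (the example's own commands sit below the fold):

```bash
# Illustrative only -- disk names and size are assumptions, and the
# example's actual commands are collapsed in this diff.
gcloud compute disks create mysql-disk --size=20GB --zone=$ZONE
gcloud compute disks create wordpress-disk --size=20GB --zone=$ZONE
```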

@@ -123,8 +123,8 @@ If you want to do deeper troubleshooting, e.g. if it seems a container is not st

### Start the Mysql service

-We'll define and start a [service](http://docs.k8s.io/services.md) that lets other pods access the mysql database on a known port and host.
-We will specifically name the service `mysql`. This will let us leverage the support for [Docker-links-compatible](http://docs.k8s.io/services.md#how-do-they-work) service environment variables when we set up the wordpress pod. The wordpress Docker image expects to be linked to a mysql container named `mysql`, as you can see in the "How to use this image" section on the wordpress docker hub [page](https://registry.hub.docker.com/_/wordpress/).
+We'll define and start a [service](../../docs/services.md) that lets other pods access the mysql database on a known port and host.
+We will specifically name the service `mysql`. This will let us leverage the support for [Docker-links-compatible](../../docs/services.md#how-do-they-work) service environment variables when we set up the wordpress pod. The wordpress Docker image expects to be linked to a mysql container named `mysql`, as you can see in the "How to use this image" section on the wordpress docker hub [page](https://registry.hub.docker.com/_/wordpress/).

So if we label our Kubernetes mysql service `mysql`, the wordpress pod will be able to use the Docker-links-compatible environment variables, defined by Kubernetes, to connect to the database.
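Concretely, a service named `mysql` produces Docker-links-compatible variables along these lines in pods started after it (addresses are illustrative):

```bash
# Sketch of the variable naming scheme for a service named `mysql`;
# the actual IP and port come from your cluster.
MYSQL_SERVICE_HOST=10.0.0.11
MYSQL_SERVICE_PORT=3306
MYSQL_PORT=tcp://10.0.0.11:3306
MYSQL_PORT_3306_TCP=tcp://10.0.0.11:3306
MYSQL_PORT_3306_TCP_ADDR=10.0.0.11
MYSQL_PORT_3306_TCP_PROTO=tcp
```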

(2 more changed files not shown)
