Commit b0c68c4

Ensure all docs and examples in user guide are reachable

janetkuo committed Jul 17, 2015
1 parent 55e9356 commit b0c68c4

Showing 30 changed files with 77 additions and 47 deletions.
22 changes: 15 additions & 7 deletions README.md
@@ -33,17 +33,25 @@ While the concepts and architecture in Kubernetes represent years of experience

Kubernetes works with the following concepts:

**Clusters** are the compute resources on top of which your containers are built. Kubernetes can run anywhere! See the [Getting Started Guides](docs/getting-started-guides) for instructions for a variety of services.
[**Cluster**](docs/admin/README.md)
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications. Kubernetes can run anywhere! See the [Getting Started Guides](docs/getting-started-guides) for instructions for a variety of services.

**Pods** are a colocated group of Docker containers with shared volumes. They're the smallest deployable units that can be created, scheduled, and managed with Kubernetes. Pods can be created individually, but it's recommended that you use a replication controller even if creating a single pod. [More about pods](docs/pods.md).
[**Node**](docs/admin/node.md)
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.

**Replication controllers** manage the lifecycle of pods. They ensure that a specified number of pods are running
at any given time, by creating or killing pods as required. [More about replication controllers](docs/replication-controller.md).
[**Pod**](docs/user-guide/pods.md)
: Pods are a colocated group of application containers with shared volumes. They're the smallest deployable units that can be created, scheduled, and managed with Kubernetes. Pods can be created individually, but it's recommended that you use a replication controller even if creating a single pod.

**Services** provide a single, stable name and address for a set of pods.
They act as basic load balancers. [More about services](docs/services.md).
[**Replication controller**](docs/user-guide/replication-controller.md)
: Replication controllers manage the lifecycle of pods. They ensure that a specified number of pods are running
at any given time, by creating or killing pods as required.

**Labels** are used to organize and select groups of objects based on key:value pairs. [More about labels](docs/labels.md).
[**Service**](docs/user-guide/services.md)
: Services provide a single, stable name and address for a set of pods.
They act as basic load balancers.

[**Label**](docs/user-guide/labels.md)
: Labels are used to organize and select groups of objects based on key:value pairs.
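
To make these concepts concrete, here is a minimal sketch of a pod manifest carrying a label (the name, image, and label values are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx              # hypothetical pod name
  labels:
    app: nginx             # an arbitrary key:value label used for selection
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9     # any container image
    ports:
    - containerPort: 80
```

A replication controller or service would then select this pod through a label selector such as `app: nginx`.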

## Documentation

4 changes: 2 additions & 2 deletions docs/admin/admission-controllers.md
@@ -102,7 +102,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota```
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.

See the [resourceQuota design doc](../design/admission_control_resource_quota.md).
See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](../user-guide/resourcequota/).

It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
@@ -113,7 +113,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints.

See the [limitRange design doc](../design/admission_control_limit_range.md).
See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](../user-guide/limitrange/).

### NamespaceExists

9 changes: 7 additions & 2 deletions docs/admin/resource-quota.md
@@ -37,6 +37,8 @@ Resource Quota is enforced in a particular namespace when there is a
`ResourceQuota` object in that namespace. There should be at most one
`ResourceQuota` object in a namespace.

See [ResourceQuota design doc](../design/admission_control_resource_quota.md) for more information.

## Object Count Quota
The number of objects of a given type can be restricted. The following types
are supported:
@@ -46,9 +48,9 @@ are supported:
| pods | Total number of pods |
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of resource quotas |
| resourcequotas | Total number of [resource quotas](admission-controllers.md#resourcequota) |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of persistent volume claims |
| persistentvolumeclaims | Total number of [persistent volume claims](../user-guide/persistent-volumes.md#persistentvolumeclaims) |

For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace.
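
As an illustrative sketch (the namespace, object name, and numbers here are hypothetical), such object-count limits are declared in the `hard` section of a `ResourceQuota`:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts            # hypothetical name
  namespace: myspace             # the namespace the quota applies to
spec:
  hard:
    pods: "10"                   # at most 10 pods may exist in this namespace
    services: "5"
    replicationcontrollers: "20"
    secrets: "10"
    persistentvolumeclaims: "4"
```
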
@@ -122,6 +124,9 @@ Such policies could be implemented using ResourceQuota as a building-block, by
writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace.

## Example
See a [detailed example for how to use resource quota](../user-guide/resourcequota/).


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/resource-quota.md?pixel)]()
3 changes: 3 additions & 0 deletions docs/design/admission_control_limit_range.md
@@ -153,6 +153,9 @@ It is expected we will want to define limits for particular pods or containers b

To make a **LimitRangeItem** more restrictive, we intend to add these additional restrictions at a future point in time.

## Example
See the [example of Limit Range](../user-guide/limitrange) for more information.
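
A hedged sketch of what such a manifest looks like (the values and the per-container default are illustrative; see the linked example for the real files):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits                 # hypothetical name
spec:
  limits:
  - type: Container
    max:                       # upper bound on any single container's limits
      cpu: "2"
      memory: 1Gi
    min:                       # lower bound on any single container's limits
      cpu: 10m
      memory: 4Mi
    default:                   # applied when a container specifies no limit of its own
      cpu: 100m
      memory: 256Mi
```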


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_limit_range.md?pixel)]()
3 changes: 3 additions & 0 deletions docs/design/admission_control_resource_quota.md
@@ -174,6 +174,9 @@ resourcequotas 1 1
services 3 5
```

## More information
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota) for more information.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_resource_quota.md?pixel)]()
2 changes: 1 addition & 1 deletion docs/design/persistent-storage.md
@@ -28,7 +28,7 @@ This document proposes a model for managing persistent, cluster-scoped storage f

Two new API kinds:

A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node.
A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. See [Persistent Volume Guide](../user-guide/persistent-volumes/) for how to use it.

A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.
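
For a rough sense of the two kinds, here are hedged sketches of a PV and a matching claim (capacities, names, and the hostPath backing are illustrative only):

```yaml
# A PersistentVolume an administrator might provision
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                    # any supported volume plugin; hostPath suits only single-node testing
    path: /tmp/data01
---
# A PersistentVolumeClaim a user might submit; the system binds it to a suitable PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```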

4 changes: 2 additions & 2 deletions docs/design/secrets.md
@@ -23,8 +23,8 @@ certainly want the docs that go with that version.</h1>

## Abstract

A proposal for the distribution of secrets (passwords, keys, etc) to the Kubelet and to
containers inside Kubernetes using a custom volume type.
A proposal for the distribution of [secrets](../user-guide/secrets.md) (passwords, keys, etc) to the Kubelet and to
containers inside Kubernetes using a custom [volume](../user-guide/volumes.md#secrets) type. See the [secrets example](../user-guide/secrets/) for more information.
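
A hedged sketch of the two pieces involved, a Secret object and a pod that mounts it through a secret volume (the names, key, and image are illustrative):

```yaml
# A Secret holding one key; values are base64-encoded ("c2VjcmV0" decodes to "secret")
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  password: c2VjcmV0
---
# A pod that exposes the secret to its container as files under /etc/secret-volume
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret
```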

## Motivation

4 changes: 2 additions & 2 deletions docs/design/simple-rolling-update.md
@@ -21,9 +21,9 @@ certainly want the docs that go with that version.</h1>

<!-- END MUNGE: UNVERSIONED_WARNING -->
## Simple rolling update
This is a lightweight design document for simple rolling update in ```kubectl```
This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in ```kubectl```.

Complete execution flow can be found [here](#execution-details).
Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information.

### Lightweight rollout
Assume that we have a current replication controller named ```foo``` and it is running image ```image:v1```
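
Such a controller might look roughly like the following sketch (the replica count and labels are illustrative, not prescribed by this design):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: foo
spec:
  replicas: 3
  selector:
    app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        image: image:v1        # the image a rolling update would replace with image:v2
```
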
2 changes: 1 addition & 1 deletion docs/devel/scheduler_algorithm.md
@@ -31,7 +31,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c
- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node.
- `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node.
- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field.
- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use the `nodeSelector` field).
- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.

The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
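
To illustrate `PodSelectorMatches`, a pod can restrict itself to nodes carrying a particular label; a sketch (the label key and value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd              # only nodes labeled disktype=ssd pass PodSelectorMatches for this pod
  containers:
  - name: nginx
    image: nginx
```
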
2 changes: 1 addition & 1 deletion docs/getting-started-guides/docker-multinode/master.md
@@ -162,7 +162,7 @@ NAME LABELS STATUS
```

If the status of the node is ```NotReady``` or ```Unknown``` please check that all of the containers you created are successfully running.
If all else fails, ask questions on IRC at #google-containers.
If all else fails, ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers).


### Next steps
2 changes: 1 addition & 1 deletion docs/getting-started-guides/docker-multinode/testing.md
@@ -36,7 +36,7 @@ NAME LABELS STATUS
```

If the status of any node is ```Unknown``` or ```NotReady``` your cluster is broken, double check that all containers are running properly, and if all else fails, contact us on IRC at
```#google-containers``` for advice.
[```#google-containers```](http://webchat.freenode.net/?channels=google-containers) for advice.

### Run an application
```sh
2 changes: 1 addition & 1 deletion docs/getting-started-guides/gce.md
@@ -89,7 +89,7 @@ cluster/kube-up.sh
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.

If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at #google-containers on freenode.
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers) on freenode.

The next few steps will show you:

2 changes: 1 addition & 1 deletion docs/getting-started-guides/scratch.md
@@ -770,7 +770,7 @@ pinging or SSH-ing from one node to another.

### Getting Help
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at #google-containers on freenode.
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers) on freenode.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
8 changes: 8 additions & 0 deletions docs/user-guide/README.md
@@ -64,6 +64,12 @@ If you don't have much familiarity with Kubernetes, we recommend you read the fo
[**Overview**](overview.md)
: A brief overview of Kubernetes concepts.

[**Cluster**](../admin/README.md)
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.

[**Node**](../admin/node.md)
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.

[**Pod**](pods.md)
: A pod is a co-located group of containers and volumes.

@@ -107,6 +113,8 @@ If you don't have much familiarity with Kubernetes, we recommend you read the fo
* [Downward API: accessing system configuration from a pod](downward-api.md)
* [Images and registries](images.md)
* [Migrating from docker-cli to kubectl](docker-cli-to-kubectl.md)
* [Assign pods to selected nodes](node-selection/)
* [Perform a rolling update on a running group of pods](update-demo/)


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
2 changes: 1 addition & 1 deletion docs/user-guide/container-environment.md
@@ -104,7 +104,7 @@ Eventually, user specified reasons may be [added to the API](https://github.com/


### Hook Handler Execution
When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod.  If your hook handler blocks, no other management (including health checks) will occur until the hook handler completes.  Blocking hook handlers do *not* affect management of other Pods.  Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).
When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod.  If your hook handler blocks, no other management (including [health checks](production-pods.md#liveness-and-readiness-probes-aka-health-checks)) will occur until the hook handler completes.  Blocking hook handlers do *not* affect management of other Pods.  Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).
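
Hook handlers are registered in the container spec. A hedged sketch of a pod with `postStart` and `preStop` exec handlers (the image and commands are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo         # hypothetical name
spec:
  containers:
  - name: app
    image: myapp:1.0           # hypothetical image
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/save-state.sh"]   # e.g. saving state prior to container stop
```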

For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs.  The details of this parameter passing are handler implementation dependent (see below).

2 changes: 1 addition & 1 deletion docs/user-guide/environment-guide/containers/README.md
@@ -26,7 +26,7 @@ For each container, the build steps are the same. The examples below
are for the `show` container. Replace `show` with `backend` for the
backend container.

GCR
Google Container Registry ([GCR](https://cloud.google.com/tools/container-registry/))
---
docker build -t gcr.io/<project-name>/show .
gcloud docker push gcr.io/<project-name>/show
2 changes: 1 addition & 1 deletion docs/user-guide/limitrange/README.md
@@ -47,7 +47,7 @@ This example demonstrates how limits can be applied to a Kubernetes namespace to
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.

For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.md)
See [LimitRange design doc](../../design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.md)

Step 0: Prerequisites
-----------------------------------------
6 changes: 3 additions & 3 deletions docs/user-guide/liveness/README.md
@@ -21,7 +21,7 @@ certainly want the docs that go with that version.</h1>

<!-- END MUNGE: UNVERSIONED_WARNING -->
## Overview
This example shows two types of pod health checks: HTTP checks and container execution checks.
This example shows two types of pod [health checks](../production-pods.md#liveness-and-readiness-probes-aka-health-checks): HTTP checks and container execution checks.

The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.
```
@@ -33,9 +33,9 @@ The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container executio
initialDelaySeconds: 15
timeoutSeconds: 1
```
Kubelet executes the command cat /tmp/health in the container and reports failure if the command returns a non-zero exit code.
Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.

Note that the container removes the /tmp/health file after 10 seconds,
Note that the container removes the `/tmp/health` file after 10 seconds,
```
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
```
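
The HTTP variety of check is declared with `httpGet` rather than `exec`; a sketch of that form of probe (the path and port are illustrative, not taken from the example files):

```yaml
livenessProbe:
  httpGet:
    path: /healthz             # illustrative path served by the container
    port: 8080                 # illustrative container port
  initialDelaySeconds: 15
  timeoutSeconds: 1
```
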
2 changes: 1 addition & 1 deletion docs/user-guide/logging-demo/README.md
@@ -27,7 +27,7 @@ describes a pod that just emits a log message once every 4 seconds. The pod spec
[synthetic_10lps.yaml](synthetic_10lps.yaml)
describes a pod that just emits 10 log lines per second.

To observe the ingested log lines when using Google Cloud Logging please see the getting
See [logging document](../logging.md) for more details about logging. To observe the ingested log lines when using Google Cloud Logging please see the getting
started instructions
at [Cluster Level Logging to Google Cloud Logging](../../../docs/getting-started-guides/logging.md).
To observe the ingested log lines when using Elasticsearch and Kibana please see the getting
4 changes: 2 additions & 2 deletions docs/user-guide/logging.md
@@ -27,8 +27,8 @@ Kubernetes components, such as kubelet and apiserver, use the [glog](https://god

## Examining the logs of running containers
The logs of a running container may be fetched using the command `kubectl logs`. For example, given
this pod specification which has a container which writes out some text to standard
output every second [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
this pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard
output every second. (You can find different pod specifications [here](logging-demo/).)
```
apiVersion: v1
kind: Pod
2 changes: 1 addition & 1 deletion docs/user-guide/managing-deployments.md
@@ -241,7 +241,7 @@ my-nginx-o0ef1 1/1 Running 0 1h

At some point, you’ll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.

To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time.
To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](../design/simple-rolling-update.md) and the [example of rolling update](update-demo/) for more information.

Let’s say you were running version 1.7.9 of nginx:
```yaml
4 changes: 2 additions & 2 deletions docs/user-guide/namespaces/README.md
@@ -88,13 +88,13 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
Create the development namespace using kubectl.

```shell
$ kubectl create -f docs/user-guide/kubernetes-namespaces/namespace-dev.json
$ kubectl create -f docs/user-guide/namespaces/namespace-dev.json
```
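
The referenced file describes a `Namespace` object roughly like the following (sketched here in YAML for readability; the actual namespace-dev.json is JSON and its exact contents may differ):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
```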

And then let's create the production namespace using kubectl.

```shell
$ kubectl create -f docs/user-guide/kubernetes-namespaces/namespace-prod.json
$ kubectl create -f docs/user-guide/namespaces/namespace-prod.json
```

To be sure things are right, let's list all of the namespaces in our cluster.