Fixes kubernetes#8701. Added inline links to "services", "pods", and "replication controllers" (using relative linking: ../../folder/filename.md).
RichieEscarez committed Jun 4, 2015
1 parent d97199c commit 7936334
Showing 4 changed files with 19 additions and 20 deletions.
19 changes: 9 additions & 10 deletions examples/redis/README.md
@@ -3,13 +3,13 @@
The following document describes the deployment of a reliable, multi-node Redis on Kubernetes. It deploys a master with replicated slaves, as well as replicated Redis sentinels, which are used for health checking and failover.

### Prerequisites
- This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform.
+ This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](../../docs/getting-started-guides) for installation instructions for your platform.

### A note for the impatient
This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.

### Turning up an initial master/sentinel pod.
- What is a [_Pod_](http://docs.k8s.io/pods.md)? A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
+ A [_Pod_](../../docs/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

We will use the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```.
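
You can see this for yourself with a couple of commands from inside either of the pod's containers. This is an illustrative sketch only: it assumes a shell in the container and that the image ships ```redis-cli```, neither of which this tutorial guarantees.

```sh
# Every container in the pod reports the same pod IP:
hostname -i

# So a sentinel co-located with the master can reach it directly.
# Assumes redis-cli is present in the image (illustrative check only):
redis-cli -h "$(hostname -i)" -p 6379 ping
```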

@@ -22,11 +22,11 @@ kubectl create -f examples/redis/redis-master.yaml
```

### Turning up a sentinel service
- In Kubernetes a _Service_ describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.
+ In Kubernetes a [_Service_](../../docs/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.

In Redis, we will use a Kubernetes Service to provide discoverable endpoints for the Redis sentinels in the cluster. From the sentinels, Redis clients can find the master, and then the slaves and other relevant info for the cluster. This enables new members to join the cluster when failures occur.

- Here is the definition of the sentinel service:[redis-sentinel-service.yaml](redis-sentinel-service.yaml)
+ Here is the definition of the sentinel service: [redis-sentinel-service.yaml](redis-sentinel-service.yaml)
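
If you'd rather see the shape of such a definition inline, here is a minimal sketch fed to ```kubectl``` from stdin. The name, selector label, and apiVersion below are illustrative assumptions rather than the actual contents of the file above; 26379 is simply Redis Sentinel's conventional port.

```sh
# Illustrative sketch only -- the tutorial creates the real service from
# examples/redis/redis-sentinel-service.yaml below.
kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: redis-sentinel
spec:
  ports:
    - port: 26379        # conventional Redis Sentinel port
      targetPort: 26379
  selector:
    redis-sentinel: "true"
EOF
```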

Create this service:
```sh
@@ -36,10 +36,9 @@ kubectl create -f examples/redis/redis-sentinel-service.yaml
### Turning up replicated redis servers
So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g., a machine dying), our Redis service goes away with it.

- In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
+ In Kubernetes a [_Replication Controller_](../../docs/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

- Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server.
- [redis-controller.yaml](redis-controller.yaml)
+ Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server. Here is the replication controller config: [redis-controller.yaml](redis-controller.yaml)

The bulk of this controller config is actually identical to the redis-master pod definition above. It forms the template or "cookie cutter" that defines what it means to be a member of this set.
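
As a rough sketch of that relationship (every name, label, and image here is an illustrative assumption, not the actual contents of [redis-controller.yaml](redis-controller.yaml), and the apiVersion must match your cluster), a replication controller wraps a pod template in a replica count and a selector:

```sh
# Illustrative sketch only -- the tutorial creates the real controller from
# examples/redis/redis-controller.yaml below.
kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    name: redis        # pods matching this label get "adopted"
  template:            # the "cookie cutter": an embedded pod definition
    metadata:
      labels:
        name: redis
    spec:
      containers:
        - name: redis
          image: redis # illustrative; the example uses its own image
          ports:
            - containerPort: 6379
EOF
```

Note that ```spec.template``` is just a pod definition: the controller stamps out copies of it, and adopts any existing pods that match ```spec.selector```.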

@@ -49,7 +48,7 @@ Create this controller:
kubectl create -f examples/redis/redis-controller.yaml
```

- We'll do the same thing for the sentinel. Here is the controller config:[redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)
+ We'll do the same thing for the sentinel. Here is the controller config: [redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)

We create it as follows:
```sh
@@ -89,9 +88,9 @@ Now let's take a close look at what happens after this pod is deleted. There ar
At this point we now have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.
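
Growing or shrinking the read-slave set is a one-line operation. A sketch, assuming your redis server controller is named ```redis``` (on older kubectl releases from this era the subcommand was ```resize``` rather than ```scale```):

```sh
# Scale the redis server replication controller to 3 replicas:
kubectl scale rc redis --replicas=3
```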

### tl; dr
- For those of you who are impatient, here is the summary of commands we ran in this tutorial
+ For those of you who are impatient, here is the summary of commands we ran in this tutorial:

- ```sh
+ ```
# Create a bootstrap master
kubectl create -f examples/redis/redis-master.yaml
6 changes: 3 additions & 3 deletions examples/spark/README.md
@@ -26,10 +26,10 @@ instructions for your platform.

## Step One: Start your Master service

- The Master service is the master (or head) service for a Spark
+ The Master [service](../../docs/services.md) is the master (or head) service for a Spark
cluster.

- Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a pod running
+ Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a [pod](../../docs/pods.md) running
the Master service.

```shell
@@ -85,7 +85,7 @@ program.
The Spark workers need the Master service to be running.

Use the [`examples/spark/spark-worker-controller.json`](spark-worker-controller.json) file to create a
- ReplicationController that manages the worker pods.
+ [replication controller](../../docs/replication-controller.md) that manages the worker pods.

```shell
$ kubectl create -f examples/spark/spark-worker-controller.json
8 changes: 4 additions & 4 deletions examples/storm/README.md
@@ -27,10 +27,10 @@ instructions for your platform.

## Step One: Start your ZooKeeper service

- ZooKeeper is a distributed coordination service that Storm uses as a
+ ZooKeeper is a distributed coordination [service](../../docs/services.md) that Storm uses as a
bootstrap and for state storage.

- Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a pod running
+ Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a [pod](../../docs/pods.md) running
the ZooKeeper service.

```shell
@@ -114,7 +114,7 @@ The Storm workers need both the ZooKeeper and Nimbus services to be
running.

Use the [`examples/storm/storm-worker-controller.json`](storm-worker-controller.json) file to create a
- ReplicationController that manages the worker pods.
+ [replication controller](../../docs/replication-controller.md) that manages the worker pods.

```shell
$ kubectl create -f examples/storm/storm-worker-controller.json
@@ -147,7 +147,7 @@ Node count: 13

There should be one client from the Nimbus service and one per
worker. Ideally, you should get ```stat``` output from ZooKeeper
- before and after creating the ReplicationController.
+ before and after creating the replication controller.
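
One simple way to grab that output, assuming ```nc``` is available in your environment ($ZOOKEEPER_IP below is a placeholder for the service address, which you can look up with ```kubectl get services```):

```shell
# Ask ZooKeeper for its stats via the four-letter "stat" command:
echo stat | nc $ZOOKEEPER_IP 2181
```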

(Pull requests welcome for alternative ways to validate the workers)

6 changes: 3 additions & 3 deletions examples/update-demo/README.md
@@ -15,11 +15,11 @@ limitations under the License.
-->
# Live update example
- This example demonstrates the usage of Kubernetes to perform a live update on a running group of pods.
+ This example demonstrates the usage of Kubernetes to perform a live update on a running group of [pods](../../docs/pods.md).

### Step Zero: Prerequisites

- This example assumes that you have forked the repository and [turned up a Kubernetes cluster](https://github.com/GoogleCloudPlatform/kubernetes-new#contents):
+ This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides):

```bash
$ cd kubernetes
@@ -65,7 +65,7 @@ $ ./cluster/kubectl.sh rolling-update update-demo-nautilus --update-period=10s -
```
The rolling-update command in kubectl will do two things:

- 1. Create a new replication controller with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
+ 1. Create a new [replication controller](../../docs/replication-controller.md) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them (a simple way to watch this from a terminal is sketched below).
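
One way to watch the hand-off is a plain polling loop; this is a sketch, and controller and pod names will vary with your setup:

```bash
# Poll the replication controllers and pods while the rolling update runs:
while true; do
  ./cluster/kubectl.sh get rc
  ./cluster/kubectl.sh get pods
  sleep 5
done
```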

Watch the [demo website](http://localhost:8001/static/index.html); it will update one pod every 10 seconds until all of the pods have the new image.
