Merge pull request kubernetes#2026 from satnam6502/201
Fix of a few 201 typos
brendandburns committed Oct 28, 2014
2 parents a619764 + 295ff57 commit 1c61486
Showing 1 changed file with 10 additions and 9 deletions.
examples/walkthrough/k8s201.md: 19 changes (10 additions, 9 deletions)
@@ -16,11 +16,12 @@ Lists all pods who name label matches 'nginx'. Labels are discussed in detail [
 
 ### Replication Controllers
 
-Ok, now you have an awesome, multi-container, labelled pod and you want to use it to built an application, you might be tempted to just start build a whole bunch of individual pods , but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down and how will you ensure that all pods are homogenous.
+OK, now you have an awesome, multi-container, labelled pod and you want to use it to build an application, you might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down and how will you ensure that all pods are homogenous?
 
-Replication controllers are the object to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods. The design of replica controllers is discussed in detail [elsewhere](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md).
+Replication controllers are the objects to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods. The design of replica controllers is discussed in detail [elsewhere](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md).
 
 An example replica controller that instantiates two pods running nginx looks like:
+
 ```yaml
 id: nginxController
 apiVersion: v1beta1
@@ -37,7 +38,7 @@ desiredState:
     desiredState:
       manifest:
         version: v1beta1
-        id: ngix
+        id: nginx
         containers:
           - name: nginx
             image: dockerfile/nginx
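
Since the diff view collapses the unchanged middle of this controller config, the fragments above are hard to read in isolation. Purely as a hedged sketch of how the whole v1beta1 object plausibly fits together: only the lines visible in the hunks are confirmed by the diff, and fields such as `replicas`, `replicaSelector`, and `podTemplate` are assumptions here.

```yaml
# Hedged reconstruction: only the fragments shown in the diff hunks are
# confirmed; replicas, replicaSelector, podTemplate, and labels are assumed.
id: nginxController
apiVersion: v1beta1
kind: ReplicationController
desiredState:
  # desired number of pod replicas (assumed value)
  replicas: 2
  # label selector identifying the set of pods this controller manages
  replicaSelector:
    name: nginx
  # the "cookie-cutter" template used to stamp out new pods
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: nginx
        containers:
          - name: nginx
            image: dockerfile/nginx
            ports:
              - containerPort: 80
    labels:
      name: nginx
labels:
  name: nginx
```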
@@ -50,7 +51,7 @@ desiredState:
 ```
 
 ### Services
 
-Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your frontends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your frontends. In Kubernetes the Service API object achieves these goals. A Service basically combines an IP address and a label selector together to form a simple, static rallying point for connecting to a micro-service in your application.
+Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes the Service API object achieves these goals. A Service basically combines an IP address and a label selector together to form a simple, static rallying point for connecting to a micro-service in your application.
 
 For example, here is a service that balances across the pods created in the previous nginx replication controller example:
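
The service config that follows this line is collapsed in the diff view. A minimal sketch, assuming the v1beta1 Service schema and the `name: nginx` pod label from the controller above; the `id` and `port` values are illustrative assumptions.

```yaml
# Hedged sketch of a v1beta1 Service; id and port are assumed values.
id: nginxService
apiVersion: v1beta1
kind: Service
# the port this service accepts traffic on
port: 8000
# traffic is routed to any pod whose labels match this selector,
# so rescheduled or re-scaled backends need no front-end changes
selector:
  name: nginx
```

Because the selector is evaluated continuously, pods created, deleted, or moved by the replication controller keep receiving traffic without any front-end reconfiguration, which is exactly the goal the paragraph above describes.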
@@ -84,9 +85,9 @@ In Kubernetes, the health check monitor is the Kubelet agent.
 
 #### Low level process health-checking
 
-The simplest form of health-checking, is just process level health checking. The Kubelet constantly asks the the Docker daemon
+The simplest form of health-checking is just process level health checking. The Kubelet constantly asks the Docker daemon
 if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples
-that you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
+you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
 Kubernetes.
 
 #### Application health-checking
@@ -108,11 +109,10 @@ lockOne.Lock();
 ```
 
 This is a classic example of a problem in computer science known as "Deadlock". From Docker's perspective your application is
-still operating, the process is still running, but from your application's perspective, your code is locked up, and will never
-respond correctly.
+still operating, the process is still running, but from your application's perspective, your code is locked up, and will never respond correctly.
 
 To address this problem, Kubernetes supports user implemented application health-checks. These checks are performed by the
-kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.
+Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.
 
 Currently, there are three types of application health checks that you can choose from:

@@ -125,6 +125,7 @@ In all cases, if the Kubelet discovers a failure, the container is restarted.
 The container health checks are configured in the "LivenessProbe" section of your container config. There you can also specify an "initialDelaySeconds" that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
 
 Here is an example config for a pod with an HTTP health check:
+
 ```yaml
 kind: Pod
 apiVersion: v1beta1
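
The pod config is truncated after its first two lines in this view. As a hedged sketch of how it plausibly continues, reusing the nginx container from the controller example and the `LivenessProbe` and `initialDelaySeconds` fields described above: the probe layout, `id`, path, and port are assumptions, not part of the diff.

```yaml
# Hedged sketch: only `kind: Pod` and `apiVersion: v1beta1` appear in the
# diff; every field below is assumed for illustration.
kind: Pod
apiVersion: v1beta1
id: nginx-health
desiredState:
  manifest:
    version: v1beta1
    id: nginx-health
    containers:
      - name: nginx
        image: dockerfile/nginx
        ports:
          - containerPort: 80
        livenessProbe:
          # grace period after startup before the Kubelet begins probing,
          # giving the container time to initialize
          initialDelaySeconds: 30
          # HTTP probe: the Kubelet issues a GET against this path and
          # restarts the container if the check keeps failing
          httpGet:
            path: /index.html
            port: 80
labels:
  name: nginx
```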
