Copy edits for typos
epc committed Aug 9, 2015
1 parent 2bfa9a1 commit 35a5eda
Showing 33 changed files with 42 additions and 42 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -73,7 +73,7 @@ Kubernetes documentation is organized into several categories.
- in the [Kubernetes Cluster Admin Guide](docs/admin/README.md)
- **Developer and API documentation**
- for people who want to write programs that access the Kubernetes API, write plugins
-or extensions, or modify the core Kubernete code
+or extensions, or modify the core Kubernetes code
- in the [Kubernetes Developer Guide](docs/devel/README.md)
- see also [notes on the API](docs/api.md)
- see also the [API object documentation](http://kubernetes.io/third_party/swagger-ui/), a
2 changes: 1 addition & 1 deletion cluster/addons/README.md
@@ -6,7 +6,7 @@ Kubernetes clusters. The add-ons are visible through the API (they can be listed
using ```kubectl```), but manipulation of these objects is discouraged because
the system will bring them back to the original state, in particular:
* if an add-on is stopped, it will be restarted automatically
-* if an add-on is rolling-updated (for Replication Controlers), the system will stop the new version and
+* if an add-on is rolling-updated (for Replication Controllers), the system will stop the new version and
start the old one again (or perform rolling update to the old version, in the
future).

2 changes: 1 addition & 1 deletion cluster/addons/dns/README.md
@@ -164,7 +164,7 @@ If you see that, DNS is working correctly.

## How does it work?
SkyDNS depends on etcd for what to serve, but it doesn't really need all of
-what etcd offers (at least not in the way we use it). For simplicty, we run
+what etcd offers (at least not in the way we use it). For simplicity, we run
etcd and SkyDNS together in a pod, and we do not try to link etcd instances
across replicas. A helper container called [kube2sky](kube2sky/) also runs in
the pod and acts a bridge between Kubernetes and SkyDNS. It finds the
2 changes: 1 addition & 1 deletion cluster/addons/dns/kube2sky/README.md
@@ -26,7 +26,7 @@ mutation (insertion or removal of a dns entry) before giving up and crashing.

`--etcd-server`: The etcd server that is being used by skydns.

-`--kube_master_url`: URL of kubernetes master. Reuired if `--kubecfg_file` is not set.
+`--kube_master_url`: URL of kubernetes master. Required if `--kubecfg_file` is not set.

`--kubecfg_file`: Path to kubecfg file that contains the master URL and tokens to authenticate with the master.

4 changes: 2 additions & 2 deletions cluster/juju/bundles/README.md
@@ -14,7 +14,7 @@ containerized applications.

The [Juju](https://juju.ubuntu.com) system provides provisioning and
orchestration across a variety of clouds and bare metal. A juju bundle
-describes collection of services and how they interelate. `juju
+describes collection of services and how they interrelate. `juju
quickstart` allows you to bootstrap a deployment environment and
deploy a bundle.

@@ -136,7 +136,7 @@ configuration on it's own

## Installing the kubectl outside of kubernetes master machine

-Download the Kuberentes release from:
+Download the Kubernetes release from:
https://github.com/GoogleCloudPlatform/kubernetes/releases and extract
the release, you can then just directly use the cli binary at
./kubernetes/platforms/linux/amd64/kubectl
2 changes: 1 addition & 1 deletion cluster/saltbase/pillar/README.md
@@ -1,6 +1,6 @@
The
[SaltStack pillar](http://docs.saltstack.com/en/latest/topics/pillar/)
-data is partially statically dervied from the contents of this
+data is partially statically derived from the contents of this
directory. The bulk of the pillars are hard to perceive from browsing
this directory, though, because they are written into
[cluster-params.sls](cluster-params.sls) at cluster inception.
2 changes: 1 addition & 1 deletion contrib/exec-healthz/README.md
@@ -1,6 +1,6 @@
# Exec healthz server

-The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncracies of container runtime exec implemetations.
+The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncrasies of container runtime exec implementations.

## Examples:

2 changes: 1 addition & 1 deletion contrib/logging/fluentd-sidecar-es/README.md
@@ -1,5 +1,5 @@
# Collecting log files from within containers with Fluentd and sending them to Elasticsearch.
-*Note that this only works for clusters with an Elastisearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.*
+*Note that this only works for clusters with an ElasticSearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.*

This directory contains the source files needed to make a Docker image that collects log files from arbitrary files within a container using [Fluentd](http://www.fluentd.org/) and sends them to the cluster's Elasticsearch service.
The image is designed to be used as a sidecar container as part of a pod.
2 changes: 1 addition & 1 deletion contrib/mesos/docs/ha.md
@@ -34,7 +34,7 @@ In this case, if there are problems launching a replacement scheduler process th
##### Command Line Arguments

- `--ha` is required to enable scheduler HA and multi-scheduler leader election.
-- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URI's and must be identicial across schedulers.
+- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URI's and must be identical across schedulers.

If you have HDFS installed on your slaves then you can specify HDFS URI locations for the binaries:

8 changes: 4 additions & 4 deletions contrib/prometheus/README.md
@@ -25,7 +25,7 @@ Looks open enough :).

1. Now, you can start this pod, like so `kubectl create -f contrib/prometheus/prometheus-all.json`. This ReplicationController will maintain both prometheus, the server, as well as promdash, the visualization tool. You can then configure promdash, and next time you restart the pod - you're configuration will be remain (since the promdash directory was mounted as a local docker volume).

-1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (locahost:9090)to as a promdash server, and create a dashboard according to the promdash directions.
+1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (localhost:9090)to as a promdash server, and create a dashboard according to the promdash directions.

## Prometheus

@@ -52,14 +52,14 @@ This is a v1 api based, containerized prometheus ReplicationController, which sc

1. Use kubectl to handle auth & proxy the kubernetes API locally, emulating the old KUBERNETES_RO service.

-1. The list of services to be monitored is passed as a command line aguments in
+1. The list of services to be monitored is passed as a command line arguments in
the yaml file.

1. The startup scripts assumes that each service T will have
2 environment variables set ```T_SERVICE_HOST``` and ```T_SERVICE_PORT```

1. Each can be configured manually in yaml file if you want to monitor something
-that is not a regular Kubernetes service. For example, you can add comma delimted
+that is not a regular Kubernetes service. For example, you can add comma delimited
endpoints which can be scraped like so...
```
- -t
@@ -77,7 +77,7 @@ at port 9090.
# TODO

- We should publish this image into the kube/ namespace.
-- Possibly use postgre or mysql as a promdash database.
+- Possibly use Postgres or mysql as a promdash database.
- stop using kubectl to make a local proxy faking the old RO port and build in
real auth capabilities.

2 changes: 1 addition & 1 deletion contrib/service-loadbalancer/README.md
@@ -191,7 +191,7 @@ $ mysql -u root -ppassword --host 104.197.63.17 --port 3306 -e 'show databases;'
### Troubleshooting:
- If you can curl or netcat the endpoint from the pod (with kubectl exec) and not from the node, you have not specified hostport and containerport.
- If you can hit the ips from the node but not from your machine outside the cluster, you have not opened firewall rules for the right network.
-- If you can't hit the ips from within the container, either haproxy or the service_loadbalacer script is not runing.
+- If you can't hit the ips from within the container, either haproxy or the service_loadbalacer script is not running.
1. Use ps in the pod
2. sudo restart haproxy in the pod
3. cat /etc/haproxy/haproxy.cfg in the pod
4 changes: 2 additions & 2 deletions docs/admin/cluster-management.md
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
upgrading your cluster's
-master and worker nodes, performing node maintainence (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
+master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
running cluster.

## Creating and configuring a Cluster
@@ -132,7 +132,7 @@ For pods with a replication controller, the pod will eventually be replaced by a

For pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.

-Perform maintainence work on the node.
+Perform maintenance work on the node.

Make the node schedulable again:

2 changes: 1 addition & 1 deletion docs/admin/etcd.md
@@ -41,7 +41,7 @@ objects.

Access Control: give *only* kube-apiserver read/write access to etcd. You do not
want apiserver's etcd exposed to every node in your cluster (or worse, to the
-internet at large), because access to etcd is equivilent to root in your
+internet at large), because access to etcd is equivalent to root in your
cluster.

Data Reliability: for reasonable safety, either etcd needs to be run as a
2 changes: 1 addition & 1 deletion docs/admin/kubelet.md
@@ -41,7 +41,7 @@ Documentation for other releases can be found at
The kubelet is the primary "node agent" that runs on each
node. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object
that describes a pod. The kubelet takes a set of PodSpecs that are provided through
-various echanisms (primarily through the apiserver) and ensures that the containers
+various mechanisms (primarily through the apiserver) and ensures that the containers
described in those PodSpecs are running and healthy.

Other than from an PodSpec from the apiserver, there are three ways that a container
2 changes: 1 addition & 1 deletion docs/admin/service-accounts-admin.md
@@ -84,7 +84,7 @@ TokenController runs as part of controller-manager. It acts asynchronously. It:
- observes serviceAccount creation and creates a corresponding Secret to allow API access.
- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets
- observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed
-- observes secret deleteion and removes a reference from the corresponding ServiceAccount if needed
+- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed

#### To create additional API tokens

2 changes: 1 addition & 1 deletion docs/devel/development.md
@@ -87,7 +87,7 @@ Note: If you have write access to the main repository at github.com/GoogleCloudP
git remote set-url --push upstream no_push
```

-### Commiting changes to your fork
+### Committing changes to your fork

```sh
git commit
2 changes: 1 addition & 1 deletion docs/getting-started-guides/coreos/azure/README.md
@@ -223,7 +223,7 @@ frontend-z9oxo 1/1 Running 0 41s

## Exposing the app to the outside world

-There is no native Azure load-ballancer support in Kubernets 1.0, however here is how you can expose the Guestbook app to the Internet.
+There is no native Azure load-balancer support in Kubernetes 1.0, however here is how you can expose the Guestbook app to the Internet.

```
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
4 changes: 2 additions & 2 deletions docs/getting-started-guides/docker-multinode.md
@@ -87,7 +87,7 @@ cd kubernetes/cluster/docker-multinode

`Master done!`

-See [here](docker-multinode/master.md) for detailed instructions explaination.
+See [here](docker-multinode/master.md) for detailed instructions explanation.

## Adding a worker node

@@ -104,7 +104,7 @@ cd kubernetes/cluster/docker-multinode

`Worker done!`

-See [here](docker-multinode/worker.md) for detailed instructions explaination.
+See [here](docker-multinode/worker.md) for detailed instructions explanation.

## Testing your cluster

2 changes: 1 addition & 1 deletion docs/getting-started-guides/docker.md
@@ -74,7 +74,7 @@ parameters as follows:
```

NOTE: The above is specifically for GRUB2.
-You can check the command line parameters passed to your kenel by looking at the
+You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:

```console
@@ -187,7 +187,7 @@ cd ~/kubernetes/contrib/ansible/

That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.

-**Show kubernets nodes**
+**Show kubernetes nodes**

Run the following on the kube-master:

2 changes: 1 addition & 1 deletion docs/getting-started-guides/scratch.md
@@ -657,7 +657,7 @@ This pod mounts several node file system directories using the `hostPath` volum
authenticate external services, such as a cloud provider.
- This is not required if you do not use a cloud provider (e.g. bare-metal).
- The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the
-node disk. These could instead be stored on a persistend disk, such as a GCE PD, or baked into the image.
+node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
- Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template).
- Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl.

6 changes: 3 additions & 3 deletions docs/proposals/apiserver_watch.md
@@ -67,14 +67,14 @@ When a client sends a watch request to apiserver, instead of redirecting it to
etcd, it will cause:

- registering a handler to receive all new changes coming from etcd
-- iteratiting though a watch window, starting at the requested resourceVersion
-to the head and sending filetered changes directory to the client, blocking
+- iterating though a watch window, starting at the requested resourceVersion
+to the head and sending filtered changes directory to the client, blocking
the above until this iteration has caught up

This will be done be creating a go-routine per watcher that will be responsible
for performing the above.

-The following section describes the proposal in more details, analizes some
+The following section describes the proposal in more details, analyzes some
corner cases and divides the whole design in more fine-grained steps.


4 changes: 2 additions & 2 deletions docs/user-guide/connecting-applications.md
@@ -238,8 +238,8 @@ Address 1: 10.0.116.146
## Securing the Service

Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
-* Self signed certificates for https (unless you already have an identitiy certificate)
-* An nginx server configured to use the cretificates
+* Self signed certificates for https (unless you already have an identity certificate)
+* An nginx server configured to use the certificates
* A [secret](secrets.md) that makes the certificates accessible to pods

You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short:
2 changes: 1 addition & 1 deletion docs/user-guide/docker-cli-to-kubectl.md
@@ -214,7 +214,7 @@ $ kubectl logs -f nginx-app-zibvs

```

-Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invokation is separate. To see the output from a prevoius run in Kubernetes, do this:
+Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:

```console

2 changes: 1 addition & 1 deletion docs/user-guide/pod-states.md
@@ -58,7 +58,7 @@ A [Probe](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1

* `ExecAction`: executes a specified command inside the container expecting on success that the command exits with status code 0.
* `TCPSocketAction`: performs a tcp check against the container's IP address on a specified port expecting on success that the port is open.
-* `HTTPGetAction`: performs an HTTP Get againsts the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400.
+* `HTTPGetAction`: performs an HTTP Get against the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400.

Each probe will have one of three results:

2 changes: 1 addition & 1 deletion docs/whatisk8s.md
@@ -61,7 +61,7 @@ Here are some key points:
* **Application-centric management**:
Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. This provides the simplicity of PaaS with the flexibility of IaaS and enables you to run much more than just [12-factor apps](http://12factor.net/).
* **Dev and Ops separation of concerns**:
-Provides separatation of build and deployment; therefore, decoupling applications from infrastructure.
+Provides separation of build and deployment; therefore, decoupling applications from infrastructure.
* **Agile application creation and deployment**:
Increased ease and efficiency of container image creation compared to VM image use.
* **Continuous development, integration, and deployment**:
2 changes: 1 addition & 1 deletion examples/cassandra/README.md
@@ -244,7 +244,7 @@ spec:
[Download example](cassandra-controller.yaml)
<!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

-Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the resplication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.
+Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.

Create this controller:

2 changes: 1 addition & 1 deletion examples/elasticsearch/README.md
@@ -40,7 +40,7 @@ with [replication controllers](../../docs/user-guide/replication-controller.md).
because multicast discovery will not find the other pod IPs needed to form a cluster. This
image detects other Elasticsearch [pods](../../docs/user-guide/pods.md) running in a specified [namespace](../../docs/user-guide/namespaces.md) with a given
label selector. The detected instances are used to form a list of peer hosts which
-are used as part of the unicast discovery mechansim for Elasticsearch. The detection
+are used as part of the unicast discovery mechanism for Elasticsearch. The detection
of the peer nodes is done by a program which communicates with the Kubernetes API
server to get a list of matching Elasticsearch pods. To enable authenticated
communication this image needs a [secret](../../docs/user-guide/secrets.md) to be mounted at `/etc/apiserver-secret`
2 changes: 1 addition & 1 deletion examples/guestbook-go/README.md
@@ -280,7 +280,7 @@ You can now play with the guestbook that you just created by opening it in a bro

### Step Eight: Cleanup <a id="step-eight"></a>

-After you're done playing with the guestbook, you can cleanup by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kuberentes replication controllers and services.
+After you're done playing with the guestbook, you can cleanup by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services.

Delete all the resources by running the following `kubectl delete -f` *`filename`* command:
