Copy edits for typos
epc committed Jul 13, 2015
1 parent a1efb50 commit 98e9f1e
Showing 20 changed files with 27 additions and 27 deletions.
2 changes: 1 addition & 1 deletion docs/application-troubleshooting.md
@@ -42,7 +42,7 @@ You don't have enough resources. You may have exhausted the supply of CPU or Me
you need to delete Pods, adjust resource requests, or add new nodes to your cluster.

You are using ```hostPort```. When you bind a Pod to a ```hostPort``` there are a limited number of places that pod can be
-scheduled. In most cases, ```hostPort``` is unnecesary, try using a Service object to expose your Pod. If you do require
+scheduled. In most cases, ```hostPort``` is unnecessary, try using a Service object to expose your Pod. If you do require
```hostPort``` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
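
An illustrative sketch of the Service alternative mentioned above (the name `my-app`, its label, and both port numbers are hypothetical, not taken from these docs): a Service gives the pods a stable virtual IP, so they no longer compete for a fixed port on each node.

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # matches the label on the Pods to expose
  ports:
    - port: 80         # port clients use to reach the Service
      targetPort: 8080 # port the Pod's container listens on
EOF
```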


2 changes: 1 addition & 1 deletion docs/compute_resources.md
@@ -17,7 +17,7 @@ consistent manner.
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified
in units of cores. Memory is specified in units of bytes.

-CPU and RAM are collectively refered to as *compute resources*, or just *resources*. Compute
+CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute
resources are measureable quantities which can be requested, allocated, and consumed. They are
distinct from [API resources](working_with_resources.md). API resources, such as pods and
[services](services.md) are objects that can be written to and retrieved from the Kubernetes API
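
To illustrate the units above (a hedged sketch that is not part of compute_resources.md; the pod name, image, and quantities are made up), a container asks for CPU in cores and memory in byte-based quantities:

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: 500m      # half a core
          memory: 64Mi   # 64 mebibytes
        limits:
          cpu: "1"       # at most one core
          memory: 128Mi
EOF
```
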
2 changes: 1 addition & 1 deletion docs/design/networking.md
@@ -128,7 +128,7 @@ to serve the purpose outside of GCE.

The [service](../services.md) abstraction provides a way to group pods under a
common access policy (e.g. load-balanced). The implementation of this creates a
-virtual IP which clients can access and which is transparantly proxied to the
+virtual IP which clients can access and which is transparently proxied to the
pods in a Service. Each node runs a kube-proxy process which programs
`iptables` rules to trap access to service IPs and redirect them to the correct
backends. This provides a highly-available load-balancing solution with low
2 changes: 1 addition & 1 deletion docs/design/security.md
@@ -78,7 +78,7 @@ A pod runs in a *security context* under a *service account* that is defined by
5. Developers should be able to run their own images or images from the community and expect those images to run correctly
6. Developers may need to ensure their images work within higher security requirements specified by administrators
7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met.
-8. When application developers want to share filesytem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes
+8. When application developers want to share filesystem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes
6. Developers should be able to define [secrets](secrets.md) that are automatically added to the containers when pods are run
1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples:
1. An SSH private key for git cloning remote data
4 changes: 2 additions & 2 deletions docs/devel/releasing.md
@@ -21,7 +21,7 @@ You should progress in this strict order.
#### Selecting Release Components

When cutting a major/minor release, your first job is to find the branch
-point. We cut `vX.Y.0` releases directly from `master`, which is also the the
+point. We cut `vX.Y.0` releases directly from `master`, which is also the
branch that we have most continuous validation on. Go first to [the main GCE
Jenkins end-to-end job](http://go/k8s-test/job/kubernetes-e2e-gce) and next to [the
Critical Builds page](http://go/k8s-test/view/Critical%20Builds) and hopefully find a
@@ -42,7 +42,7 @@ Because Jenkins builds frequently, if you're looking between jobs
`kubernetes-e2e-gce` build (but please check that it corresponds to a temporally
similar build that's green on `kubernetes-e2e-gke-ci`). Lastly, if you're having
trouble understanding why the GKE continuous integration clusters are failing
-and you're trying to cut a release, don't hesistate to contact the GKE
+and you're trying to cut a release, don't hesitate to contact the GKE
oncall.

Before proceeding to the next step:
2 changes: 1 addition & 1 deletion docs/downward_api.md
@@ -14,7 +14,7 @@ the Pod's name, for example, and inject it into this well-known variable.

## Capabilities

-The following information is available to a `Pod` through the the downward API:
+The following information is available to a `Pod` through the downward API:

* The pod's name
* The pod's namespace
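
To show how the fields listed above are consumed (a sketch with hypothetical pod and variable names, assuming the `fieldRef` environment-variable form of the downward API), the name and namespace can be injected like this:

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  containers:
    - name: printer
      image: busybox
      command: ["env"]        # prints the injected variables
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
EOF
```
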
4 changes: 2 additions & 2 deletions docs/getting-started-guides/centos/centos_manual_config.md
@@ -40,7 +40,7 @@ gpgcheck=0
yum -y install --enablerepo=virt7-testing kubernetes
```

-* Note * Using etcd-0.4.6-7 (This is temperory update in documentation)
+* Note * Using etcd-0.4.6-7 (This is temporary update in documentation)

If you do not get etcd-0.4.6-7 installed with virt7-testing repo,

@@ -80,7 +80,7 @@ KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
```

-* Disable the firewall on both the master and minon, as docker does not play well with other firewall rule managers
+* Disable the firewall on both the master and minion, as docker does not play well with other firewall rule managers

```
systemctl disable iptables-services firewalld
2 changes: 1 addition & 1 deletion docs/getting-started-guides/docker-multinode/master.md
@@ -37,7 +37,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/googl


### Set up Flannel on the master node
-Flannel is a network abstraction layer build by CoreOS, we will use it to provide simplfied networking between our Pods of containers.
+Flannel is a network abstraction layer build by CoreOS, we will use it to provide simplified networking between our Pods of containers.

Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker.
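
The reconfiguration described above amounts to roughly the following steps (a rough sketch only; exact commands, file locations, and the flannel subnet file path vary by distribution and Docker version and are assumptions here, so follow the guide itself):

```
# stop Docker, remove its old bridge, then restart it on the flannel subnet
sudo /etc/init.d/docker stop
sudo ip link set dev docker0 down
sudo brctl delbr docker0
source /run/flannel/subnet.env            # flannel publishes FLANNEL_SUBNET and FLANNEL_MTU here
sudo docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &
```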

4 changes: 2 additions & 2 deletions docs/getting-started-guides/juju.md
@@ -183,7 +183,7 @@ The [k8petstore example](../../examples/k8petstore/) is available as a

juju action do kubernetes-master/0

-Note: this example includes curl statements to exercise the app, which automatically generates "petstore" transactions written to redis, and allows you to visualize the throughput in your browswer.
+Note: this example includes curl statements to exercise the app, which automatically generates "petstore" transactions written to redis, and allows you to visualize the throughput in your browser.

## Tear down cluster

@@ -199,7 +199,7 @@ Kubernetes Bundle on Github

- [Bundle Repository](https://github.com/whitmo/bundle-kubernetes)
* [Kubernetes master charm](https://github.com/whitmo/charm-kubernetes-master)
-* [Kubernetes mininion charm](https://github.com/whitmo/charm-kubernetes)
+* [Kubernetes minion charm](https://github.com/whitmo/charm-kubernetes)
- [Bundle Documentation](http://whitmo.github.io/bundle-kubernetes)
- [More about Juju](https://juju.ubuntu.com)

2 changes: 1 addition & 1 deletion docs/getting-started-guides/logging.md
@@ -138,7 +138,7 @@ spec:
path: /var/lib/docker/containers
```

-This pod specification maps the the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
+This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
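
The mapping described above is done with a `hostPath` volume; a condensed sketch of such a spec (only the host path and image come from the paragraph above, the remaining field values are illustrative):

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-cloud-logging
spec:
  containers:
    - name: fluentd-cloud-logging
      image: gcr.io/google_containers/fluentd-gcp:1.6
      volumeMounts:
        - name: containers
          mountPath: /var/lib/docker/containers   # same path inside the container
  volumes:
    - name: containers
      hostPath:
        path: /var/lib/docker/containers          # Docker log files on the host
EOF
```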

We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:

2 changes: 1 addition & 1 deletion docs/getting-started-guides/scratch.md
@@ -361,7 +361,7 @@ installation, by following examples given in the Docker documentation.

### rkt

-[rkt](https://github.com/coreos/rkt) is an alterative to Docker. You only need to install one of Docker or rkt.
+[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt.

*TODO*: how to install and configure rkt.

2 changes: 1 addition & 1 deletion docs/getting-started-guides/ubuntu.md
@@ -22,7 +22,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ma

*3 These guide is tested OK on Ubuntu 14.04 LTS 64bit server, but it should also work on most Ubuntu versions*

-*4 Dependences of this guide: etcd-2.0.12, flannel-0.4.0, k8s-0.19.3, but it may work with higher versions*
+*4 Dependencies of this guide: etcd-2.0.12, flannel-0.4.0, k8s-0.19.3, but it may work with higher versions*

*5 All the remote servers can be ssh logged in without a password by using key authentication*

8 changes: 4 additions & 4 deletions docs/kube-controller-manager.md
@@ -25,18 +25,18 @@ controller, and serviceaccounts controller.
--cluster-cidr=<nil>: CIDR Range for Pods in cluster.
--cluster-name="": The instance prefix for the cluster
--concurrent-endpoint-syncs=0: The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
---concurrent_rc_syncs=0: The number of replication controllers that are allowed to sync concurrently. Larger number = more reponsive replica management, but more CPU (and network) load
+--concurrent_rc_syncs=0: The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--deleting-pods-burst=10: Number of nodes on which pods are bursty deleted in case of node failure. For more details look into RateLimiter.
--deleting-pods-qps=0.1: Number of nodes per second on which pods are deleted in case of node failure.
-h, --help=false: help for kube-controller-manager
--kubeconfig="": Path to kubeconfig file with authorization and master location information.
--master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
--namespace-sync-period=0: The period for syncing namespace life-cycle updates
---node-monitor-grace-period=40s: Amount of time which we allow running Node to be unresponsive before marking it unhealty. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
+--node-monitor-grace-period=40s: Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
--node-monitor-period=5s: The period for syncing NodeStatus in NodeController.
---node-startup-grace-period=1m0s: Amount of time which we allow starting Node to be unresponsive before marking it unhealty.
+--node-startup-grace-period=1m0s: Amount of time which we allow starting Node to be unresponsive before marking it unhealthy.
--node-sync-period=0: The period for syncing nodes from cloudprovider. Longer periods will result in fewer calls to cloud provider, but may delay addition of new nodes to cluster.
---pod-eviction-timeout=0: The grace peroid for deleting pods on failed nodes.
+--pod-eviction-timeout=0: The grace period for deleting pods on failed nodes.
--port=0: The port that the controller-manager's http service runs on
--profiling=true: Enable profiling via web interface host:port/debug/pprof/
--pvclaimbinder-sync-period=0: The period for syncing persistent volumes and persistent volume claims
2 changes: 1 addition & 1 deletion docs/kubectl_patch.md
@@ -25,7 +25,7 @@ kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

```
-h, --help=false: help for patch
--p, --patch="": The patch to be appied to the resource JSON file.
+-p, --patch="": The patch to be applied to the resource JSON file.
```
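
For instance (an illustrative follow-up that reuses the node name from the example near the top of this file, not an addition from this commit), the cordoned node can be made schedulable again by patching the same field back:

```
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":false}}'
```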

### Options inherited from parent commands
2 changes: 1 addition & 1 deletion docs/proposals/autoscaling.md
@@ -208,7 +208,7 @@ be specified as "when requests per second fall below 25 for 30 seconds scale the
This section has intentionally been left empty. I will defer to folks who have more experience gathering and analyzing
time series statistics.

-Data aggregation is opaque to the the auto-scaler resource. The auto-scaler is configured to use `AutoScaleThresholds`
+Data aggregation is opaque to the auto-scaler resource. The auto-scaler is configured to use `AutoScaleThresholds`
that know how to work with the underlying data in order to know if an application must be scaled up or down. Data aggregation
must feed a common data structure to ease the development of `AutoScaleThreshold`s but it does not matter to the
auto-scaler whether this occurs in a push or pull implementation, whether or not the data is stored at a granular level,
2 changes: 1 addition & 1 deletion docs/proposals/high-availability.md
@@ -4,7 +4,7 @@ This document serves as a proposal for high availability of the scheduler and co
## Design Options
For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en)

-1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby daemon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.
+1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the standby daemon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.

2. **Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the approach that this proposal will leverage.

2 changes: 1 addition & 1 deletion docs/replication-controller.md
@@ -28,7 +28,7 @@ Note that replication controllers may themselves have labels and would generally

Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).

-Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. (Note that the client tool, kubectl, provides a single operation, [stop](kubectl_stop.md) to delete both the replication controller and the pods it controlls. However, there is no such operation in the API at the moment)
+Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. (Note that the client tool, kubectl, provides a single operation, [stop](kubectl_stop.md) to delete both the replication controller and the pods it controls. However, there is no such operation in the API at the moment)
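
A hedged sketch of the sequence just described (the controller name `my-rc` is hypothetical): set `replicas` to 0 so the controlled pods go away, delete the controller, or let the single `stop` operation mentioned above do both.

```
kubectl patch rc my-rc -p '{"spec":{"replicas":0}}'   # scale the controlled pods away first
kubectl delete rc my-rc                               # then remove the now-empty controller
# or, equivalently, the single kubectl operation noted above:
kubectl stop rc my-rc
```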

## Responsibilities of the replication controller

2 changes: 1 addition & 1 deletion docs/service_accounts.md
@@ -5,7 +5,7 @@ A service account provides an identity for processes that run in a Pod.
*This is a user introduction to Service Accounts. See also the
[Cluster Admin Guide to Service Accounts](service_accounts_admin.md).*

-*Note: This document descibes how service accounts behave in a cluster set up
+*Note: This document describes how service accounts behave in a cluster set up
as recommended by the Kubernetes project. Your cluster administrator may have
customized the behavior in your cluster, in which case this documentation may
not apply.*
2 changes: 1 addition & 1 deletion docs/troubleshooting.md
@@ -1,7 +1,7 @@
# Troubleshooting
Sometimes things go wrong. This guide is aimed at making them right. It has two sections:
* [Troubleshooting your application](application-troubleshooting.md) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
-* [Troubleshooting your cluster](cluster-troubleshooting.md) - Useful for cluster adminstrators and people whose Kubernetes cluster is unhappy.
+* [Troubleshooting your cluster](cluster-troubleshooting.md) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy.


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/troubleshooting.md?pixel)]()
4 changes: 2 additions & 2 deletions docs/volumes.md
@@ -223,7 +223,7 @@ mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is
removed, the contents of an `nfs` volume are preserved and the volume is merely
unmounted. This means that an NFS volume can be pre-populated with data, and
that data can be "handed off" between pods. NFS can be mounted by multiple
-writers simultaneuosly.
+writers simultaneously.

__Important: You must have your own NFS server running with the share exported
before you can use it__
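
A minimal sketch of such a pod (the server address, export path, and all names here are hypothetical, and they assume an NFS share is already exported as the note above requires):

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: nfs
          mountPath: /usr/share/nginx/html   # serve files straight from the NFS share
  volumes:
    - name: nfs
      nfs:
        server: nfs.example.com   # your NFS server
        path: /exports/web        # the directory it exports
        readOnly: false
EOF
```
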
@@ -266,7 +266,7 @@ source networked filesystem) volume to be mounted into your pod. Unlike
`glusterfs` volume are preserved and the volume is merely unmounted. This
means that a glusterfs volume can be pre-populated with data, and that data can
be "handed off" between pods. GlusterFS can be mounted by multiple writers
-simultaneuosly.
+simultaneously.

__Important: You must have your own GlusterFS installation running before you
can use it__
