From 98e9f1eeae0b02dbe2d8c932e2baae68bf3adbc0 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Sun, 12 Jul 2015 22:03:06 -0400 Subject: [PATCH] Copy edits for typos --- docs/application-troubleshooting.md | 2 +- docs/compute_resources.md | 2 +- docs/design/networking.md | 2 +- docs/design/security.md | 2 +- docs/devel/releasing.md | 4 ++-- docs/downward_api.md | 2 +- .../getting-started-guides/centos/centos_manual_config.md | 4 ++-- docs/getting-started-guides/docker-multinode/master.md | 2 +- docs/getting-started-guides/juju.md | 4 ++-- docs/getting-started-guides/logging.md | 2 +- docs/getting-started-guides/scratch.md | 2 +- docs/getting-started-guides/ubuntu.md | 2 +- docs/kube-controller-manager.md | 8 ++++---- docs/kubectl_patch.md | 2 +- docs/proposals/autoscaling.md | 2 +- docs/proposals/high-availability.md | 2 +- docs/replication-controller.md | 2 +- docs/service_accounts.md | 2 +- docs/troubleshooting.md | 2 +- docs/volumes.md | 4 ++-- 20 files changed, 27 insertions(+), 27 deletions(-) diff --git a/docs/application-troubleshooting.md b/docs/application-troubleshooting.md index 287a924ebf663..0ac52a4785cac 100644 --- a/docs/application-troubleshooting.md +++ b/docs/application-troubleshooting.md @@ -42,7 +42,7 @@ You don't have enough resources. You may have exhausted the supply of CPU or Me you need to delete Pods, adjust resource requests, or add new nodes to your cluster. You are using ```hostPort```. When you bind a Pod to a ```hostPort``` there are a limited number of places that pod can be -scheduled. In most cases, ```hostPort``` is unnecesary, try using a Service object to expose your Pod. If you do require +scheduled. In most cases, ```hostPort``` is unnecessary, try using a Service object to expose your Pod. If you do require ```hostPort``` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster. diff --git a/docs/compute_resources.md b/docs/compute_resources.md index 5f448867faad1..f0c8c402c027e 100644 --- a/docs/compute_resources.md +++ b/docs/compute_resources.md @@ -17,7 +17,7 @@ consistent manner. *CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified in units of cores. Memory is specified in units of bytes. -CPU and RAM are collectively refered to as *compute resources*, or just *resources*. Compute +CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute resources are measureable quantities which can be requested, allocated, and consumed. They are distinct from [API resources](working_with_resources.md). API resources, such as pods and [services](services.md) are objects that can be written to and retrieved from the Kubernetes API diff --git a/docs/design/networking.md b/docs/design/networking.md index af64ed8d7b3f7..210d10e509212 100644 --- a/docs/design/networking.md +++ b/docs/design/networking.md @@ -128,7 +128,7 @@ to serve the purpose outside of GCE. The [service](../services.md) abstraction provides a way to group pods under a common access policy (e.g. load-balanced). The implementation of this creates a -virtual IP which clients can access and which is transparantly proxied to the +virtual IP which clients can access and which is transparently proxied to the pods in a Service. Each node runs a kube-proxy process which programs `iptables` rules to trap access to service IPs and redirect them to the correct backends. 
This provides a highly-available load-balancing solution with low diff --git a/docs/design/security.md b/docs/design/security.md index 4ea7d755cf73c..c2fd092e58e65 100644 --- a/docs/design/security.md +++ b/docs/design/security.md @@ -78,7 +78,7 @@ A pod runs in a *security context* under a *service account* that is defined by 5. Developers should be able to run their own images or images from the community and expect those images to run correctly 6. Developers may need to ensure their images work within higher security requirements specified by administrators 7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met. - 8. When application developers want to share filesytem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes + 8. When application developers want to share filesystem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes 6. Developers should be able to define [secrets](secrets.md) that are automatically added to the containers when pods are run 1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples: 1. An SSH private key for git cloning remote data diff --git a/docs/devel/releasing.md b/docs/devel/releasing.md index 6533858c1ba1a..9cec89e0e1de1 100644 --- a/docs/devel/releasing.md +++ b/docs/devel/releasing.md @@ -21,7 +21,7 @@ You should progress in this strict order. #### Selecting Release Components When cutting a major/minor release, your first job is to find the branch -point. We cut `vX.Y.0` releases directly from `master`, which is also the the +point. We cut `vX.Y.0` releases directly from `master`, which is also the branch that we have most continuous validation on. Go first to [the main GCE Jenkins end-to-end job](http://go/k8s-test/job/kubernetes-e2e-gce) and next to [the Critical Builds page](http://go/k8s-test/view/Critical%20Builds) and hopefully find a @@ -42,7 +42,7 @@ Because Jenkins builds frequently, if you're looking between jobs `kubernetes-e2e-gce` build (but please check that it corresponds to a temporally similar build that's green on `kubernetes-e2e-gke-ci`). Lastly, if you're having trouble understanding why the GKE continuous integration clusters are failing -and you're trying to cut a release, don't hesistate to contact the GKE +and you're trying to cut a release, don't hesitate to contact the GKE oncall. Before proceeding to the next step: diff --git a/docs/downward_api.md b/docs/downward_api.md index b267cca6a4f6e..b30d07c8ae68a 100644 --- a/docs/downward_api.md +++ b/docs/downward_api.md @@ -14,7 +14,7 @@ the Pod's name, for example, and inject it into this well-known variable. 
## Capabilities -The following information is available to a `Pod` through the the downward API: +The following information is available to a `Pod` through the downward API: * The pod's name * The pod's namespace diff --git a/docs/getting-started-guides/centos/centos_manual_config.md b/docs/getting-started-guides/centos/centos_manual_config.md index 2c2f18faab04a..1d8cc8a1b0c82 100644 --- a/docs/getting-started-guides/centos/centos_manual_config.md +++ b/docs/getting-started-guides/centos/centos_manual_config.md @@ -40,7 +40,7 @@ gpgcheck=0 yum -y install --enablerepo=virt7-testing kubernetes ``` -* Note * Using etcd-0.4.6-7 (This is temperory update in documentation) +* Note * Using etcd-0.4.6-7 (This is temporary update in documentation) If you do not get etcd-0.4.6-7 installed with virt7-testing repo, @@ -80,7 +80,7 @@ KUBE_LOG_LEVEL="--v=0" KUBE_ALLOW_PRIV="--allow_privileged=false" ``` -* Disable the firewall on both the master and minon, as docker does not play well with other firewall rule managers +* Disable the firewall on both the master and minion, as docker does not play well with other firewall rule managers ``` systemctl disable iptables-services firewalld diff --git a/docs/getting-started-guides/docker-multinode/master.md b/docs/getting-started-guides/docker-multinode/master.md index 48e7e5c7b4264..39c16ee74bd2d 100644 --- a/docs/getting-started-guides/docker-multinode/master.md +++ b/docs/getting-started-guides/docker-multinode/master.md @@ -37,7 +37,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/googl ### Set up Flannel on the master node -Flannel is a network abstraction layer build by CoreOS, we will use it to provide simplfied networking between our Pods of containers. +Flannel is a network abstraction layer build by CoreOS, we will use it to provide simplified networking between our Pods of containers. Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker. diff --git a/docs/getting-started-guides/juju.md b/docs/getting-started-guides/juju.md index 2bd478c1eee0e..d3bf19f227aa5 100644 --- a/docs/getting-started-guides/juju.md +++ b/docs/getting-started-guides/juju.md @@ -183,7 +183,7 @@ The [k8petstore example](../../examples/k8petstore/) is available as a juju action do kubernetes-master/0 -Note: this example includes curl statements to exercise the app, which automatically generates "petstore" transactions written to redis, and allows you to visualize the throughput in your browswer. +Note: this example includes curl statements to exercise the app, which automatically generates "petstore" transactions written to redis, and allows you to visualize the throughput in your browser. 
## Tear down cluster @@ -199,7 +199,7 @@ Kubernetes Bundle on Github - [Bundle Repository](https://github.com/whitmo/bundle-kubernetes) * [Kubernetes master charm](https://github.com/whitmo/charm-kubernetes-master) - * [Kubernetes mininion charm](https://github.com/whitmo/charm-kubernetes) + * [Kubernetes minion charm](https://github.com/whitmo/charm-kubernetes) - [Bundle Documentation](http://whitmo.github.io/bundle-kubernetes) - [More about Juju](https://juju.ubuntu.com) diff --git a/docs/getting-started-guides/logging.md b/docs/getting-started-guides/logging.md index 4d9f7dfed30a8..ce57b22e11c5e 100644 --- a/docs/getting-started-guides/logging.md +++ b/docs/getting-started-guides/logging.md @@ -138,7 +138,7 @@ spec: path: /var/lib/docker/containers ``` -This pod specification maps the the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it. +This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it. We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu: diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index 6f915d7081901..52f50ad231c11 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -361,7 +361,7 @@ installation, by following examples given in the Docker documentation. ### rkt -[rkt](https://github.com/coreos/rkt) is an alterative to Docker. You only need to install one of Docker or rkt. +[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt. *TODO*: how to install and configure rkt. 
diff --git a/docs/getting-started-guides/ubuntu.md b/docs/getting-started-guides/ubuntu.md index 973edf7247002..fef2c57f48805 100644 --- a/docs/getting-started-guides/ubuntu.md +++ b/docs/getting-started-guides/ubuntu.md @@ -22,7 +22,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ma *3 These guide is tested OK on Ubuntu 14.04 LTS 64bit server, but it should also work on most Ubuntu versions* -*4 Dependences of this guide: etcd-2.0.12, flannel-0.4.0, k8s-0.19.3, but it may work with higher versions* +*4 Dependencies of this guide: etcd-2.0.12, flannel-0.4.0, k8s-0.19.3, but it may work with higher versions* *5 All the remote servers can be ssh logged in without a password by using key authentication* diff --git a/docs/kube-controller-manager.md b/docs/kube-controller-manager.md index c3705e716d0eb..4860ba17fa3e9 100644 --- a/docs/kube-controller-manager.md +++ b/docs/kube-controller-manager.md @@ -25,18 +25,18 @@ controller, and serviceaccounts controller. --cluster-cidr=: CIDR Range for Pods in cluster. --cluster-name="": The instance prefix for the cluster --concurrent-endpoint-syncs=0: The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load - --concurrent_rc_syncs=0: The number of replication controllers that are allowed to sync concurrently. Larger number = more reponsive replica management, but more CPU (and network) load + --concurrent_rc_syncs=0: The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load --deleting-pods-burst=10: Number of nodes on which pods are bursty deleted in case of node failure. For more details look into RateLimiter. --deleting-pods-qps=0.1: Number of nodes per second on which pods are deleted in case of node failure. -h, --help=false: help for kube-controller-manager --kubeconfig="": Path to kubeconfig file with authorization and master location information. --master="": The address of the Kubernetes API server (overrides any value in kubeconfig) --namespace-sync-period=0: The period for syncing namespace life-cycle updates - --node-monitor-grace-period=40s: Amount of time which we allow running Node to be unresponsive before marking it unhealty. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status. + --node-monitor-grace-period=40s: Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status. --node-monitor-period=5s: The period for syncing NodeStatus in NodeController. - --node-startup-grace-period=1m0s: Amount of time which we allow starting Node to be unresponsive before marking it unhealty. + --node-startup-grace-period=1m0s: Amount of time which we allow starting Node to be unresponsive before marking it unhealthy. --node-sync-period=0: The period for syncing nodes from cloudprovider. Longer periods will result in fewer calls to cloud provider, but may delay addition of new nodes to cluster. - --pod-eviction-timeout=0: The grace peroid for deleting pods on failed nodes. + --pod-eviction-timeout=0: The grace period for deleting pods on failed nodes. 
--port=0: The port that the controller-manager's http service runs on --profiling=true: Enable profiling via web interface host:port/debug/pprof/ --pvclaimbinder-sync-period=0: The period for syncing persistent volumes and persistent volume claims diff --git a/docs/kubectl_patch.md b/docs/kubectl_patch.md index b8683a8f07ab8..02982fc045f3d 100644 --- a/docs/kubectl_patch.md +++ b/docs/kubectl_patch.md @@ -25,7 +25,7 @@ kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' ``` -h, --help=false: help for patch - -p, --patch="": The patch to be appied to the resource JSON file. + -p, --patch="": The patch to be applied to the resource JSON file. ``` ### Options inherited from parent commands diff --git a/docs/proposals/autoscaling.md b/docs/proposals/autoscaling.md index 3acaf298c22e1..b767e132b8c61 100644 --- a/docs/proposals/autoscaling.md +++ b/docs/proposals/autoscaling.md @@ -208,7 +208,7 @@ be specified as "when requests per second fall below 25 for 30 seconds scale the This section has intentionally been left empty. I will defer to folks who have more experience gathering and analyzing time series statistics. -Data aggregation is opaque to the the auto-scaler resource. The auto-scaler is configured to use `AutoScaleThresholds` +Data aggregation is opaque to the auto-scaler resource. The auto-scaler is configured to use `AutoScaleThresholds` that know how to work with the underlying data in order to know if an application must be scaled up or down. Data aggregation must feed a common data structure to ease the development of `AutoScaleThreshold`s but it does not matter to the auto-scaler whether this occurs in a push or pull implementation, whether or not the data is stored at a granular level, diff --git a/docs/proposals/high-availability.md b/docs/proposals/high-availability.md index 60ccfce68caca..ece4739588ed3 100644 --- a/docs/proposals/high-availability.md +++ b/docs/proposals/high-availability.md @@ -4,7 +4,7 @@ This document serves as a proposal for high availability of the scheduler and co ## Design Options For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en) -1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby daemon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time. +1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the standby daemon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time. 2. 
**Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the approach that this proposal will leverage. diff --git a/docs/replication-controller.md b/docs/replication-controller.md index f114291df0679..89a6c58b3fcb4 100644 --- a/docs/replication-controller.md +++ b/docs/replication-controller.md @@ -28,7 +28,7 @@ Note that replication controllers may themselves have labels and would generally Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed). -Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. (Note that the client tool, kubectl, provides a single operation, [stop](kubectl_stop.md) to delete both the replication controller and the pods it controlls. However, there is no such operation in the API at the moment) +Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. (Note that the client tool, kubectl, provides a single operation, [stop](kubectl_stop.md) to delete both the replication controller and the pods it controls. However, there is no such operation in the API at the moment) ## Responsibilities of the replication controller diff --git a/docs/service_accounts.md b/docs/service_accounts.md index b9afe1e0b0e30..4e860fccb9048 100644 --- a/docs/service_accounts.md +++ b/docs/service_accounts.md @@ -5,7 +5,7 @@ A service account provides an identity for processes that run in a Pod. *This is a user introduction to Service Accounts. See also the [Cluster Admin Guide to Service Accounts](service_accounts_admin.md).* -*Note: This document descibes how service accounts behave in a cluster set up +*Note: This document describes how service accounts behave in a cluster set up as recommended by the Kubernetes project. Your cluster administrator may have customized the behavior in your cluster, in which case this documentation may not apply.* diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 87ae1fc8584fd..4c96fc2cbd478 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -1,7 +1,7 @@ # Troubleshooting Sometimes things go wrong. This guide is aimed at making them right. It has two sections: * [Troubleshooting your application](application-troubleshooting.md) - Useful for users who are deploying code into Kubernetes and wondering why it is not working. - * [Troubleshooting your cluster](cluster-troubleshooting.md) - Useful for cluster adminstrators and people whose Kubernetes cluster is unhappy. + * [Troubleshooting your cluster](cluster-troubleshooting.md) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy. 
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/troubleshooting.md?pixel)]() diff --git a/docs/volumes.md b/docs/volumes.md index 36c425605d17c..c1ed213b01220 100644 --- a/docs/volumes.md +++ b/docs/volumes.md @@ -223,7 +223,7 @@ mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an `nfs` volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be "handed off" between pods. NFS can be mounted by multiple -writers simultaneuosly. +writers simultaneously. __Important: You must have your own NFS server running with the share exported before you can use it__ @@ -266,7 +266,7 @@ source networked filesystem) volume to be mounted into your pod. Unlike `glusterfs` volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be "handed off" between pods. GlusterFS can be mounted by multiple writers -simultaneuosly. +simultaneously. __Important: You must have your own GlusterFS installation running before you can use it__
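
The `nfs` volume passage patched above describes an NFS share that is pre-populated with data and merely unmounted (not erased) when a pod goes away. As an illustrative sketch only (not part of this patch), a minimal pod spec using such a volume might look like the following; the pod name, image, server address, and export path are placeholders, and you would substitute your own exported share:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-web                          # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs-data
      mountPath: /usr/share/nginx/html   # serve the shared content
  volumes:
  - name: nfs-data
    nfs:
      server: nfs-server.example.com     # placeholder: your NFS server
      path: /exports                     # placeholder: your exported path
      readOnly: false
```

Because the share lives outside the pod's lifecycle, deleting and recreating the pod (or adding a second writer) leaves the data on the NFS server intact, which is the "handed off" behavior the passage describes.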