Copy edits to remove doubled words
epc committed Jul 13, 2015
1 parent 0c5b976 commit 2b94163
Showing 13 changed files with 19 additions and 19 deletions.
4 changes: 2 additions & 2 deletions docs/api-conventions.md
@@ -420,7 +420,7 @@ The following HTTP status codes may be returned by the API.
* Suggested client recovery behavior
* Do not retry. Fix the request.
* `405 StatusMethodNotAllowed`
- * Indicates that that the action the client attempted to perform on the resource was not supported by the code.
+ * Indicates that the action the client attempted to perform on the resource was not supported by the code.
* Suggested client recovery behavior
* Do not retry. Fix the request.
* `409 StatusConflict`
@@ -570,7 +570,7 @@ Possible values for the ```reason``` and ```details``` fields:
* The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default.
* Http status code: `504 StatusServerTimeout`
* `MethodNotAllowed`
- * Indicates that that the action the client attempted to perform on the resource was not supported by the code.
+ * Indicates that the action the client attempted to perform on the resource was not supported by the code.
* For instance, attempting to delete a resource that can only be created.
* API calls that return MethodNotAllowed can never succeed.
* Http status code: `405 StatusMethodNotAllowed`
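
To make the suggested client recovery behavior concrete, here is a minimal, hypothetical Go sketch (standard library only, not part of any Kubernetes client library) of one way a caller might act on these codes; the `retryDelay` helper and its retry policy are illustrative assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryDelay is an illustrative helper: it returns how long to wait before
// retrying, or ok=false when the request should not be retried at all.
func retryDelay(resp *http.Response) (delay time.Duration, ok bool) {
	switch resp.StatusCode {
	case 405: // StatusMethodNotAllowed: do not retry, fix the request
		return 0, false
	case 504: // StatusServerTimeout: retry, honoring the Retry-After header (0 is the default)
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil {
				return time.Duration(secs) * time.Second, true
			}
		}
		return 0, true
	default: // codes not covered by this sketch: treat as non-retryable
		return 0, false
	}
}

func main() {
	resp := &http.Response{StatusCode: 504, Header: http.Header{"Retry-After": []string{"5"}}}
	if d, ok := retryDelay(resp); ok {
		fmt.Printf("retry after %v\n", d)
	} else {
		fmt.Println("do not retry")
	}
}
```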
2 changes: 1 addition & 1 deletion docs/availability.md
@@ -116,7 +116,7 @@ Second, decide how many clusters should be able to be unavailable at the same ti
the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.

If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
- then you need `R + U` clusters. If it is not (e.g you want to ensure low latency for all users in the event of a
+ you need `R + U` clusters. If it is not (e.g you want to ensure low latency for all users in the event of a
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.

Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
4 changes: 2 additions & 2 deletions docs/design/service_accounts.md
@@ -90,7 +90,7 @@ The distinction is useful for a number of reasons:
Pod Object.

The `secrets` field is a list of references to /secret objects that an process started as that service account should
- have access to to be able to assert that role.
+ have access to be able to assert that role.

The secrets are not inline with the serviceAccount object. This way, most or all users can have permission to `GET /serviceAccounts` so they can remind themselves
what serviceAccounts are available for use.
@@ -150,7 +150,7 @@ then it copies in the referenced securityContext and secrets references for the

Second, if ServiceAccount definitions change, it may take some actions.
**TODO**: decide what actions it takes when a serviceAccount definition changes. Does it stop pods, or just
- allow someone to list ones that out out of spec? In general, people may want to customize this?
+ allow someone to list ones that are out of spec? In general, people may want to customize this?

Third, if a new namespace is created, it may create a new serviceAccount for that namespace. This may include
a new username (e.g. `NAMESPACE-default-service-account@serviceaccounts.$CLUSTERID.kubernetes.io`), a new
2 changes: 1 addition & 1 deletion docs/devel/api_changes.md
@@ -177,7 +177,7 @@ need to add cases to `pkg/api/<version>/defaults.go`. Of course, since you
have added code, you have to add a test: `pkg/api/<version>/defaults_test.go`.

Do use pointers to scalars when you need to distinguish between an unset value
- and an an automatic zero value. For example,
+ and an automatic zero value. For example,
`PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` the go type
definition. A zero value means 0 seconds, and a nil value asks the system to
pick a default.
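
As a sketch of the pattern described above (illustrative Go only, not the actual API source; the 30-second default is an assumption for the example):

```go
package main

import "fmt"

// Illustrative only: a pointer field distinguishes "unset" (nil, let the
// system pick a default) from an explicit zero value.
type PodSpec struct {
	TerminationGracePeriodSeconds *int64
}

func gracePeriod(s PodSpec) int64 {
	if s.TerminationGracePeriodSeconds == nil {
		return 30 // hypothetical system default used when the field is unset
	}
	return *s.TerminationGracePeriodSeconds // explicit value, including 0
}

func main() {
	zero := int64(0)
	fmt.Println(gracePeriod(PodSpec{}))                                     // unset -> default
	fmt.Println(gracePeriod(PodSpec{TerminationGracePeriodSeconds: &zero})) // explicit 0 -> 0
}
```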
2 changes: 1 addition & 1 deletion docs/devel/scheduler_algorithm.md
@@ -16,7 +16,7 @@ The details of the above predicates can be found in [plugin/pkg/scheduler/algori

## Ranking the nodes

- The filtered nodes are considered suitable to host the Pod, and it is often that there are more than one nodes remaining. Kubernetes prioritizes the remaining nodes to to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10 with 10 representing for "most preferred" and 0 for "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2` with weighting factors `weight1` and `weight2` respectively, the final score of some NodeA is:
+ The filtered nodes are considered suitable to host the Pod, and it is often that there are more than one nodes remaining. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10 with 10 representing for "most preferred" and 0 for "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2` with weighting factors `weight1` and `weight2` respectively, the final score of some NodeA is:

finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)
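
A small Go sketch of that weighted sum (the scores, weights, and node name are made up; this is not the scheduler's actual implementation):

```go
package main

import "fmt"

// A priority function scores a node from 0 (least preferred) to 10 (most preferred).
type priorityFunc func(node string) int

// finalScore mirrors the formula above: sum over weight_i * priorityFunc_i(node).
func finalScore(node string, funcs []priorityFunc, weights []int) int {
	total := 0
	for i, f := range funcs {
		total += weights[i] * f(node)
	}
	return total
}

func main() {
	priorityFunc1 := func(node string) int { return 7 } // e.g. a spreading score
	priorityFunc2 := func(node string) int { return 3 } // e.g. a resource-balance score
	weights := []int{1, 2}
	fmt.Println(finalScore("NodeA", []priorityFunc{priorityFunc1, priorityFunc2}, weights)) // 1*7 + 2*3 = 13
}
```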

4 changes: 2 additions & 2 deletions docs/devel/writing-a-getting-started-guide.md
@@ -62,7 +62,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.
refactoring and feature additions that affect code for their IaaS.

## Rationale
- - We want want people to create Kubernetes clusters with whatever IaaS, Node OS,
+ - We want people to create Kubernetes clusters with whatever IaaS, Node OS,
configuration management tools, and so on, which they are familiar with. The
guidelines for **versioned distros** are designed for flexibility.
- We want developers to be able to work without understanding all the permutations of
@@ -81,7 +81,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.
gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat
flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines.
- We do not require versioned distros to do **CI** for several reasons. It is a steep
- learning curve to understand our our automated testing scripts. And it is considerable effort
+ learning curve to understand our automated testing scripts. And it is considerable effort
to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone
has the time and money to run CI. We do not want to
discourage people from writing and sharing guides because of this.
2 changes: 1 addition & 1 deletion docs/getting-started-guides/gce.md
@@ -130,7 +130,7 @@ $ kubectl get --all-namespaces pods
```
command.

- You'll see see a list of pods that looks something like this (the name specifics will be different):
+ You'll see a list of pods that looks something like this (the name specifics will be different):

```shell
NAMESPACE NAME READY STATUS RESTARTS AGE
2 changes: 1 addition & 1 deletion docs/getting-started-guides/logging-elasticsearch.md
@@ -152,7 +152,7 @@ users:
token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
```

- Now you can can issue requests to Elasticsearch:
+ Now you can issue requests to Elasticsearch:
```
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
{
4 changes: 2 additions & 2 deletions docs/persistent-volumes.md
@@ -29,7 +29,7 @@ Claims will remain unbound indefinitely if a matching volume does not exist. Cl

Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, the user specifies which mode desired when using their claim as a volume in a pod.

- Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as she needs it. Users schedule Pods and access their their claimed PVs by including a persistentVolumeClaim in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
+ Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as she needs it. Users schedule Pods and access their claimed PVs by including a persistentVolumeClaim in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).

### Releasing

@@ -113,7 +113,7 @@ Currently, NFS and HostPath support recycling.

A volume will be in one of the following phases:

- * Available -- a free resource resource that is not yet bound to a claim
+ * Available -- a free resource that is not yet bound to a claim
* Bound -- the volume is bound to a claim
* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster
* Failed -- the volume has failed its automatic reclamation
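
For orientation only, phases like these are usually modeled as string constants in the API types; a rough, hypothetical Go sketch (not the actual source):

```go
package main

import "fmt"

// Hypothetical sketch of a phase enum; names and values are illustrative.
type VolumePhase string

const (
	VolumeAvailable VolumePhase = "Available" // free resource, not yet bound to a claim
	VolumeBound     VolumePhase = "Bound"     // bound to a claim
	VolumeReleased  VolumePhase = "Released"  // claim deleted, not yet reclaimed by the cluster
	VolumeFailed    VolumePhase = "Failed"    // automatic reclamation failed
)

func main() {
	for _, p := range []VolumePhase{VolumeAvailable, VolumeBound, VolumeReleased, VolumeFailed} {
		fmt.Println(p)
	}
}
```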
2 changes: 1 addition & 1 deletion docs/replication-controller.md
@@ -20,7 +20,7 @@ Pods created by a replication controller are intended to be fungible and semanti

### Labels

- The population of pods that a replication controller is monitoring is defined with a [label selector](labels.md#label-selectors), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled to their definition. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that that approach increases complexity of management operations, for both clients and the system.
+ The population of pods that a replication controller is monitoring is defined with a [label selector](labels.md#label-selectors), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled to their definition. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that approach increases complexity of management operations, for both clients and the system.

The replication controller should verify that the pods created from the specified template have labels that match its label selector. Though it isn't verified yet, you should also ensure that only one replication controller controls any given pod, by ensuring that the label selectors of replication controllers do not target overlapping sets.

2 changes: 1 addition & 1 deletion docs/services.md
@@ -436,7 +436,7 @@ must exist in the registry for services to get IPs, otherwise creations will
fail with a message indicating an IP could not be allocated. A background
controller is responsible for creating that map (to migrate from older versions
of Kubernetes that used in memory locking) as well as checking for invalid
- assignments due to administrator intervention and cleaning up any any IPs
+ assignments due to administrator intervention and cleaning up any IPs
that were allocated but which no service currently uses.

### IPs and VIPs
2 changes: 1 addition & 1 deletion docs/user-guide/connecting-applications.md
@@ -113,7 +113,7 @@ NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns <none> k8s-app=kube-dns 10.0.0.10 53/UDP
53/TCP
```
- If it isn’t running, you can [enable it](../../cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived ip (nginxsvc), and a dns server that has assigned a name to that ip (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using using standard methods (e.g. gethostbyname). Let’s create another pod to test this:
+ If it isn’t running, you can [enable it](../../cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived ip (nginxsvc), and a dns server that has assigned a name to that ip (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let’s create another pod to test this:

```yaml
apiVersion: v1
6 changes: 3 additions & 3 deletions docs/volumes.md
@@ -2,8 +2,8 @@

On-disk files in a container are ephemeral, which presents some problems for
non-trivial applications when running in containers. First, when a container
- crashes kubelet will restart it, but the files will be lost lost - the
- container starts with a clean slate. second, when running containers together
+ crashes kubelet will restart it, but the files will be lost - the
+ container starts with a clean slate. Second, when running containers together
in a `Pod` it is often necessary to share files between those containers. The
Kubernetes `Volume` abstraction solves both of these problems.

@@ -130,7 +130,7 @@ and then serve it in parallel from as many pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode - no
simultaneous readers allowed.

- Using a PD on a pod controlled by a ReplicationController will will fail unless
+ Using a PD on a pod controlled by a ReplicationController will fail unless
the PD is read-only or the replica count is 0 or 1.

#### Creating a PD
