Copy edits for typos (resubmitted)
epc committed Aug 25, 2015
1 parent 34e499d commit 1916d3b
Showing 18 changed files with 26 additions and 26 deletions.
2 changes: 1 addition & 1 deletion cluster/juju/bundles/README.md
@@ -33,7 +33,7 @@ juju-quickstart`) or Ubuntu (`apt-get install juju-quickstart`).
Use the 'juju quickstart' command to deploy a Kubernetes cluster to any cloud
supported by Juju.

-The charm store version of the Kubernetes bundle can be deployed as folllows:
+The charm store version of the Kubernetes bundle can be deployed as follows:

juju quickstart u/kubernetes/kubernetes-cluster

2 changes: 1 addition & 1 deletion docs/admin/authentication.md
@@ -62,7 +62,7 @@ to the OpenID provider.
will be used, which should be unique and immutable under the issuer's domain. Cluster administrator can
choose other claims such as `email` to use as the user name, but the uniqueness and immutability is not guaranteed.

-Please note that this flag is still experimental until we settle more on how to handle the mapping of the OpenID user to the Kubernetes user. Thus futher changes are possible.
+Please note that this flag is still experimental until we settle more on how to handle the mapping of the OpenID user to the Kubernetes user. Thus further changes are possible.

Currently, the ID token will be obtained by some third-party app. This means the app and apiserver
MUST share the `--oidc-client-id`.
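For illustration only, a hedged sketch of how the flags discussed above might appear on the apiserver command line; the issuer URL and client ID values are placeholders, and the claim-selection flag is shown only as an assumption of how an administrator might pick the `email` claim:

```sh
# Placeholder values; the remaining apiserver flags are omitted from this sketch.
kube-apiserver \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email
```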
2 changes: 1 addition & 1 deletion docs/admin/cluster-management.md
@@ -126,7 +126,7 @@ The autoscaler will try to maintain the average CPU and memory utilization of no
The target value can be configured by ```KUBE_TARGET_NODE_UTILIZATION``` environment variable (default: 0.7) for ``kube-up.sh`` when creating the cluster.
The node utilization is the total node's CPU/memory usage (OS + k8s + user load) divided by the node's capacity.
If the desired numbers of nodes in the cluster resulting from CPU utilization and memory utilization are different,
-the autosclaer will choose the bigger number.
+the autoscaler will choose the bigger number.
The number of nodes in the cluster set by the autoscaler will be limited from ```KUBE_AUTOSCALER_MIN_NODES``` (default: 1)
to ```KUBE_AUTOSCALER_MAX_NODES``` (default: the initial number of nodes in the cluster).
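As a hedged illustration of the variables described above, one might bring up a cluster like this (the concrete values, and the `cluster/kube-up.sh` path, are examples only):

```sh
# Target 70% average node utilization; let the autoscaler vary the cluster between 1 and 10 nodes.
export KUBE_TARGET_NODE_UTILIZATION=0.7
export KUBE_AUTOSCALER_MIN_NODES=1
export KUBE_AUTOSCALER_MAX_NODES=10
cluster/kube-up.sh
```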

4 changes: 2 additions & 2 deletions docs/design/extending-api.md
@@ -71,7 +71,7 @@ Kubernetes API server to provide the following features:
* Watch for resource changes.

The `Kind` for an instance of a third-party object (e.g. CronTab) below is expected to be
-programnatically convertible to the name of the resource using
+programmatically convertible to the name of the resource using
the following conversion. Kinds are expected to be of the form `<CamelCaseKind>`, the
`APIVersion` for the object is expected to be `<domain-name>/<api-group>/<api-version>`.

@@ -178,7 +178,7 @@ and get back:
}
```

-Because all objects are expected to contain standard Kubernetes metdata fileds, these
+Because all objects are expected to contain standard Kubernetes metadata fields, these
list operations can also use `Label` queries to filter requests down to specific subsets.

Likewise, clients can use watch endpoints to watch for changes to stored objects.
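As a purely hypothetical sketch of those two operations, using a placeholder resource path (only `labelSelector` and `watch` are the standard Kubernetes query parameters):

```sh
# List only the CronTab objects whose labels match env=prod (path is a placeholder).
curl 'http://localhost:8080/apis/example.com/v1/namespaces/default/crontabs?labelSelector=env%3Dprod'

# Watch the same collection for subsequent changes.
curl 'http://localhost:8080/apis/example.com/v1/namespaces/default/crontabs?watch=true'
```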
@@ -215,7 +215,7 @@ where ```<ip address>``` is the IP address that was available from the ```nova f

#### Provision Worker Nodes

-Edit ```node.yaml``` and replace all instances of ```<master-private-ip>``` with the private IP address of the master node. You can get this by runnning ```nova show kube-master``` assuming you named your instance kube master. This is not the floating IP address you just assigned it.
+Edit ```node.yaml``` and replace all instances of ```<master-private-ip>``` with the private IP address of the master node. You can get this by running ```nova show kube-master``` assuming you named your instance kube master. This is not the floating IP address you just assigned it.
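A hedged one-liner for that substitution, with a placeholder address standing in for the value reported by `nova show kube-master`:

```sh
sed -i 's/<master-private-ip>/10.240.0.2/g' node.yaml
```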

```sh
nova boot \
2 changes: 1 addition & 1 deletion docs/getting-started-guides/docker-multinode/deployDNS.md
@@ -55,7 +55,7 @@ $ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns`
$ export KUBE_SERVER=10.10.103.250 # your master server ip, you may change it
```

-### Replace the correponding value in the template.
+### Replace the corresponding value in the template.

```
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{kube_server_url}/${KUBE_SERVER}/g;" skydns-rc.yaml.in > ./skydns-rc.yaml
2 changes: 1 addition & 1 deletion docs/getting-started-guides/scratch.md
@@ -235,7 +235,7 @@ You have several choices for Kubernetes images:
- The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which
can be converted into docker images using a command like
`docker load -i kube-apiserver.tar`
-- You can verify if the image is loaded successfully with the right reposity and tag using
+- You can verify if the image is loaded successfully with the right repository and tag using
command like `docker images`
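A hedged sketch of that load-and-verify step from the list above, assuming the release was unpacked into `./kubernetes`:

```sh
docker load -i ./kubernetes/server/bin/kube-apiserver.tar
docker images | grep kube-apiserver   # confirm the repository and tag look right
```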

For etcd, you can:
2 changes: 1 addition & 1 deletion docs/getting-started-guides/ubuntu-calico.md
@@ -163,7 +163,7 @@ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template networ

3.) Edit `network-environment` to represent your current host's settings.

-4.) Move `netework-environment` into `/etc`
+4.) Move `network-environment` into `/etc`

```
sudo mv -f network-environment /etc
4 changes: 2 additions & 2 deletions docs/proposals/apiserver-watch.md
@@ -60,7 +60,7 @@ the objects (of a given type) without any filtering. The changes delivered from
etcd will then be stored in a cache in apiserver. This cache is in fact a
"rolling history window" that will support clients having some amount of latency
between their list and watch calls. Thus it will have a limited capacity and
-whenever a new change comes from etcd when a cache is full, othe oldest change
+whenever a new change comes from etcd when a cache is full, the oldest change
will be remove to make place for the new one.

When a client sends a watch request to apiserver, instead of redirecting it to
@@ -159,7 +159,7 @@ necessary. In such case, to avoid LIST requests coming from all watchers at
the same time, we can introduce an additional etcd event type:
[EtcdResync](../../pkg/storage/etcd/etcd_watcher.go#L36)

-Whenever reslisting will be done to refresh the internal watch to etcd,
+Whenever relisting will be done to refresh the internal watch to etcd,
EtcdResync event will be send to all the watchers. It will contain the
full list of all the objects the watcher is interested in (appropriately
filtered) as the parameter of this watch event.
6 changes: 3 additions & 3 deletions docs/proposals/federation.md
@@ -518,7 +518,7 @@ thus far:
approach.
1. A more monolithic architecture, where a single instance of the
Kubernetes control plane itself manages a single logical cluster
-composed of nodes in multiple availablity zones and cloud
+composed of nodes in multiple availability zones and cloud
providers.

A very brief, non-exhaustive list of pro's and con's of the two
@@ -563,12 +563,12 @@ prefers the Decoupled Hierarchical model for the reasons stated below).
largely independently (different sets of developers, different
release schedules etc).
1. **Administration complexity:** Again, I think that this could be argued
-both ways. Superficially it woud seem that administration of a
+both ways. Superficially it would seem that administration of a
single Monolithic multi-zone cluster might be simpler by virtue of
being only "one thing to manage", however in practise each of the
underlying availability zones (and possibly cloud providers) has
it's own capacity, pricing, hardware platforms, and possibly
-bureaucratic boudaries (e.g. "our EMEA IT department manages those
+bureaucratic boundaries (e.g. "our EMEA IT department manages those
European clusters"). So explicitly allowing for (but not
mandating) completely independent administration of each
underlying Kubernetes cluster, and the Federation system itself,
8 changes: 4 additions & 4 deletions docs/proposals/horizontal-pod-autoscaler.md
@@ -56,7 +56,7 @@ We are going to introduce Scale subresource and implement horizontal autoscaling
Scale subresource will be supported for replication controllers and deployments.
Scale subresource will be a Virtual Resource (will not be stored in etcd as a separate object).
It will be only present in API as an interface to accessing replication controller or deployment,
-and the values of Scale fields will be inferred from the corresponing replication controller/deployment object.
+and the values of Scale fields will be inferred from the corresponding replication controller/deployment object.
HorizontalPodAutoscaler object will be bound with exactly one Scale subresource and will be
autoscaling associated replication controller/deployment through it.
The main advantage of such approach is that whenever we introduce another type we want to auto-scale,
@@ -132,7 +132,7 @@ type HorizontalPodAutoscaler struct {
// HorizontalPodAutoscalerSpec is the specification of a horizontal pod autoscaler.
type HorizontalPodAutoscalerSpec struct {
// ScaleRef is a reference to Scale subresource. HorizontalPodAutoscaler will learn the current
-// resource consumption from its status, and will set the desired number of pods by modyfying its spec.
+// resource consumption from its status, and will set the desired number of pods by modifying its spec.
ScaleRef *SubresourceReference
// MinCount is the lower limit for the number of pods that can be set by the autoscaler.
MinCount int
@@ -151,7 +151,7 @@ type HorizontalPodAutoscalerStatus struct {
CurrentReplicas int

// DesiredReplicas is the desired number of replicas of pods managed by this autoscaler.
-// The number may be different because pod downscaling is someteimes delayed to keep the number
+// The number may be different because pod downscaling is sometimes delayed to keep the number
// of pods stable.
DesiredReplicas int

@@ -161,7 +161,7 @@ type HorizontalPodAutoscalerStatus struct {
CurrentConsumption ResourceConsumption

// LastScaleTimestamp is the last time the HorizontalPodAutoscaler scaled the number of pods.
-// This is used by the autoscaler to controll how often the number of pods is changed.
+// This is used by the autoscaler to control how often the number of pods is changed.
LastScaleTimestamp *util.Time
}

2 changes: 1 addition & 1 deletion docs/proposals/rescheduler.md
@@ -96,7 +96,7 @@ case, the nodes we move the Pods onto might have been in the system for a long t
have been added by the cluster auto-scaler specifically to allow the rescheduler to
rebalance utilization.

-A second spreading use case is to separate antagnosits.
+A second spreading use case is to separate antagonists.
Sometimes the processes running in two different Pods on the same node
may have unexpected antagonistic
behavior towards one another. A system component might monitor for such
2 changes: 1 addition & 1 deletion docs/user-guide/compute-resources.md
@@ -196,7 +196,7 @@ TotalResourceLimits:
[ ... lines removed for clarity ...]
```

-Here you can see from the `Allocated resorces` section that that a pod which ask for more than
+Here you can see from the `Allocated resources` section that that a pod which ask for more than
90 millicpus or more than 1341MiB of memory will not be able to fit on this node.

Looking at the `Pods` section, you can see which pods are taking up space on the node.
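If the output above is from `kubectl describe nodes`, as this guide's workflow suggests, a hedged way to reproduce it would be (the node name is a placeholder):

```sh
kubectl describe nodes kubernetes-node-1
```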
2 changes: 1 addition & 1 deletion docs/user-guide/deploying-applications.md
@@ -53,7 +53,7 @@ Kubernetes creates and manages sets of replicated containers (actually, replicat

A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It’s analogous to Google Compute Engine’s [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS’s [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).

-The replication controller created to run nginx by `kubctl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:
+The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:

```yaml
apiVersion: v1
4 changes: 2 additions & 2 deletions docs/user-guide/environment-guide/README.md
@@ -81,7 +81,7 @@ Pod Name: show-rc-xxu6i
Pod Namespace: default
USER_VAR: important information
-Kubenertes environment variables
+Kubernetes environment variables
BACKEND_SRV_SERVICE_HOST = 10.147.252.185
BACKEND_SRV_SERVICE_PORT = 5000
KUBERNETES_RO_SERVICE_HOST = 10.147.240.1
@@ -99,7 +99,7 @@ Backend Namespace: default
```

First the frontend pod's information is printed. The pod name and
-[namespace](../../../docs/design/namespaces.md) are retreived from the
+[namespace](../../../docs/design/namespaces.md) are retrieved from the
[Downward API](../../../docs/user-guide/downward-api.md). Next, `USER_VAR` is the name of
an environment variable set in the [pod
definition](show-rc.yaml). Then, the dynamic Kubernetes environment
2 changes: 1 addition & 1 deletion docs/user-guide/environment-guide/containers/show/show.go
@@ -70,7 +70,7 @@ func printInfo(resp http.ResponseWriter, req *http.Request) {
envvar := os.Getenv("USER_VAR")
fmt.Fprintf(resp, "USER_VAR: %v \n", envvar)

-fmt.Fprintf(resp, "\nKubenertes environment variables\n")
+fmt.Fprintf(resp, "\nKubernetes environment variables\n")
var keys []string
for key := range kubeVars {
keys = append(keys, key)
2 changes: 1 addition & 1 deletion docs/user-guide/secrets.md
@@ -73,7 +73,7 @@ more control over how it is used, and reduces the risk of accidental exposure.
Users can create secrets, and the system also creates some secrets.

To use a secret, a pod needs to reference the secret.
-A secret can be used with a pod in two ways: eithe as files in a [volume](volumes.md) mounted on one or more of
+A secret can be used with a pod in two ways: either as files in a [volume](volumes.md) mounted on one or more of
its containers, or used by kubelet when pulling images for the pod.
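For illustration, a hedged sketch of defining a secret that a pod could then consume in either of the two ways above; the name and the base64-encoded value are placeholders:

```sh
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  password: cGFzc3dvcmQ=
EOF
```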

### Service Accounts Automatically Create and Attach Secrets with API Credentials
2 changes: 1 addition & 1 deletion docs/user-guide/services.md
@@ -368,7 +368,7 @@ address, other services should be visible only from inside of the cluster.
Kubernetes `ServiceTypes` allow you to specify what kind of service you want.
The default and base type is `ClusterIP`, which exposes a service to connection
from inside the cluster. `NodePort` and `LoadBalancer` are two types that expose
-services to external trafic.
+services to external traffic.

Valid values for the `ServiceType` field are:
