Fixed several typos
joe2far committed Jul 13, 2016
1 parent 7e6a856 commit 5ead89b
Showing 66 changed files with 80 additions and 80 deletions.
2 changes: 1 addition & 1 deletion api/swagger-spec/apis.json
@@ -96,7 +96,7 @@
 },
 "unversioned.GroupVersionForDiscovery": {
 "id": "unversioned.GroupVersionForDiscovery",
-"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensiblity.",
+"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
 "required": [
 "groupVersion",
 "version"

2 changes: 1 addition & 1 deletion api/swagger-spec/apps.json
@@ -72,7 +72,7 @@
 },
 "unversioned.GroupVersionForDiscovery": {
 "id": "unversioned.GroupVersionForDiscovery",
-"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensiblity.",
+"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
 "required": [
 "groupVersion",
 "version"

2 changes: 1 addition & 1 deletion api/swagger-spec/autoscaling.json
@@ -72,7 +72,7 @@
 },
 "unversioned.GroupVersionForDiscovery": {
 "id": "unversioned.GroupVersionForDiscovery",
-"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensiblity.",
+"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
 "required": [
 "groupVersion",
 "version"

2 changes: 1 addition & 1 deletion api/swagger-spec/batch.json
@@ -72,7 +72,7 @@
 },
 "unversioned.GroupVersionForDiscovery": {
 "id": "unversioned.GroupVersionForDiscovery",
-"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensiblity.",
+"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
 "required": [
 "groupVersion",
 "version"

2 changes: 1 addition & 1 deletion api/swagger-spec/extensions.json
@@ -72,7 +72,7 @@
 },
 "unversioned.GroupVersionForDiscovery": {
 "id": "unversioned.GroupVersionForDiscovery",
-"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensiblity.",
+"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
 "required": [
 "groupVersion",
 "version"

2 changes: 1 addition & 1 deletion api/swagger-spec/policy.json
@@ -72,7 +72,7 @@
 },
 "unversioned.GroupVersionForDiscovery": {
 "id": "unversioned.GroupVersionForDiscovery",
-"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensiblity.",
+"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
 "required": [
 "groupVersion",
 "version"

2 changes: 1 addition & 1 deletion api/swagger-spec/rbac.authorization.k8s.io.json
@@ -72,7 +72,7 @@
 },
 "unversioned.GroupVersionForDiscovery": {
 "id": "unversioned.GroupVersionForDiscovery",
-"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensiblity.",
+"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
 "required": [
 "groupVersion",
 "version"

2 changes: 1 addition & 1 deletion cluster/aws/options.md
@@ -104,7 +104,7 @@ contribute!

 **NON_MASQUERADE_CIDR**

-The 'internal' IP range which Kuberenetes will use, which will therefore not
+The 'internal' IP range which Kubernetes will use, which will therefore not
 use IP masquerade. By default kubernetes runs an internal network for traffic
 between pods (and between pods and services), and by default this uses the
 `10.0.0.0/8` range. However, this sometimes overlaps with a range that you may

2 changes: 1 addition & 1 deletion cluster/juju/bundles/README.md
@@ -81,7 +81,7 @@ via juju ssh:

 juju ssh kubernetes-master/0 -t "sudo kubectl get nodes"

-You may also SSH to the kuberentes-master unit (`juju ssh kubernetes-master/0`)
+You may also SSH to the kubernetes-master unit (`juju ssh kubernetes-master/0`)
 and call kubectl from the command prompt.

 See the

2 changes: 1 addition & 1 deletion cluster/juju/layers/kubernetes/README.md
@@ -30,7 +30,7 @@ juju add-relation kubernetes etcd

 # Configuration
 For your convenience this charm supports some configuration options to set up
-a Kuberentes cluster that works in your environment:
+a Kubernetes cluster that works in your environment:

 **version**: Set the version of the Kubernetes containers to deploy. The
 version string must be in the following format "v#.#.#" where the numbers

4 changes: 2 additions & 2 deletions cmd/kube-controller-manager/app/options/options.go
@@ -140,11 +140,11 @@ func (s *CMServer) AddFlags(fs *pflag.FlagSet) {
 "The number of retries for initial node registration. Retry interval equals node-sync-period.")
 fs.MarkDeprecated("register-retry-count", "This flag is currently no-op and will be deleted.")
 fs.DurationVar(&s.NodeMonitorGracePeriod.Duration, "node-monitor-grace-period", s.NodeMonitorGracePeriod.Duration,
-"Amount of time which we allow running Node to be unresponsive before marking it unhealty. "+
+"Amount of time which we allow running Node to be unresponsive before marking it unhealthy. "+
 "Must be N times more than kubelet's nodeStatusUpdateFrequency, "+
 "where N means number of retries allowed for kubelet to post node status.")
 fs.DurationVar(&s.NodeStartupGracePeriod.Duration, "node-startup-grace-period", s.NodeStartupGracePeriod.Duration,
-"Amount of time which we allow starting Node to be unresponsive before marking it unhealty.")
+"Amount of time which we allow starting Node to be unresponsive before marking it unhealthy.")
 fs.DurationVar(&s.NodeMonitorPeriod.Duration, "node-monitor-period", s.NodeMonitorPeriod.Duration,
 "The period for syncing NodeStatus in NodeController.")
 fs.StringVar(&s.ServiceAccountKeyFile, "service-account-private-key-file", s.ServiceAccountKeyFile, "Filename containing a PEM-encoded private RSA key used to sign service account tokens.")

2 changes: 1 addition & 1 deletion contrib/mesos/README.md
@@ -28,7 +28,7 @@ This project combines concepts and technologies from two already-complex project

 To get up and running with Kubernetes-Mesos, follow:

-- the [Getting started guide](../../docs/getting-started-guides/mesos.md) to launch a Kuberneters-Mesos cluster,
+- the [Getting started guide](../../docs/getting-started-guides/mesos.md) to launch a Kubernetes-Mesos cluster,
 - the [Kubernetes-Mesos Scheduler Guide](./docs/scheduler.md) for topics concerning the custom scheduler used in this distribution.

2 changes: 1 addition & 1 deletion contrib/mesos/docs/issues.md
@@ -37,7 +37,7 @@ Setting either of these flags to non-zero values may impact connection tracking
 In order for pods (replicated, or otherwise) to be scheduled on the cluster, it is strongly recommended that:
 * `pod.spec.containers[x].ports[y].hostPort` be left unspecified (or zero), or else;
 * `pod.spec.containers[x].ports[y].hostPort` exists in the range of `ports` resources declared on Mesos slaves
-- double-check the resource declaraions for your Mesos slaves, the default for `ports` is typically `[31000-32000]`
+- double-check the resource declarations for your Mesos slaves, the default for `ports` is typically `[31000-32000]`

 Mesos slave host `ports` are resources that are managed by the Mesos resource/offers ecosystem; slave host ports are consumed by launched tasks.
 Kubernetes pod container specifications identify two types of ports, "container ports" and "host ports":

2 changes: 1 addition & 1 deletion contrib/mesos/pkg/offers/offers.go
@@ -72,7 +72,7 @@ type Registry interface {
 }

 // callback that is invoked during a walk through a series of live offers,
-// returning with stop=true (or err != nil) if the walk should stop permaturely.
+// returning with stop=true (or err != nil) if the walk should stop prematurely.
 type Walker func(offer Perishable) (stop bool, err error)

 type RegistryConfig struct {

2 changes: 1 addition & 1 deletion contrib/mesos/pkg/scheduler/podtask/procurement.go
@@ -116,7 +116,7 @@ func (a AllOrNothingProcurement) Procure(t *T, n *api.Node, ps *ProcureState) er
 }

 // NewNodeProcurement returns a Procurement that checks whether the given pod task and offer
-// have valid node informations available and wehther the pod spec node selector matches
+// have valid node informations available and whether the pod spec node selector matches
 // the pod labels.
 // If the check is successful the slave ID and assigned slave is set in the given Spec.
 func NewNodeProcurement() Procurement {

2 changes: 1 addition & 1 deletion docs/design/control-plane-resilience.md
@@ -179,7 +179,7 @@ well-bounded time period.
 Multiple stateless, self-hosted, self-healing API servers behind a HA
 load balancer, built out by the default "kube-up" automation on GCE,
 AWS and basic bare metal (BBM). Note that the single-host approach of
-hving etcd listen only on localhost to ensure that onyl API server can
+having etcd listen only on localhost to ensure that only API server can
 connect to it will no longer work, so alternative security will be
 needed in the regard (either using firewall rules, SSL certs, or
 something else). All necessary flags are currently supported to enable

2 changes: 1 addition & 1 deletion docs/design/daemon.md
@@ -174,7 +174,7 @@ upgradable, and more generally could not be managed through the API server
 interface.
 A third alternative is to generalize the Replication Controller. We would do
-something like: if you set the `replicas` field of the ReplicationConrollerSpec
+something like: if you set the `replicas` field of the ReplicationControllerSpec
 to -1, then it means "run exactly one replica on every node matching the
 nodeSelector in the pod template." The ReplicationController would pretend
 `replicas` had been set to some large number -- larger than the largest number

2 changes: 1 addition & 1 deletion docs/design/federated-services.md
@@ -505,7 +505,7 @@ depend on what scheduling policy is in force. In the above example, the
 scheduler created an equal number of replicas (2) in each of the three
 underlying clusters, to make up the total of 6 replicas required. To handle
 entire cluster failures, various approaches are possible, including:
-1. **simple overprovisioing**, such that sufficient replicas remain even if a
+1. **simple overprovisioning**, such that sufficient replicas remain even if a
 cluster fails. This wastes some resources, but is simple and reliable.
 2. **pod autoscaling**, where the replication controller in each
 cluster automatically and autonomously increases the number of

2 changes: 1 addition & 1 deletion docs/design/indexed-job.md
@@ -522,7 +522,7 @@ The index-only approach:
 - Requires that the user keep the *per completion parameters* in a separate
 storage, such as a configData or networked storage.
 - Makes no changes to the JobSpec.
-- Drawback: while in separate storage, they could be mutatated, which would have
+- Drawback: while in separate storage, they could be mutated, which would have
 unexpected effects.
 - Drawback: Logic for using index to lookup parameters needs to be in the Pod.
 - Drawback: CLIs and UIs are limited to using the "index" as the identity of a

2 changes: 1 addition & 1 deletion docs/design/nodeaffinity.md
@@ -62,7 +62,7 @@ scheduling requirements.
 rather than replacing `map[string]string`, due to backward compatibility
 requirements.)

-The affiniy specifications described above allow a pod to request various
+The affinity specifications described above allow a pod to request various
 properties that are inherent to nodes, for example "run this pod on a node with
 an Intel CPU" or, in a multi-zone cluster, "run this pod on a node in zone Z."
 ([This issue](https://github.com/kubernetes/kubernetes/issues/9044) describes

2 changes: 1 addition & 1 deletion docs/design/security.md
@@ -204,7 +204,7 @@ arbitrary containers on hosts, to gain access to any protected information
 stored in either volumes or in pods (such as access tokens or shared secrets
 provided as environment variables), to intercept and redirect traffic from
 running services by inserting middlemen, or to simply delete the entire history
-of the custer.
+of the cluster.

 As a general principle, access to the central data store should be restricted to
 the components that need full control over the system and which can apply

4 changes: 2 additions & 2 deletions docs/design/taint-toleration-dedicated.md
@@ -201,7 +201,7 @@ to both `NodeSpec` and `NodeStatus`. The value in `NodeStatus` is the union
 of the taints specified by various sources. For now, the only source is
 the `NodeSpec` itself, but in the future one could imagine a node inheriting
 taints from pods (if we were to allow taints to be attached to pods), from
-the node's startup coniguration, etc. The scheduler should look at the `Taints`
+the node's startup configuration, etc. The scheduler should look at the `Taints`
 in `NodeStatus`, not in `NodeSpec`.

 Taints and tolerations are not scoped to namespace.
@@ -305,7 +305,7 @@ Users should not start using taints and tolerations until the full
 implementation has been in Kubelet and the master for enough binary versions
 that we feel comfortable that we will not need to roll back either Kubelet or
 master to a version that does not support them. Longer-term we will use a
-progamatic approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).
+programatic approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).

 ## Related issues

2 changes: 1 addition & 1 deletion docs/devel/generating-clientset.md
@@ -50,7 +50,7 @@ will generate a clientset named "my_release" which includes clients for api/v1 o
 - Adding expansion methods: client-gen only generates the common methods, such as `Create()` and `Delete()`. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/clientset_generated/release_1_2/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go.
 - Generating Fake clients for testing purposes: client-gen will generate a fake clientset if the command line argument `--fake-clientset` is set. The fake clientset provides the default implementation, you only need to fake out the methods you care about when writing test cases.

-The output of client-gen inlcudes:
+The output of client-gen includes:
 - clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument.
 - Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/`

2 changes: 1 addition & 1 deletion docs/devel/kubemark-guide.md
@@ -76,7 +76,7 @@ Common workflow for Kubemark is:
 - monitoring test execution and debugging problems
 - turning down Kubemark cluster

-Included in descrptions there will be comments helpful for anyone who’ll want to
+Included in descriptions there will be comments helpful for anyone who’ll want to
 port Kubemark to different providers.

 ### Starting a Kubemark cluster

2 changes: 1 addition & 1 deletion docs/devel/releasing.md
@@ -153,7 +153,7 @@ Then, run

 This will do a dry run of the release. It will give you instructions at the
 end for `pushd`ing into the dry-run directory and having a look around.
-`pushd` into the directory and make sure everythig looks as you expect:
+`pushd` into the directory and make sure everything looks as you expect:

 ```console
 git log "${RELEASE_VERSION}" # do you see the commit you expect?

2 changes: 1 addition & 1 deletion docs/proposals/api-group.md
@@ -105,7 +105,7 @@ Documentation for other releases can be found at

 Types in the unversioned package will not have the APIVersion field, but may retain the Kind field.

-For backward compatibility, when hanlding the Status, the server will encode it to v1 if the client expects the Status to be encoded in v1, otherwise the server will send the unversioned#Status. If an error occurs before the version can be determined, the server will send the unversioned#Status.
+For backward compatibility, when handling the Status, the server will encode it to v1 if the client expects the Status to be encoded in v1, otherwise the server will send the unversioned#Status. If an error occurs before the version can be determined, the server will send the unversioned#Status.

 * non-top-level common API objects:

4 changes: 2 additions & 2 deletions docs/proposals/client-package-structure.md
@@ -198,7 +198,7 @@ sources AND out-of-tree destinations, so it will be useful for consuming
 out-of-tree APIs and for others to build custom clients into their own
 repositories.

-Typed clients will be constructabale given a ClientMux; the typed constructor will use
+Typed clients will be constructable given a ClientMux; the typed constructor will use
 the ClientMux to find or construct an appropriate RESTClient. Alternatively, a
 typed client should be constructable individually given a config, from which it
 will be able to construct the appropriate RESTClient.
@@ -342,7 +342,7 @@ changes for multiple releases, to give users time to transition.
 Once we release a clientset, we will not make interface changes to it. Users of
 that client will not have to change their code until they are deliberately
 upgrading their import. We probably will want to generate some sort of stub test
-with a clienset, to ensure that we don't change the interface.
+with a clientset, to ensure that we don't change the interface.

 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

2 changes: 1 addition & 1 deletion docs/proposals/federated-api-servers.md
@@ -116,7 +116,7 @@ Cluster admins are also free to use any of the multiple open source API manageme
 provide a lot more functionality like: rate-limiting, caching, logging,
 transformations and authentication.
 In future, we can also use ingress. That will give cluster admins the flexibility to
-easily swap out the ingress controller by a Go reverse proxy, ngingx, haproxy
+easily swap out the ingress controller by a Go reverse proxy, nginx, haproxy
 or any other solution they might want.

 ### Storage

6 changes: 3 additions & 3 deletions docs/proposals/federation-lite.md
@@ -38,7 +38,7 @@ Documentation for other releases can be found at

 ## Introduction

-Full Cluster Federation will offer sophisticated federation between multiple kuberentes
+Full Cluster Federation will offer sophisticated federation between multiple kubernetes
 clusters, offering true high-availability, multiple provider support &
 cloud-bursting, multiple region support etc. However, many users have
 expressed a desire for a "reasonably" high-available cluster, that runs in
@@ -73,7 +73,7 @@ advanced/experimental functionality, so the interface is not initially going to
 be particularly user-friendly. As we design the evolution of kube-up, we will
 make multiple zones better supported.

-For the initial implemenation, kube-up must be run multiple times, once for
+For the initial implementation, kube-up must be run multiple times, once for
 each zone. The first kube-up will take place as normal, but then for each
 additional zone the user must run kube-up again, specifying
 `KUBE_USE_EXISTING_MASTER=true` and `KUBE_SUBNET_CIDR=172.20.x.0/24`. This will then
@@ -226,7 +226,7 @@ Initially therefore, the GCE changes will be to:

 1. change kube-up to support creation of a cluster in multiple zones
 1. pass a flag enabling multi-AZ clusters with kube-up
-1. change the kuberentes cloud provider to iterate through relevant zones when resolving items
+1. change the kubernetes cloud provider to iterate through relevant zones when resolving items
 1. tag GCE PD volumes with the appropriate zone information

2 changes: 1 addition & 1 deletion docs/proposals/flannel-integration.md
@@ -141,7 +141,7 @@ The ick-iest part of this implementation is going to the the `GET /network/lease
 * On each change, figure out the lease for the node, construct a [lease watch result](https://github.com/coreos/flannel/blob/0bf263826eab1707be5262703a8092c7d15e0be4/subnet/subnet.go#L72), and send it down the watch with the RV from the node
 * Implement a lease list that does a similar translation

-I say this is gross without an api objet because for each node->lease translation one has to store and retrieve the node metadata sent by flannel (eg: VTEP) from node annotations. [Reference implementation](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/flannel_server.go) and [watch proxy](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/watch_proxy.go).
+I say this is gross without an api object because for each node->lease translation one has to store and retrieve the node metadata sent by flannel (eg: VTEP) from node annotations. [Reference implementation](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/flannel_server.go) and [watch proxy](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/watch_proxy.go).

 # Limitations