Fix typos and linted_packages sorting
koep committed Oct 31, 2016
1 parent e6b2517 commit cc1d895
Showing 42 changed files with 75 additions and 75 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -1,6 +1,6 @@
# Contributing guidelines

-Want to hack on kubernetes? Yay!
+Want to hack on Kubernetes? Yay!

## Developer Guide

6 changes: 3 additions & 3 deletions build-tools/README.md
@@ -45,20 +45,20 @@ All Docker names are suffixed with a hash derived from the file path (to allow c

## Proxy Settings

-If you are behind a proxy and you are letting these scripts use `docker-machine` to set up your local VM for you on macOS, you need to export proxy settings for kubernetes build, the following environment variables should be defined.
+If you are behind a proxy and you are letting these scripts use `docker-machine` to set up your local VM for you on macOS, you need to export proxy settings for Kubernetes build, the following environment variables should be defined.

```
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```

-Optionally, you can specify addresses of no proxy for kubernetes build, for example
+Optionally, you can specify addresses of no proxy for Kubernetes build, for example

```
export KUBERNETES_NO_PROXY=127.0.0.1
```

-If you are using sudo to make kubernetes build for example make quick-release, you need run `sudo -E make quick-release` to pass the environment variables.
+If you are using sudo to make Kubernetes build for example make quick-release, you need run `sudo -E make quick-release` to pass the environment variables.

## Really Remote Docker Engine

2 changes: 1 addition & 1 deletion cluster/README.md
@@ -1,6 +1,6 @@
# Cluster Configuration

-##### Deprecation Notice: This directory has entered maintainence mode and will not be accepting new providers. Please submit new automation deployments to [kube-deploy](https://github.com/kubernetes/kube-deploy). Deployments in this directory will continue to be maintained and supported at their current level of support.
+##### Deprecation Notice: This directory has entered maintenance mode and will not be accepting new providers. Please submit new automation deployments to [kube-deploy](https://github.com/kubernetes/kube-deploy). Deployments in this directory will continue to be maintained and supported at their current level of support.

The scripts and data in this directory automate creation and configuration of a Kubernetes cluster, including networking, DNS, nodes, and master components.

2 changes: 1 addition & 1 deletion cluster/addons/python-image/README.md
@@ -1,6 +1,6 @@
# Python image

The python image here is used by OS distros that don't have python installed to
-run python scripts to parse the yaml files in the addon updator script.
+run python scripts to parse the yaml files in the addon updater script.

[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/python-image/README.md?pixel)]()
2 changes: 1 addition & 1 deletion cluster/addons/registry/README.md
@@ -91,7 +91,7 @@ spec:
This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
`PersistentVolumes` in which case those might get bound instead). This claim
-gives you the rigth to use this storage until you release the claim.
+gives you the right to use this storage until you release the claim.

## Run the registry

4 changes: 2 additions & 2 deletions cluster/juju/bundles/README.md
@@ -24,7 +24,7 @@ deploy a bundle.

You will need to
[install the Juju client](https://jujucharms.com/get-started) and
-`juju-quickstart` as pre-requisites. To deploy the bundle use
+`juju-quickstart` as prerequisites. To deploy the bundle use
`juju-quickstart` which runs on Mac OS (`brew install
juju-quickstart`) or Ubuntu (`apt-get install juju-quickstart`).

@@ -191,7 +191,7 @@ Send us pull requests! We'll send you a cookie if they include tests and docs.
The charms and bundles are in the [kubernetes](https://github.com/kubernetes/kubernetes)
repository in github.

-- [kubernetes-master charm on Github](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes-master)
+- [kubernetes-master charm on GitHub](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes-master)
- [kubernetes charm on GitHub](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes)


2 changes: 1 addition & 1 deletion cmd/mungedocs/README.md
@@ -6,7 +6,7 @@ It basically does the following:
- iterate over all files in the given doc root.
- for each file split it into a slice (mungeLines) of lines (mungeLine)
- a mungeline has metadata about each line typically determined by a 'fast' regex.
- - metadata contains things like 'is inside a preformmatted block'
+ - metadata contains things like 'is inside a preformatted block'
- contains a markdown header
- has a link to another file
- etc..
2 changes: 1 addition & 1 deletion docs/proposals/apparmor.md
@@ -113,7 +113,7 @@ Ubuntu system. The profiles can be found at `{securityfs}/apparmor/profiles`

## API Changes

-The intial alpha support of AppArmor will follow the pattern
+The initial alpha support of AppArmor will follow the pattern
[used by seccomp](https://github.com/kubernetes/kubernetes/pull/25324) and specify profiles through
annotations. Profiles can be specified per-container through pod annotations. The annotation format
is a key matching the container, and a profile name value:
2 changes: 1 addition & 1 deletion docs/proposals/cluster-deployment.md
@@ -56,7 +56,7 @@ ship with all of the requirements for the node specification by default.

**Objective**: Generate security certificates used to configure secure communication between client, master and nodes

-TODO: Enumerate ceritificates which have to be generated.
+TODO: Enumerate certificates which have to be generated.

## Step 3: Deploy master

2 changes: 1 addition & 1 deletion docs/proposals/container-runtime-interface-v1.md
@@ -245,7 +245,7 @@ discussion and may be achieved alternatively:
**Imperative pod-level interface**
The interface contains only CreatePod(), StartPod(), StopPod() and RemovePod().
This implies that the runtime needs to take over container lifecycle
-manangement (i.e., enforce restart policy), lifecycle hooks, liveness checks,
+management (i.e., enforce restart policy), lifecycle hooks, liveness checks,
etc. Kubelet will mainly be responsible for interfacing with the apiserver, and
can potentially become a very thin daemon.
- Pros: Lower maintenance overhead for the Kubernetes maintainers if `Docker`
2 changes: 1 addition & 1 deletion docs/proposals/controller-ref.md
@@ -86,7 +86,7 @@ To prevent re-adoption of an object during deletion the `DeletionTimestamp` will
Necessary related work:
* `OwnerReferences` are correctly added/deleted,
* GarbageCollector removes dangling references,
-* Controllers don't take any meaningfull actions when `DeletionTimestamps` is set.
+* Controllers don't take any meaningful actions when `DeletionTimestamps` is set.

# Considered alternatives

2 changes: 1 addition & 1 deletion docs/proposals/external-lb-source-ip-preservation.md
@@ -37,7 +37,7 @@ the pods matching the service pod-selector.

## Motivation

-The current implemention requires that the cloud loadbalancer balances traffic across all
+The current implementation requires that the cloud loadbalancer balances traffic across all
Kubernetes worker nodes, and this traffic is then equally distributed to all the backend
pods for that service.
Due to the DNAT required to redirect the traffic to its ultimate destination, the return
4 changes: 2 additions & 2 deletions docs/proposals/federated-api-servers.md
@@ -16,7 +16,7 @@ federated servers.
* Unblock new APIs from core kubernetes team review: A lot of new API proposals
are currently blocked on review from the core kubernetes team. By allowing
developers to expose their APIs as a separate server and enabling the cluster
-admin to use it without any change to the core kubernetes reporsitory, we
+admin to use it without any change to the core kubernetes repository, we
unblock these APIs.
* Place for staging experimental APIs: New APIs can remain in separate
federated servers until they become stable, at which point, they can be moved
@@ -167,7 +167,7 @@ resource.

This proposal is not enough for hosted cluster users, but allows us to improve
that in the future.
-On a hosted kubernetes cluster, for eg on GKE - where Google manages the kubernetes
+On a hosted kubernetes cluster, for e.g. on GKE - where Google manages the kubernetes
API server, users will have to bring up and maintain the proxy and federated servers
themselves.
Other system components like the various controllers, will not be aware of the
2 changes: 1 addition & 1 deletion docs/proposals/flannel-integration.md
@@ -102,7 +102,7 @@ The first is accomplished in this PR, while a timeline for 2. and 3. is TDB. To
- Put: This is a request for a lease. If the nodecontroller is allocating CIDRs we can probably just no-op.
* `/network/reservations`: TDB, we can probably use this to accommodate node controller allocating CIDR instead of flannel requesting it

-The ick-iest part of this implementation is going to the `GET /network/leases`, i.e the watch proxy. We can side-step by waiting for a more generic Kubernetes resource. However, we can also implement it as follows:
+The ick-iest part of this implementation is going to the `GET /network/leases`, i.e. the watch proxy. We can side-step by waiting for a more generic Kubernetes resource. However, we can also implement it as follows:
* Watch all nodes, ignore heartbeats
* On each change, figure out the lease for the node, construct a [lease watch result](https://github.com/coreos/flannel/blob/0bf263826eab1707be5262703a8092c7d15e0be4/subnet/subnet.go#L72), and send it down the watch with the RV from the node
* Implement a lease list that does a similar translation
2 changes: 1 addition & 1 deletion docs/proposals/image-provenance.md
@@ -52,7 +52,7 @@ The admission controller code will go in `plugin/pkg/admission/imagepolicy`.
There will be a cache of decisions in the admission controller.

If the apiserver cannot reach the webhook backend, it will log a warning and either admit or deny the pod.
-A flag will control whether it admits or denys on failure.
+A flag will control whether it admits or denies on failure.
The rationale for deny is that an attacker could DoS the backend or wait for it to be down, and then sneak a
bad pod into the system. The rationale for allow here is that, if the cluster admin also does
after-the-fact auditing of what images were run (which we think will be common), this will catch
2 changes: 1 addition & 1 deletion docs/proposals/job.md
@@ -88,7 +88,7 @@ type JobSpec struct {
}
```

-`JobStatus` structure is defined to contain informations about pods executing
+`JobStatus` structure is defined to contain information about pods executing
specified job. The structure holds information about pods currently executing
the job.

4 changes: 2 additions & 2 deletions docs/proposals/kubectl-login.md
@@ -63,10 +63,10 @@ by the latter command.
When clusters utilize authorization plugins access decisions are based on the
correct configuration of an auth-N plugin, an auth-Z plugin, and client side
credentials. Being rejected then begs several questions. Is the user's
-kubeconfig mis-configured? Is the authorization plugin setup wrong? Is the user
+kubeconfig misconfigured? Is the authorization plugin setup wrong? Is the user
authenticating as a different user than the one they assume?

-To help `kubectl login` diagnose mis-configured credentials, responses from the
+To help `kubectl login` diagnose misconfigured credentials, responses from the
API server to authenticated requests SHOULD include the `Authentication-Info`
header as defined in [RFC 7615](https://tools.ietf.org/html/rfc7615). The value
will hold name value pairs for `username` and `uid`. Since usernames and IDs
4 changes: 2 additions & 2 deletions docs/proposals/kubelet-eviction.md
@@ -145,7 +145,7 @@ The following node conditions are defined that correspond to the specified evict
| Node Condition | Eviction Signal | Description |
|----------------|------------------|------------------------------------------------------------------|
| MemoryPressure | memory.available | Available memory on the node has satisfied an eviction threshold |
-| DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree | Available disk space and inodes on either the node's root filesytem or image filesystem has satisfied an eviction threshold |
+| DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold |

The `kubelet` will continue to report node status updates at the frequency specified by
`--node-status-update-frequency` which defaults to `10s`.
@@ -300,7 +300,7 @@ In the future, if we store logs of dead containers outside of the container itse
Once the lifetime of containers and logs are split, kubelet can support more user friendly policies
around log evictions. `kubelet` can delete logs of the oldest containers first.
Since logs from the first and the most recent incarnation of a container is the most important for most applications,
-kubelet can try to preserve these logs and aggresively delete logs from other container incarnations.
+kubelet can try to preserve these logs and aggressively delete logs from other container incarnations.

Until logs are split from container's lifetime, `kubelet` can delete dead containers to free up disk space.

10 changes: 5 additions & 5 deletions docs/proposals/multi-platform.md
@@ -46,12 +46,12 @@ For a large enterprise where computing power is the king, one may imagine the fo
- `linux/ppc64le`: For running highly-optimized software; especially massive compute tasks
- `windows/amd64`: For running services that are only compatible on windows; e.g. business applications written in C# .NET

-For a mid-sized business where efficency is most important, these could be combinations:
+For a mid-sized business where efficiency is most important, these could be combinations:
- `linux/amd64`: For running most of the general-purpose computing tasks, plus tasks that require very high single-core performance.
- `linux/arm64`: For running webservices and high-density tasks => the cluster could autoscale in a way that `linux/amd64` machines could hibernate at night in order to minimize power usage.

-For a small business or university, arm is often sufficent:
-- `linux/arm`: Draws very little power, and can run web sites and app backends efficently on Scaleway for example.
+For a small business or university, arm is often sufficient:
+- `linux/arm`: Draws very little power, and can run web sites and app backends efficiently on Scaleway for example.

And last but not least; Raspberry Pi's should be used for [education at universities](http://kubecloud.io/) and are great for **demoing Kubernetes' features at conferences.**

@@ -514,14 +514,14 @@ Linux 0a7da80f1665 4.2.0-25-generic #30-Ubuntu SMP Mon Jan 18 12:31:50 UTC 2016

Here a linux module called `binfmt_misc` registered the "magic numbers" in the kernel, so the kernel may detect which architecture a binary is, and prepend the call with `/usr/bin/qemu-(arm|aarch64|ppc64le)-static`. For example, `/usr/bin/qemu-arm-static` is a statically linked `amd64` binary that translates all ARM syscalls to `amd64` syscalls.

-The multiarch guys have done a great job here, you may find the source for this and other images at [Github](https://github.com/multiarch)
+The multiarch guys have done a great job here, you may find the source for this and other images at [GitHub](https://github.com/multiarch)


## Implementation

## History

-32-bit ARM (`linux/arm`) was the first platform Kubernetes was ported to, and luxas' project [`Kubernetes on ARM`](https://github.com/luxas/kubernetes-on-arm) (released on Github the 31st of September 2015)
+32-bit ARM (`linux/arm`) was the first platform Kubernetes was ported to, and luxas' project [`Kubernetes on ARM`](https://github.com/luxas/kubernetes-on-arm) (released on GitHub the 31st of September 2015)
served as a way of running Kubernetes on ARM devices easily.
The 30th of November 2015, a tracking issue about making Kubernetes run on ARM was opened: [#17981](https://github.com/kubernetes/kubernetes/issues/17981). It later shifted focus to how to make Kubernetes a more platform-independent system.

2 changes: 1 addition & 1 deletion docs/proposals/network-policy.md
@@ -18,7 +18,7 @@ chosen networking solution.
## Implementation

-The implmentation in Kubernetes consists of:
+The implementation in Kubernetes consists of:
- A v1beta1 NetworkPolicy API object
- A structure on the `Namespace` object to control policy, to be developed as an annotation for now.

4 changes: 2 additions & 2 deletions docs/proposals/performance-related-monitoring.md
@@ -48,7 +48,7 @@ Basic ideas:

### Logging monitoring

-Log spam is a serious problem and we need to keep it under control. Simplest way to check for regressions, suggested by @bredanburns, is to compute the rate in which log files
+Log spam is a serious problem and we need to keep it under control. Simplest way to check for regressions, suggested by @brendandburns, is to compute the rate in which log files
grow in e2e tests.

Basic ideas:
@@ -70,7 +70,7 @@ Basic ideas:
Reverse of REST call monitoring done in the API server. We need to know when a given component increases a pressure it puts on the API server. As a proxy for number of
requests sent we can track how saturated are rate limiters. This has additional advantage of giving us data needed to fine-tune rate limiter constants.

-Because we have rate limitting on both ends (client and API server) we should monitor number of inflight requests in API server and how it relates to `max-requests-inflight`.
+Because we have rate limiting on both ends (client and API server) we should monitor number of inflight requests in API server and how it relates to `max-requests-inflight`.

Basic ideas:
- percentage of used non-burst limit,
2 changes: 1 addition & 1 deletion docs/proposals/pod-resource-management.md
@@ -383,7 +383,7 @@ The implementation goals of the first milestone are outlined below.
- [x] Add PodContainerManagerImpl Create and Destroy methods which implements the respective PodContainerManager methods using a cgroupfs driver. #28017
- [x] Have docker manager create container cgroups under pod level cgroups. Inject creation and deletion of pod cgroups into the pod workers. Add e2e tests to test this behaviour. #29049
- [x] Add support for updating policy for the pod cgroups. Add e2e tests to test this behaviour. #29087
-- [ ] Enabling 'cgroup-per-qos' flag in Kubelet: The user is expected to drain the node and restart it before eenabling this feature, but as a fallback we also want to allow the user to just restart the kubelet with the cgroup-per-qos flag enabled to use this feature. As a part of this we need to figure out a policy for pods having Restart Policy: Never. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29946).
+- [ ] Enabling 'cgroup-per-qos' flag in Kubelet: The user is expected to drain the node and restart it before enabling this feature, but as a fallback we also want to allow the user to just restart the kubelet with the cgroup-per-qos flag enabled to use this feature. As a part of this we need to figure out a policy for pods having Restart Policy: Never. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29946).
- [ ] Removing terminated pod's Cgroup : We need to cleanup the pod's cgroup once the pod is terminated. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29927).
- [ ] Kubelet needs to ensure that the cgroup settings are what the kubelet expects them to be. If security is not of concern, one can assume that once kubelet applies cgroups setting successfully, the values will never change unless kubelet changes it. If security is of concern, then kubelet will have to ensure that the cgroup values meet its requirements and then continue to watch for updates to cgroups via inotify and re-apply cgroup values if necessary.
Updating QoS limits needs to happen before pod cgroups values are updated. When pod cgroups are being deleted, QoS limits have to be updated after pod cgroup values have been updated for deletion or pod cgroups have been removed. Given that kubelet doesn't have any checkpoints and updates to QoS and pod cgroups are not atomic, kubelet needs to reconcile cgroups status whenever it restarts to ensure that the cgroups values match kubelet's expectation.
2 changes: 1 addition & 1 deletion docs/proposals/pod-security-context.md
@@ -56,7 +56,7 @@ attributes.
Some use cases require the containers in a pod to run with different security settings. As an
example, a user may want to have a pod with two containers, one of which runs as root with the
privileged setting, and one that runs as a non-root UID. To support use cases like this, it should
-be possible to override appropriate (ie, not intrinsically pod-level) security settings for
+be possible to override appropriate (i.e., not intrinsically pod-level) security settings for
individual containers.

## Proposed Design
