Copy edits for typos
epc committed Dec 22, 2015
1 parent d20ab89 commit f968c59
Showing 21 changed files with 30 additions and 30 deletions.
2 changes: 1 addition & 1 deletion docs/admin/garbage-collection.md
@@ -66,7 +66,7 @@ pod (UID, container name) pair is allowed to have, less than zero for no limit.
`MaxContainers` is the max number of total dead containers, less than zero for no limit as well.

kubelet sorts out containers which are unidentified or stay out of bounds set by previous
-mentioned three flags. Gernerally the oldest containers are removed first. Since we take both
+mentioned three flags. Generally the oldest containers are removed first. Since we take both
`MaxPerPodContainer` and `MaxContainers` into consideration, it could happen when they
have conflict -- retaining the max number of containers per pod goes out of range set by max
number of global dead containers. In this case, we would sacrifice the `MaxPerPodContainer`
2 changes: 1 addition & 1 deletion docs/api-reference/v1/operations.html
@@ -25733,7 +25733,7 @@ <h4 id="_tags_199">Tags</h4>
</div>
<div id="footer">
<div id="footer-text">
-Last updated 2015-12-15 06:44:31 UTC
+Last updated 2015-12-22 14:29:57 UTC
</div>
</div>
</body>
2 changes: 1 addition & 1 deletion docs/api.md
@@ -81,7 +81,7 @@ in more detail in the [API Changes documentation](devel/api_changes.md#alpha-bet
- Support for the overall feature will not be dropped, though details may change.
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
-API objects. The editing process may require some thought. This may require downtime for appplications that rely on the feature.
+API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
multiple clusters which can be upgraded independently, you may be able to relax this restriction.
- **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
6 changes: 3 additions & 3 deletions docs/design/aws_under_the_hood.md
@@ -95,7 +95,7 @@ you with sufficient instance storage for your needs.

Note: The master uses a persistent volume ([etcd](architecture.md#etcd)) to track
its state. Similar to nodes, containers are mostly run against instance
-storage, except that we repoint some important data onto the peristent volume.
+storage, except that we repoint some important data onto the persistent volume.

The default storage driver for Docker images is aufs. Specifying btrfs (by passing the environment
variable `DOCKER_STORAGE=btrfs` to kube-up) is also a good choice for a filesystem. btrfs
@@ -176,7 +176,7 @@ a distribution file, and then are responsible for attaching and detaching EBS
volumes from itself.

The node policy is relatively minimal. The master policy is probably overly
-permissive. The security concious may want to lock-down the IAM policies
+permissive. The security conscious may want to lock-down the IAM policies
further ([#11936](http://issues.k8s.io/11936)).

We should make it easier to extend IAM permissions and also ensure that they
@@ -275,7 +275,7 @@ Salt, for example). These objects can currently be manually created:

* Set the `AWS_S3_BUCKET` environment variable to use an existing S3 bucket.
* Set the `VPC_ID` environment variable to reuse an existing VPC.
-* Set the `SUBNET_ID` environemnt variable to reuse an existing subnet.
+* Set the `SUBNET_ID` environment variable to reuse an existing subnet.
* If your route table has a matching `KubernetesCluster` tag, it will
be reused.
* If your security groups are appropriately named, they will be reused.
2 changes: 1 addition & 1 deletion docs/design/daemon.md
@@ -65,7 +65,7 @@ The DaemonSet supports standard API features:
- Using the pod’s nodeSelector field, DaemonSets can be restricted to operate over nodes that have a certain label. For example, suppose that in a cluster some nodes are labeled ‘app=database’. You can use a DaemonSet to launch a datastore pod on exactly those nodes labeled ‘app=database’.
- Using the pod's nodeName field, DaemonSets can be restricted to operate on a specified node.
- The PodTemplateSpec used by the DaemonSet is the same as the PodTemplateSpec used by the Replication Controller.
-- The initial implementation will not guarnatee that DaemonSet pods are created on nodes before other pods.
+- The initial implementation will not guarantee that DaemonSet pods are created on nodes before other pods.
- The initial implementation of DaemonSet does not guarantee that DaemonSet pods show up on nodes (for example because of resource limitations of the node), but makes a best effort to launch DaemonSet pods (like Replication Controllers do with pods). Subsequent revisions might ensure that DaemonSet pods show up on nodes, preempting other pods if necessary.
- The DaemonSet controller adds an annotation "kubernetes.io/created-by: \<json API object reference\>"
- YAML example:
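The YAML example referenced above is cut off by the diff. As a hedged sketch (not the document's own example) of the nodeSelector restriction described in this hunk, a DaemonSet confined to nodes labeled `app=database` might look roughly like this; the `extensions/v1beta1` group reflects the API of that era, and the name and image are placeholders:

```yaml
# Sketch only: name and image are hypothetical, not taken from daemon.md.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: datastore
spec:
  template:
    metadata:
      labels:
        name: datastore
    spec:
      nodeSelector:
        app: database            # schedule only onto nodes carrying the app=database label
      containers:
      - name: datastore
        image: example/datastore # hypothetical image
```

Nodes would typically be labeled first, for example with `kubectl label node <node-name> app=database`.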
2 changes: 1 addition & 1 deletion docs/devel/api-conventions.md
@@ -403,7 +403,7 @@ Using the `omitempty` tag causes swagger documentation to reflect that the field

Using a pointer allows distinguishing unset from the zero value for that type.
There are some cases where, in principle, a pointer is not needed for an optional field
-since the zero value is forbidden, and thus imples unset. There are examples of this in the
+since the zero value is forbidden, and thus implies unset. There are examples of this in the
codebase. However:

- it can be difficult for implementors to anticipate all cases where an empty value might need to be
4 changes: 2 additions & 2 deletions docs/devel/api_changes.md
@@ -558,7 +558,7 @@ New feature development proceeds through a series of stages of increasing maturi

- Development level
- Object Versioning: no convention
-- Availability: not commited to main kubernetes repo, and thus not available in offical releases
+- Availability: not committed to main kubernetes repo, and thus not available in official releases
- Audience: other developers closely collaborating on a feature or proof-of-concept
- Upgradeability, Reliability, Completeness, and Support: no requirements or guarantees
- Alpha level
@@ -590,7 +590,7 @@ New feature development proceeds through a series of stages of increasing maturi
tests complete; the API has had a thorough API review and is thought to be complete, though use
during beta may frequently turn up API issues not thought of during review
- Upgradeability: the object schema and semantics may change in a later software release; when
-this happens, an upgrade path will be documentedr; in some cases, objects will be automatically
+this happens, an upgrade path will be documented; in some cases, objects will be automatically
converted to the new version; in other cases, a manual upgrade may be necessary; a manual
upgrade may require downtime for anything relying on the new feature, and may require
manual conversion of objects to the new version; when manual conversion is necessary, the
2 changes: 1 addition & 1 deletion docs/devel/automation.md
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

## Overview

-Kubernetes uses a variety of automated tools in an attempt to relieve developers of repeptitive, low
+Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low
brain power work. This document attempts to describe these processes.


@@ -179,7 +179,7 @@ exports.queue_storage_if_needed = function() {
]);
process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account'];
} else {
-// Preserve it for resizing, so we don't create a new one by accedent,
+// Preserve it for resizing, so we don't create a new one by accident,
// when the environment variable is unset
conf.resources['storage_account'] = process.env['AZURE_STORAGE_ACCOUNT'];
}
2 changes: 1 addition & 1 deletion docs/getting-started-guides/rkt/README.md
@@ -131,7 +131,7 @@ For more complete applications, please look in the [examples directory](../../..

### Debugging

-Here are severals tips for you when you run into any issues.
+Here are several tips for you when you run into any issues.

##### Check logs

2 changes: 1 addition & 1 deletion docs/proposals/deployment.md
@@ -188,7 +188,7 @@ For each pending deployment, it will:
and the old RCs have been ramped down to 0.
6. Cleanup.

-DeploymentController is stateless so that it can recover incase it crashes during a deployment.
+DeploymentController is stateless so that it can recover in case it crashes during a deployment.

### MinReadySeconds

2 changes: 1 addition & 1 deletion docs/proposals/pod-security-context.md
@@ -250,7 +250,7 @@ defined as:
> 3. It must be possible to round-trip your change (convert to different API versions and back) with
> no loss of information.
-Previous versions of this proposal attempted to deal with backward compatiblity by defining
+Previous versions of this proposal attempted to deal with backward compatibility by defining
the affect of setting the pod-level fields on the container-level fields. While trying to find
consensus on this design, it became apparent that this approach was going to be extremely complex
to implement, explain, and support. Instead, we will approach backward compatibility as follows:
4 changes: 2 additions & 2 deletions docs/proposals/resource-qos.md
@@ -119,7 +119,7 @@ Supporting other platforms:

Protecting containers and guarantees:
- **Control loops**: The OOM score assignment is not perfect for burstable containers, and system OOM kills are expensive. TODO: Add a control loop to reduce memory pressure, while ensuring guarantees for various containers.
-- **Kubelet, Kube-proxy, Docker daemon protection**: If a system is overcommitted with memory guaranteed containers, then all prcoesses will have an OOM_SCORE of 0. So Docker daemon could be killed instead of a container or pod being killed. TODO: Place all user-pods into a separate cgroup, and set a limit on the memory they can consume. Initially, the limits can be based on estimated memory usage of Kubelet, Kube-proxy, and CPU limits, eventually we can monitor the resources they consume.
+- **Kubelet, Kube-proxy, Docker daemon protection**: If a system is overcommitted with memory guaranteed containers, then all processes will have an OOM_SCORE of 0. So Docker daemon could be killed instead of a container or pod being killed. TODO: Place all user-pods into a separate cgroup, and set a limit on the memory they can consume. Initially, the limits can be based on estimated memory usage of Kubelet, Kube-proxy, and CPU limits, eventually we can monitor the resources they consume.
- **OOM Assignment Races**: We cannot set OOM_SCORE_ADJ of a process until it has launched. This could lead to races. For example, suppose that a memory burstable container is using 70% of the system’s memory, and another burstable container is using 30% of the system’s memory. A best-effort burstable container attempts to launch on the Kubelet. Initially the best-effort container is using 2% of memory, and has an OOM_SCORE_ADJ of 20. So its OOM_SCORE is lower than the burstable pod using 70% of system memory. The burstable pod will be evicted by the best-effort pod. Short-term TODO: Implement a restart policy where best-effort pods are immediately evicted if OOM killed, but burstable pods are given a few retries. Long-term TODO: push support for OOM scores in cgroups to the upstream Linux kernel.
- **Swap Memory**: The QoS proposal assumes that swap memory is disabled. If swap is enabled, then resource guarantees (for pods that specify resource requirements) will not hold. For example, suppose 2 guaranteed pods have reached their memory limit. They can start allocating memory on swap space. Eventually, if there isn’t enough swap space, processes in the pods might get killed. TODO: ensure that swap space is disabled on our cluster setups scripts.

@@ -128,7 +128,7 @@ Killing and eviction mechanics:
- **Out of Resource Eviction**: If a container in a multi-container pod fails, we might want restart the entire pod instead of just restarting the container. In some cases (e.g. if a memory best-effort container is out of resource killed), we might change pods to "failed" phase and pods might need to be evicted. TODO: Draft a policy for out of resource eviction and implement it.

Maintaining CPU performance:
-- **CPU-sharing Issues** Suppose that a node is running 2 container: a container A requesting for 50% of CPU (but without a CPU limit), and a container B not requesting for resoruces. Suppose that both pods try to use as much CPU as possible. After the proposal is implemented, A will get 100% of the CPU, and B will get around 0% of the CPU. However, a fairer scheme would give the Burstable container 75% of the CPU and the Best-Effort container 25% of the CPU (since resources past the Burstable container’s request are not guaranteed). TODO: think about whether this issue to be solved, implement a solution.
+- **CPU-sharing Issues** Suppose that a node is running 2 container: a container A requesting for 50% of CPU (but without a CPU limit), and a container B not requesting for resources. Suppose that both pods try to use as much CPU as possible. After the proposal is implemented, A will get 100% of the CPU, and B will get around 0% of the CPU. However, a fairer scheme would give the Burstable container 75% of the CPU and the Best-Effort container 25% of the CPU (since resources past the Burstable container’s request are not guaranteed). TODO: think about whether this issue to be solved, implement a solution.
- **CPU kills**: System tasks or daemons like the Kubelet could consume more CPU, and we won't be able to guarantee containers the CPU amount they requested. If the situation persists, we might want to kill the container. TODO: Draft a policy for CPU usage killing and implement it.
- **CPU limits**: Enabling CPU limits can be problematic, because processes might be hard capped and might stall for a while. TODO: Enable CPU limits intelligently using CPU quota and core allocation.

6 changes: 3 additions & 3 deletions docs/proposals/selinux.md
@@ -64,7 +64,7 @@ Goals of this design:
### Docker

Docker uses a base SELinux context and calculates a unique MCS label per container. The SELinux
-context of a container can be overriden with the `SecurityOpt` api that allows setting the different
+context of a container can be overridden with the `SecurityOpt` api that allows setting the different
parts of the SELinux context individually.

Docker has functionality to relabel bind-mounts with a usable SElinux and supports two different
@@ -73,7 +73,7 @@ use-cases:
1. The `:Z` bind-mount flag, which tells Docker to relabel a bind-mount with the container's
SELinux context
2. The `:z` bind-mount flag, which tells Docker to relabel a bind-mount with the container's
-SElinux context, but remove the MCS labels, making the volume shareable beween containers
+SElinux context, but remove the MCS labels, making the volume shareable between containers

We should avoid using the `:z` flag, because it relaxes the SELinux context so that any container
(from an SELinux standpoint) can use the volume.
@@ -200,7 +200,7 @@ From the above, we know that label management must be applied:
Volumes should be relabeled with the correct SELinux context. Docker has this capability today; it
is desireable for other container runtime implementations to provide similar functionality.

-Relabeling should be an optional aspect of a volume plugin to accomodate:
+Relabeling should be an optional aspect of a volume plugin to accommodate:

1. volume types for which generalized relabeling support is not sufficient
2. testing for each volume plugin individually
4 changes: 2 additions & 2 deletions docs/proposals/volumes.md
@@ -45,7 +45,7 @@ Goals of this design:

1. Enumerate the different use-cases for volume usage in pods
2. Define the desired goal state for ownership and permission management in Kubernetes
-3. Describe the changes necessary to acheive desired state
+3. Describe the changes necessary to achieve desired state

## Constraints and Assumptions

@@ -250,7 +250,7 @@ override the primary GID and should be safe to use in images that expect GID 0.
### Setting ownership and permissions on volumes

For `EmptyDir`-based volumes and unshared storage, `chown` and `chmod` on the node are sufficient to
-set ownershp and permissions. Shared storage is different because:
+set ownership and permissions. Shared storage is different because:

1. Shared storage may not live on the node a pod that uses it runs on
2. Shared storage may be externally managed
2 changes: 1 addition & 1 deletion docs/user-guide/ingress.md
@@ -243,7 +243,7 @@ __Default Backends__: An Ingress with no rules, like the one shown in the previo

### Loadbalancing

-An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distil loadbalancing patterns that are applicable cross platform into the Ingress resource.
+An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.

It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/production-pods.md#liveness-and-readiness-probes-aka-health-checks) which allow you to achieve the same end result.
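As an illustrative aside on the rule-less Ingress mentioned at the top of this hunk (a hedged sketch, not content from the changed file; the service name and port are hypothetical and `extensions/v1beta1` reflects the API group of that era):

```yaml
# Sketch: with no rules, all traffic is sent to the single default backend.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-example
spec:
  backend:
    serviceName: testsvc          # hypothetical Service
    servicePort: 80
```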

6 changes: 3 additions & 3 deletions docs/user-guide/jobs.md
@@ -161,7 +161,7 @@ the same schema as a [pod](pods.md), except it is nested and does not have an `a
`kind`.

In addition to required fields for a Pod, a pod template in a job must specify appropriate
-lables (see [pod selector](#pod-selector) and an appropriate restart policy.
+labels (see [pod selector](#pod-selector) and an appropriate restart policy.

Only a [`RestartPolicy`](pod-states.md) equal to `Never` or `OnFailure` are allowed.

@@ -171,7 +171,7 @@ The `.spec.selector` field is a label query over a set of pods.

The `spec.selector` is an object consisting of two fields:
* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](replication-controller.md)
-* `matchExpressions` - allows to build more sophisticated selectors by specyfing key,
+* `matchExpressions` - allows to build more sophisticated selectors by specifying key,
list of values and an operator that relates the key and values.

When the two are specified the result is ANDed.
@@ -215,7 +215,7 @@ restarted locally, or else specify `.spec.template.containers[].restartPolicy =
See [pods-states](pod-states.md) for more information on `restartPolicy`.

An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
-(node is upgraded, rebooted, delelted, etc.), or if a container of the Pod fails and the
+(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.containers[].restartPolicy = "Never"`. When a Pod fails, then the Job controller
starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
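As an illustrative aside tying together the selector and restart-policy requirements in the hunks above (a hedged sketch, not content from the changed file; the `extensions/v1beta1` group, labels, and command are assumptions reflecting common usage at the time):

```yaml
# Sketch: selector combining matchLabels and matchExpressions (ANDed),
# with a restartPolicy of Never as required for Jobs.
apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: pi
spec:
  selector:
    matchLabels:
      app: pi
    matchExpressions:
    - {key: tier, operator: In, values: [batch]}
  template:
    metadata:
      labels:
        app: pi                   # pod labels must satisfy the selector above
        tier: batch
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never        # only Never or OnFailure are allowed
```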

