
Commit

Fix trailing whitespace in all docs
eparis committed Jul 31, 2015
1 parent 3c95bd4 commit 024208e
Showing 81 changed files with 311 additions and 311 deletions.
8 changes: 4 additions & 4 deletions docs/admin/authorization.md
@@ -35,7 +35,7 @@ Documentation for other releases can be found at


In Kubernetes, authorization happens as a separate step from authentication.
See the [authentication documentation](authentication.md) for an
overview of authentication.

Authorization applies to all HTTP accesses on the main (secure) apiserver port.
@@ -60,8 +60,8 @@ The following implementations are available, and are selected by flag:
A request has 4 attributes that can be considered for authorization:
- user (the user-string which a user was authenticated as).
- whether the request is readonly (GETs are readonly)
- what resource is being accessed
- applies only to the API endpoints, such as
`/api/v1/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
resource is the empty string.
  - the namespace of the object being accessed, or the empty string if the
@@ -95,7 +95,7 @@ interface.
A request has attributes which correspond to the properties of a policy object.

When a request is received, the attributes are determined. Unknown attributes
are set to the zero value of their type (e.g. empty string, 0, false).

An unset property will match any value of the corresponding
attribute. An unset attribute will match any value of the corresponding property.
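
To make the attribute model concrete, a policy file for the file-backed (ABAC) implementation is a sequence of JSON objects, one per line, whose properties mirror these attributes. A minimal sketch (user names and the namespace are illustrative):

```json
{"user": "alice"}
{"user": "bob", "readonly": true, "resource": "pods", "namespace": "projectCaribou"}
```

Here the first line would let `alice` do anything, while the second would limit `bob` to read-only access to pods in the `projectCaribou` namespace.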
4 changes: 2 additions & 2 deletions docs/admin/cluster-troubleshooting.md
@@ -36,7 +36,7 @@ Documentation for other releases can be found at
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
You may also visit the [troubleshooting document](../troubleshooting.md) for more information.

## Listing your cluster
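
In practice this usually starts with something like `kubectl get nodes`, which should list every machine you expect to see and show each of them as `Ready`; a missing or `NotReady` node is the first thing to investigate.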

@@ -73,7 +73,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
Root causes:
- VM(s) shutdown
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
- Operator error, e.g. misconfigured Kubernetes software or application software

2 changes: 1 addition & 1 deletion docs/admin/etcd.md
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

[etcd](https://coreos.com/etcd/docs/2.0.12/) is a highly-available key value
store which Kubernetes uses for persistent storage of all of its REST API
objects.

## Configuration: high-level goals

2 changes: 1 addition & 1 deletion docs/admin/high-availability.md
@@ -102,7 +102,7 @@ to make sure that each automatically restarts when it fails. To achieve this, w
the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries,
establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run `systemctl enable kubelet`.

If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
2 changes: 1 addition & 1 deletion docs/admin/introduction.md
@@ -90,7 +90,7 @@ project](salt.md).

## Multi-tenant support

* **Resource Quota** ([resource-quota.md](resource-quota.md))

## Security

4 changes: 2 additions & 2 deletions docs/admin/multi-cluster.md
@@ -73,13 +73,13 @@ load and growth.

To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run
on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not
be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
Call the number of regions to be in `R`.

Second, decide how many clusters can be unavailable at the same time while the overall service remains available. Call
the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.

If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
you need `R + U` clusters. If it is not (e.g. you want to ensure low latency for all users in the event of a
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.
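For example, with `R = 3` regions and a tolerance of `U = 2` unavailable clusters, the first case needs `3 + 2 = 5` clusters, while the second needs `3 * 2 = 6` clusters, two per region.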

2 changes: 1 addition & 1 deletion docs/admin/namespaces.md
@@ -52,7 +52,7 @@ Each user community has its own:

A cluster operator may create a Namespace for each unique user community.

The Namespace provides a unique scope for:

1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
2 changes: 1 addition & 1 deletion docs/admin/node.md
@@ -234,7 +234,7 @@ capacity when adding a node.
The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
checks that the sum of the limits of containers on the node is no greater than the node capacity. It
includes all containers started by the kubelet, but not containers started directly by Docker, nor
processes not in containers.

If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
pod. Use the following template:
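
A placeholder pod along these lines (name, image, and limits are illustrative, not the exact template from the full document) simply runs a no-op container with the limits you want reserved:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-reserver
spec:
  containers:
  - name: sleep-forever
    image: gcr.io/google_containers/pause   # no-op container; image is illustrative
    resources:
      limits:
        cpu: 100m      # CPU to reserve (illustrative)
        memory: 100Mi  # memory to reserve (illustrative)
```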
4 changes: 2 additions & 2 deletions docs/admin/resource-quota.md
@@ -160,14 +160,14 @@ Sometimes more complex policies may be desired, such as:

Such policies could be implemented using ResourceQuota as a building-block, by
writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace according to other signals.

Note that resource quota divides up aggregate cluster resources, but it creates no
restrictions around nodes: pods from several namespaces may run on the same node.

## Example

See a [detailed example of how to use resource quota](../user-guide/resourcequota/).
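
For orientation (values are illustrative; the linked walkthrough is the authoritative example), a quota object simply enumerates hard limits in its spec:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"        # total CPU across all pods in the namespace
    memory: 10Gi     # total memory across all pods in the namespace
    pods: "10"       # maximum number of pods
    services: "5"    # maximum number of services
```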

## Read More

2 changes: 1 addition & 1 deletion docs/admin/service-accounts-admin.md
@@ -56,7 +56,7 @@ for a number of reasons:
- Auditing considerations for humans and service accounts may differ.
- A config bundle for a complex system may include definitions of various service
accounts for components of that system. Because service accounts can be created
ad-hoc and have namespaced names, such config is portable.

## Service account automation

2 changes: 1 addition & 1 deletion docs/api.md
@@ -55,7 +55,7 @@ What constitutes a compatible change and how to change the API are detailed by t

## API versioning

To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true.

We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-lifed and/or experimental APIs.

2 changes: 1 addition & 1 deletion docs/design/README.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Kubernetes Design Overview

Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.

Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.

6 changes: 3 additions & 3 deletions docs/design/admission_control_resource_quota.md
@@ -104,7 +104,7 @@ type ResourceQuotaList struct {

## AdmissionControl plugin: ResourceQuota

The **ResourceQuota** plug-in introspects all incoming admission requests.

It makes decisions by evaluating the incoming object against all defined **ResourceQuota.Status.Hard** resource limits in the request
namespace. If acceptance of the resource would cause the total usage of a named resource to exceed its hard limit, the request is denied.
@@ -125,7 +125,7 @@ Any resource that is not part of core Kubernetes must follow the resource naming
This means the resource must have a fully-qualified name (e.g. mycompany.org/shinynewresource)

If the incoming request does not cause the total usage to exceed any of the enumerated hard resource limits, the plug-in will post a
**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read
**ResourceQuota.ResourceVersion**. This keeps incremental usage atomically consistent, but does introduce a bottleneck (intentionally)
into the system.

@@ -184,7 +184,7 @@ resourcequotas 1 1
services 3 5
```

## More information

See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more information.

2 changes: 1 addition & 1 deletion docs/design/architecture.md
@@ -47,7 +47,7 @@ Each node runs Docker, of course. Docker takes care of the details of downloadi

### Kubelet

The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.

### Kube-Proxy

2 changes: 1 addition & 1 deletion docs/design/event_compression.md
@@ -49,7 +49,7 @@ Event compression should be best effort (not guaranteed). Meaning, in the worst
## Design

Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/HEAD/pkg/api/types.go#L1111) the following fields:
* `FirstTimestamp util.Time`
* The date/time of the first occurrence of the event.
* `LastTimestamp util.Time`
* The date/time of the most recent occurrence of the event.
8 changes: 4 additions & 4 deletions docs/design/expansion.md
@@ -87,7 +87,7 @@ available to subsequent expansions.

### Use Case: Variable expansion in command

Users frequently need to pass the values of environment variables to a container's command.
Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a
shell in the container's command and have the shell perform the substitution, or to write a wrapper
script that sets up the environment and runs the command. This has a number of drawbacks:
@@ -130,7 +130,7 @@ The exact syntax for variable expansion has a large impact on how users perceive
feature. We considered implementing a very restrictive subset of the shell `${var}` syntax. This
syntax is an attractive option on some level, because many people are familiar with it. However,
this syntax also has a large number of lesser known features such as the ability to provide
default values for unset variables, perform inline substitution, etc.

In the interest of preventing conflation of the expansion feature in Kubernetes with the shell
feature, we chose a different syntax similar to the one in Makefiles, `$(var)`. We also chose not
@@ -239,7 +239,7 @@ The necessary changes to implement this functionality are:
`ObjectReference` and an `EventRecorder`
2. Introduce `third_party/golang/expansion` package that provides:
1. An `Expand(string, func(string) string) string` function
2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function
3. Make the kubelet expand environment correctly
4. Make the kubelet expand command correctly
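
To make the end state concrete, a container using this syntax would look roughly like the following (a sketch; names and values are illustrative), with the kubelet expanding `$(MESSAGE)` without requiring any shell in the image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["echo", "$(MESSAGE)"]   # expanded by the kubelet before the container starts
    env:
    - name: MESSAGE
      value: "hello world"
```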

@@ -311,7 +311,7 @@ func Expand(input string, mapping func(string) string) string {

#### Kubelet changes

The Kubelet should be made to correctly expand variable references in a container's environment,
command, and args. Changes will need to be made to:

1. The `makeEnvironmentVariables` function in the kubelet; this is used by
14 changes: 7 additions & 7 deletions docs/design/namespaces.md
@@ -52,7 +52,7 @@ Each user community has its own:

A cluster operator may create a Namespace for each unique user community.

The Namespace provides a unique scope for:

1. named resources (to avoid basic naming collisions)
2. delegated management authority to trusted users
@@ -142,7 +142,7 @@ type NamespaceSpec struct {

A *FinalizerName* is a qualified name.

The API Server enforces that a *Namespace* can be deleted from storage if and only if
its *Namespace.Spec.Finalizers* is empty.

A *finalize* operation is the only mechanism to modify the *Namespace.Spec.Finalizers* field post creation.
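
For reference, a freshly created namespace carrying the default finalizer looks roughly like this (only the relevant fields are shown; the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
spec:
  finalizers:
  - kubernetes    # removed via a finalize operation once cleanup completes
status:
  phase: Active
```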
@@ -189,12 +189,12 @@ are known to the cluster.
The *namespace controller* enumerates each known resource type in that namespace and deletes its objects one by one.

Admission control blocks creation of new resources in that namespace in order to prevent a race-condition
where the controller could believe all of a given resource type had been deleted from the namespace,
when in fact some other rogue client agent had created new objects. Using admission control in this
scenario means that the registry implementations for the individual objects do not need to take Namespace life-cycle into account.

Once all objects known to the *namespace controller* have been deleted, the *namespace controller*
executes a *finalize* operation on the namespace that removes the *kubernetes* value from
the *Namespace.Spec.Finalizers* list.

If the *namespace controller* sees a *Namespace* whose *ObjectMeta.DeletionTimestamp* is set, and
@@ -245,13 +245,13 @@ In etcd, we want to continue to support efficient WATCH across namespaces.

Resources that persist content in etcd will have storage paths as follows:

/{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name}

This enables consumers to WATCH /registry/{resourceType} for changes across namespaces for a particular {resourceType}.
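
For example, with the default `/registry` storage prefix, a pod named `nginx` in the `default` namespace would be stored at `/registry/pods/default/nginx`, and a WATCH on `/registry/pods` would observe pod changes in every namespace.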

### Kubelet

The kubelet will register pods it sources from a file or http source with a namespace associated with the
*cluster-id*

### Example: OpenShift Origin managing a Kubernetes Namespace
@@ -362,7 +362,7 @@ This results in the following state:

At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace
has a deletion timestamp and that its list of finalizers is empty. As a result, it knows all
content associated with that namespace has been purged. It performs a final DELETE action
to remove that Namespace from the storage.

At this point, all content associated with that Namespace, and the Namespace itself, are gone.
12 changes: 6 additions & 6 deletions docs/design/persistent-storage.md
@@ -41,11 +41,11 @@ Two new API kinds:

A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. See [Persistent Volume Guide](../user-guide/persistent-volumes/) for how to use it.

A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.
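
For orientation (a sketch; the user guide linked above has the authoritative examples), a claim simply names the access mode and capacity the pod needs:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-1
spec:
  accessModes:
  - ReadWriteOnce    # how the volume will be mounted
  resources:
    requests:
      storage: 3Gi   # capacity requested by the claim
```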

One new system component:

`PersistentVolumeClaimBinder` is a singleton running in the master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage.

One new volume:

@@ -69,7 +69,7 @@ Cluster administrators use the API to manage *PersistentVolumes*. A custom stor

PVs are system objects and, thus, have no namespace.

Many means of dynamic provisioning will eventually be implemented for various storage types.


##### PersistentVolume API
@@ -116,7 +116,7 @@ TBD

#### Events

The implementation of persistent storage will not require events to communicate to the user the state of their claim. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an event to communicate the same unnecessary.

Events that communicate the state of a mounted volume are left to the volume plugins.

@@ -232,9 +232,9 @@ When a claim holder is finished with their data, they can delete their claim.
$ kubectl delete pvc myclaim-1
```

The `PersistentVolumeClaimBinder` will reconcile this by removing the claim reference from the PV and changing the PV's status to 'Released'.

Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->