move admin related docs into docs/admin
lavalamp committed Jul 14, 2015
1 parent bdbcbe2 commit 2c333e4
Showing 32 changed files with 45 additions and 30 deletions.
2 changes: 1 addition & 1 deletion docs/README.md
@@ -17,7 +17,7 @@ certainly want the docs that go with that version.</h1>
* The [User's guide](user-guide.md) is for anyone who wants to run programs and
services on an existing Kubernetes cluster.

-* The [Cluster Admin's guide](cluster-admin-guide.md) is for anyone setting up
+* The [Cluster Admin's guide](admin/README.md) is for anyone setting up
a Kubernetes cluster or administering it.

* The [Developer guide](developer-guide.md) is for anyone wanting to write
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
15 changes: 15 additions & 0 deletions docs/admin/namespaces.md
@@ -0,0 +1,15 @@
# Namespaces

Namespaces help different projects, teams, or customers to share a kubernetes cluster. First, they provide a scope for [Names](../identifiers.md). Second, as our access control code develops, it is expected that it will be convenient to attach authorization and other policy to namespaces.

Use of multiple namespaces is optional. For small teams, they may not be needed.

This is a placeholder document about namespace administration.

TODO: document namespace creation, ownership assignment, visibility rules,
policy creation, interaction with network.

Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](../design/namespaces.md). The user documentation can be found at [Namespaces](../../docs/namespaces.md).


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/namespaces.md?pixel)]()
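As a hedged sketch of the workflow the TODO above will eventually cover — creating a namespace with the v1 API and scoping requests to it — note that the `development` name below is purely illustrative:

```
# Create a namespace from a manifest piped over stdin (name is invented).
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: development
EOF

# Scope subsequent requests to the new namespace.
kubectl get pods --namespace=development
```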
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
4 changes: 2 additions & 2 deletions docs/api.md
@@ -20,7 +20,7 @@ Overall API conventions are described in the [API conventions doc](api-conventio

Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka "master") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swagger-ui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/).

-Remote access to the API is discussed in the [access doc](accessing_the_api.md).
+Remote access to the API is discussed in the [access doc](admin/accessing-the-api.md).

The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](user-guide/kubectl/kubectl.md) command-line tool can be used to create, update, delete, and get API objects.
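For example, a quick way to retrieve the Swagger spec mentioned above, as a sketch — the host and port are illustrative, assuming an apiserver listening on the default insecure local port:

```
# Fetch the Swagger API listing the apiserver exports at /swaggerapi.
curl http://localhost:8080/swaggerapi
```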

@@ -48,7 +48,7 @@ As of June 4, 2015, the Kubernetes v1 API has been enabled by default. The v1bet

### v1 conversion tips (from v1beta3)

-We're working to convert all documentation and examples to v1. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec.
+We're working to convert all documentation and examples to v1. A simple [API conversion tool](admin/cluster-management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec.
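For instance, to validate a manifest before creating it (`pod.yaml` here is a stand-in for your own config file):

```
kubectl create --validate -f pod.yaml
```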

Changes to services are the most significant difference between v1beta3 and v1.

2 changes: 1 addition & 1 deletion docs/compute_resources.md
@@ -147,7 +147,7 @@ Here are some example command lines that extract just the necessary information:
- `kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'`
- `kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'`

-The [resource quota](resource_quota_admin.md) feature can be configured
+The [resource quota](admin/resource-quota.md) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
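A sketch of what such a quota could look like — the namespace, object name, and limits below are invented for illustration:

```
# Create a v1 ResourceQuota in a (hypothetical) team namespace.
cat <<EOF | kubectl create --namespace=myteam -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    cpu: "20"
    memory: 10Gi
    pods: "10"
EOF
```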

2 changes: 1 addition & 1 deletion docs/design/README.md
@@ -24,7 +24,7 @@ Kubernetes enables users to ask a cluster to run a set of containers. The system

Kubernetes is intended to run on a number of cloud providers, as well as on physical hosts.

-A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the availability doc](../availability.md) and [cluster federation proposal](../proposals/federation.md) for more details).
+A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the availability doc](../admin/availability.md) and [cluster federation proposal](../proposals/federation.md) for more details).

Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS platform and toolkit. Therefore, architecturally, we want Kubernetes to be built as a collection of pluggable components and layers, with the ability to use alternative schedulers, controllers, storage systems, and distribution mechanisms, and we're evolving its current code in that direction. Furthermore, we want others to be able to extend Kubernetes functionality, such as with higher-level PaaS functionality or multi-cluster layers, without modification of core Kubernetes source. Therefore, its API isn't just (or even necessarily mainly) targeted at end users, but at tool and extension developers. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers. Consequently, there are no "internal" inter-component APIs. All APIs are visible and available, including the APIs used by the scheduler, the node controller, the replication-controller manager, Kubelet's API, etc. There's no glass to break -- in order to handle more complex use cases, one can just access the lower-level APIs in a fully transparent, composable manner.

2 changes: 1 addition & 1 deletion docs/design/architecture.md
@@ -33,7 +33,7 @@ The **Kubelet** manages [pods](../pods.md) and their containers, their images, t

Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../services.md) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.

-Service endpoints are currently found via [DNS](../dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy.
+Service endpoints are currently found via [DNS](../admin/dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy.
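For example, a container in the same namespace as a hypothetical service named `redis-master` would see (service name and values invented):

```
echo $REDIS_MASTER_SERVICE_HOST   # the service's stable IP, e.g. 10.0.0.11
echo $REDIS_MASTER_SERVICE_PORT   # the service's port, e.g. 6379
```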

## The Kubernetes Control Plane

2 changes: 1 addition & 1 deletion docs/design/namespaces.md
@@ -86,7 +86,7 @@ distinguish distinct entities, and reference particular entities across operatio

A *Namespace* provides an authorization scope for accessing content associated with the *Namespace*.

-See [Authorization plugins](../authorization.md)
+See [Authorization plugins](../admin/authorization.md)

### Limit Resource Consumption

2 changes: 1 addition & 1 deletion docs/design/networking.md
@@ -129,7 +129,7 @@ a pod tries to egress beyond GCE's project the packets must be SNAT'ed

With the primary aim of providing IP-per-pod-model, other implementations exist
to serve the purpose outside of GCE.
-- [OpenVSwitch with GRE/VxLAN](../ovs-networking.md)
+- [OpenVSwitch with GRE/VxLAN](../admin/ovs-networking.md)
- [Flannel](https://github.com/coreos/flannel#flannel)
- [L2 networks](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/)
("With Linux Bridge devices" section)
2 changes: 1 addition & 1 deletion docs/design/service_accounts.md
@@ -34,7 +34,7 @@ They also may interact with services other than the Kubernetes API, such as:
## Design Overview
A service account binds together several things:
- a *name*, understood by users, and perhaps by peripheral systems, for an identity
-- a *principal* that can be authenticated and [authorized](../authorization.md)
+- a *principal* that can be authenticated and [authorized](../admin/authorization.md)
- a [security context](security_context.md), which defines the Linux Capabilities, User IDs, Groups IDs, and other
capabilities and controls on interaction with the file system and OS.
- a set of [secrets](secrets.md), which a container may use to
6 changes: 3 additions & 3 deletions docs/developer-guide.md
@@ -17,7 +17,7 @@ certainly want the docs that go with that version.</h1>
The developer guide is for anyone wanting to either write code which directly accesses the
kubernetes API, or to contribute directly to the kubernetes project.
It assumes some familiarity with concepts in the [User Guide](user-guide.md) and the [Cluster Admin
-Guide](cluster-admin-guide.md).
+Guide](admin/README.md).


## Developing against the Kubernetes API
@@ -35,10 +35,10 @@ Guide](cluster-admin-guide.md).

## Writing Plugins

-* **Authentication Plugins** ([authentication.md](authentication.md)):
+* **Authentication Plugins** ([admin/authentication.md](admin/authentication.md)):
The current and planned states of authentication tokens.

-* **Authorization Plugins** ([authorization.md](authorization.md)):
+* **Authorization Plugins** ([admin/authorization.md](admin/authorization.md)):
Authorization applies to all HTTP requests on the main apiserver port.
This doc explains the available authorization implementations.

2 changes: 1 addition & 1 deletion docs/getting-started-guides/README.md
@@ -62,7 +62,7 @@ Definition of columns:
- **OS** is the base operating system of the nodes.
- **Config. Mgmt** is the configuration management system that helps install and maintain kubernetes software on the
nodes.
-- **Networking** is what implements the [networking model](../../docs/networking.md). Those with networking type
+- **Networking** is what implements the [networking model](../../docs/admin/networking.md). Those with networking type
_none_ may not support more than one node, or may support multiple VM nodes only in the same physical node.
- **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
tests for supporting the API and base features of Kubernetes v1.0.0.
2 changes: 1 addition & 1 deletion docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -27,7 +27,7 @@ Getting started on [Fedora](http://fedoraproject.org)

This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.

2 changes: 1 addition & 1 deletion docs/getting-started-guides/logging.md
@@ -35,7 +35,7 @@ Here is the same information in a picture which shows how the pods might be plac
![Cluster](../../examples/blog-logging/diagrams/cloud-logging.png)

This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod’s execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
-[cluster DNS service](../../docs/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
+[cluster DNS service](../admin/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.

To help explain how cluster level logging works let’s start off with a synthetic log generator pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
14 changes: 7 additions & 7 deletions docs/getting-started-guides/scratch.md
@@ -82,7 +82,7 @@ on how flags are set on various components.
have identical configurations.

### Network
-Kubernetes has a distinctive [networking model](../networking.md).
+Kubernetes has a distinctive [networking model](../admin/networking.md).

Kubernetes allocates an IP address to each pod. When creating a cluster, you
need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
@@ -252,7 +252,7 @@ The admin user (and any users) need:

Your tokens and passwords need to be stored in a file for the apiserver
to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
-The format for this file is described in the [authentication documentation](../authentication.md).
+The format for this file is described in the [authentication documentation](../admin/authentication.md).
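A sketch of that file, assuming the `token,user,uid` layout from the authentication doc (all values below are invented):

```
# /var/lib/kube-apiserver/known_tokens.csv
31ada4fd-adec-460c-809a-9e56ceb75269,admin,1
a9f3dd98-2f42-4f8a-9d5c-7c5e3b1a2b3c,kubelet,2
```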

For distributing credentials to clients, the convention in Kubernetes is to put the credentials
into a [kubeconfig file](../kubeconfig-file.md).
@@ -378,7 +378,7 @@ Arguments to consider:
- `--docker-root=`
- `--root-dir=`
- `--configure-cbr0=` (described above)
-- `--register-node` (described in [Node](../node.md) documentation.
+- `--register-node` (described in [Node](../admin/node.md) documentation.

### kube-proxy

@@ -398,7 +398,7 @@ Each node needs to be allocated its own CIDR range for pod networking.
Call this `NODE_X_POD_CIDR`.

A bridge called `cbr0` needs to be created on each node. The bridge is explained
-further in the [networking documentation](../networking.md). The bridge itself
+further in the [networking documentation](../admin/networking.md). The bridge itself
needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call
this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
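One manual way to set this up, as a sketch — this assumes the `bridge-utils` and `iproute2` tools are installed; adapt it to your distro's network tooling:

```
# Create the bridge and assign it the first address of this node's pod CIDR.
brctl addbr cbr0
ip addr add ${NODE_X_BRIDGE_ADDR} dev cbr0
ip link set dev cbr0 up
```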
@@ -444,7 +444,7 @@ traffic to the internet, but have no problem with them inside your GCE Project.
### Using Configuration Management
The previous steps all involved "conventional" system administration techniques for setting up
machines. You may want to use a Configuration Management system to automate the node configuration
-process. There are examples of [Saltstack](../salt.md), Ansible, Juju, and CoreOS Cloud Config in the
+process. There are examples of [Saltstack](../admin/salt.md), Ansible, Juju, and CoreOS Cloud Config in the
various Getting Started Guides.

## Bootstrapping the Cluster
Expand All @@ -463,7 +463,7 @@ You will need to run one or more instances of etcd.
- Alternative: run 3 or 5 etcd instances.
- Log can be written to non-durable storage because storage is replicated.
- run a single apiserver which connects to one of the etcd nodes.
-See [Availability](../availability.md) for more discussion on factors affecting cluster
+See [Availability](../admin/availability.md) for more discussion on factors affecting cluster
availability.

To run an etcd instance:
@@ -489,7 +489,7 @@ Here are some apiserver flags you may need to set:
- `--tls-cert-file=/srv/kubernetes/server.cert`
- `--tls-private-key-file=/srv/kubernetes/server.key`
- `--admission-control=$RECOMMENDED_LIST`
-  - See [admission controllers](../admission_controllers.md) for recommended arguments.
+  - See [admission controllers](../admin/admission-controllers.md) for recommended arguments.
- `--allow-privileged=true`, only if you trust your cluster user to run pods as root.

If you are following the firewall-only security approach, then use these arguments:
4 changes: 2 additions & 2 deletions docs/overview.md
@@ -24,9 +24,9 @@ Users can create and manage pods themselves, but Kubernetes drastically simplifi

Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels.md), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](annotations.md).

-Kubernetes supports a unique [networking model](networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
+Kubernetes supports a unique [networking model](admin/networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.

-Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
+Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](admin/dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.

Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object’s name, and the object’s [namespace](namespaces.md). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. UID is unique across time and space.

Expand Down