
Commit

apply changes
lavalamp committed Jul 17, 2015
1 parent 2a112a0 commit f7873d2
Showing 91 changed files with 530 additions and 7 deletions.
2 changes: 2 additions & 0 deletions docs/admin/authorization.md
@@ -118,11 +118,13 @@ To permit an action Policy with an unset namespace applies regardless of namespa

Other implementations can be developed fairly easily.
The APIserver calls the Authorizer interface:

```go
type Authorizer interface {
	Authorize(a Attributes) error
}
```

to determine whether or not to allow each API action.

An authorization plugin is a module that implements this interface.
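
For illustration only, a minimal plugin that admits every request could look like the sketch below; it assumes it compiles in the same package as the `Authorizer` and `Attributes` definitions above and is not taken from the actual codebase:

```go
// alwaysAllow is a sketch of a trivial authorization plugin: it implements
// the Authorizer interface shown above and admits every request.
type alwaysAllow struct{}

// Authorize returns nil, which the apiserver treats as "action allowed".
// A real plugin would inspect the request's Attributes (e.g. the requesting
// user, the resource, the namespace) and return a non-nil error to deny.
func (alwaysAllow) Authorize(a Attributes) error {
	return nil
}
```
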
1 change: 1 addition & 0 deletions docs/admin/cluster-large.md
@@ -62,6 +62,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and memory resources they can consume (see PRs [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).

For example:

```YAML
containers:
- image: gcr.io/google_containers/heapster:v0.15.0
1 change: 1 addition & 0 deletions docs/admin/cluster-troubleshooting.md
@@ -38,6 +38,7 @@ problems please see the [application troubleshooting guide](../user-guide/applic
The first thing to debug in your cluster is whether your nodes are all registered correctly.

Run

```
kubectl get nodes
```
6 changes: 4 additions & 2 deletions docs/admin/high-availability.md
@@ -131,6 +131,7 @@ for ```${NODE_IP}``` on each machine.

#### Validating your cluster
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with

```
etcdctl member list
```
@@ -209,11 +210,12 @@ master election. On each of the three apiserver nodes, we run a small utility a
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
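
A hedged sketch of that control loop is shown below; `tryAcquireOrRenewLease`, `startComponent`, and `stopComponent` are hypothetical placeholders standing in for the etcd compare-and-swap call and the component start/stop mechanism, not the real podmaster code or etcd client API:

```go
package main

import (
	"log"
	"time"
)

// tryAcquireOrRenewLease is a hypothetical stand-in for an etcd compare-and-swap:
// create the lease key if it is free, or refresh its TTL if this node already
// holds it. It reports whether this node currently owns the lease.
func tryAcquireOrRenewLease(key, whoami string, ttl time.Duration) (bool, error) {
	return false, nil // placeholder
}

// startComponent and stopComponent are placeholders for however the managed
// master component (e.g. the scheduler) is started or stopped on this node.
func startComponent(name string) { log.Printf("ensuring %s is running", name) }
func stopComponent(name string)  { log.Printf("ensuring %s is stopped", name) }

func main() {
	const leaseKey, whoami = "/leases/kube-scheduler", "apiserver-1"
	for {
		won, err := tryAcquireOrRenewLease(leaseKey, whoami, 30*time.Second)
		if err != nil {
			log.Printf("lease error: %v", err)
		}
		if won {
			startComponent("kube-scheduler") // election winner runs the component
		} else {
			stopComponent("kube-scheduler") // loser makes sure it is not running
		}
		time.Sleep(10 * time.Second)
	}
}
```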

In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](proposals/high-availability.md)
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](../proposals/high-availability.md)

### Installing configuration files

First, create empty log files on each node, so that Docker will mount the files rather than creating new directories:

```
touch /var/log/kube-scheduler.log
touch /var/log/kube-controller-manager.log
@@ -244,7 +246,7 @@ set the ```--apiserver``` flag to your replicated endpoint.

## Vagrant up!

We indeed have an initial proof of concept tester for this, which is available [here](../examples/high-availability/).
We indeed have an initial proof of concept tester for this, which is available [here](../../examples/high-availability/).

It implements the major concepts of the podmaster HA implementation (with a few minor reductions for simplicity), alongside a quick smoke test using k8petstore.

1 change: 1 addition & 0 deletions docs/admin/networking.md
@@ -152,6 +152,7 @@ outbound internet access. A linux bridge (called `cbr0`) is configured to exist
on that subnet, and is passed to docker's `--bridge` flag.

We start Docker with:

```
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
5 changes: 5 additions & 0 deletions docs/admin/node.md
@@ -97,6 +97,7 @@ Current valid condition is `Ready`. In the future, we plan to add more.
condition provides different level of understanding for node health.
Node condition is represented as a json object. For example,
the following conditions mean the node is in sane state:

```json
"conditions": [
{
@@ -125,6 +126,7 @@ or from your physical or virtual machines. What this means is that when
Kubernetes creates a node, it only creates a representation for the node.
After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:

```json
{
"kind": "Node",
@@ -196,6 +198,7 @@ Making a node unschedulable will prevent new pods from being scheduled to that
node, but will not affect any existing pods on the node. This is useful as a
preparatory step before a node reboot, etc. For example, to mark a node
unschedulable, run this command:

```
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
```
@@ -214,6 +217,7 @@ processes not in containers.

If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
pod. Use the following template:

```
apiVersion: v1
kind: Pod
@@ -228,6 +232,7 @@ spec:
cpu: 100m
memory: 100Mi
```

Set the `cpu` and `memory` values to the amount of resources you want to reserve.
Place the file in the manifest directory (`--config=DIR` flag of kubelet). Do this
on each kubelet where you want to reserve resources.
1 change: 1 addition & 0 deletions docs/admin/resource-quota.md
@@ -84,6 +84,7 @@ This means the resource must have a fully-qualified name (i.e. mycompany.org/shi

## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas

```
$ kubectl namespace myspace
$ cat <<EOF > quota.json
1 change: 1 addition & 0 deletions docs/admin/salt.md
@@ -48,6 +48,7 @@ Each salt-minion service is configured to interact with the **salt-master** serv
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
```

The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-minion with all the required capabilities needed to run Kubernetes.

If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
1 change: 1 addition & 0 deletions docs/admin/service-accounts-admin.md
@@ -109,6 +109,7 @@ $ kubectl describe secret mysecretname
```

#### To delete/invalidate a service account token

```
kubectl delete secret mysecretname
```
2 changes: 1 addition & 1 deletion docs/design/admission_control_limit_range.md
@@ -164,7 +164,7 @@ It is expected we will want to define limits for particular pods or containers b
To make a **LimitRangeItem** more restrictive, we intend to add these additional restrictions at a future point in time.

## Example
See the [example of Limit Range](../user-guide/limitrange) for more information.
See the [example of Limit Range](../user-guide/limitrange/) for more information.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
2 changes: 1 addition & 1 deletion docs/design/admission_control_resource_quota.md
@@ -185,7 +185,7 @@ services 3 5
```

## More information
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota) for more information.
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more information.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
1 change: 1 addition & 0 deletions docs/design/event_compression.md
@@ -84,6 +84,7 @@ Each binary that generates events:

## Example
Sample kubectl output

```
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE
Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-minion-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Starting kubelet.
7 changes: 7 additions & 0 deletions docs/design/resources.md
@@ -87,23 +87,27 @@ Internally (i.e., everywhere else), Kubernetes will represent resource quantitie
Both users and a number of system components, such as schedulers, (horizontal) auto-scalers, (vertical) auto-sizers, load balancers, and worker-pool managers need to reason about resource requirements of workloads, resource capacities of nodes, and resource usage. Kubernetes divides specifications of *desired state*, aka the Spec, and representations of *current state*, aka the Status. Resource requirements and total node capacity fall into the specification category, while resource usage, characterizations derived from usage (e.g., maximum usage, histograms), and other resource demand signals (e.g., CPU load) clearly fall into the status category and are discussed in the Appendix for now.

Resource requirements for a container or pod should have the following form:

```
resourceRequirementSpec: [
  request: [ cpu: 2.5, memory: "40Mi" ],
  limit: [ cpu: 4.0, memory: "99Mi" ],
]
```

Where:
* _request_ [optional]: the amount of resources being requested, or that were requested and have been allocated. Scheduler algorithms will use these quantities to test feasibility (whether a pod will fit onto a node). If a container (or pod) tries to use more resources than its _request_, any associated SLOs are voided &mdash; e.g., the program it is running may be throttled (compressible resource types), or the attempt may be denied. If _request_ is omitted for a container, it defaults to _limit_ if that is explicitly specified, otherwise to an implementation-defined value; this will always be 0 for a user-defined resource type. If _request_ is omitted for a pod, it defaults to the sum of the (explicit or implicit) _request_ values for the containers it encloses. A sketch of these defaulting rules appears after this list.

* _limit_ [optional]: an upper bound or cap on the maximum amount of resources that will be made available to a container or pod; if a container or pod uses more resources than its _limit_, it may be terminated. The _limit_ defaults to "unbounded"; in practice, this probably means the capacity of an enclosing container, pod, or node, but may result in non-deterministic behavior, especially for memory.
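
To illustrate the defaulting rules above, here is a hedged sketch; it is not actual Kubernetes code, and the `Quantity` and `ResourceRequirementSpec` types are simplified stand-ins for whatever representation is ultimately used:

```go
// Simplified stand-ins for illustration only.
type Quantity float64

type ResourceRequirementSpec struct {
	Request map[string]Quantity
	Limit   map[string]Quantity
}

// defaultRequest fills in a missing request for one resource type, following the
// rules above: an explicit request wins; otherwise fall back to an explicit limit;
// otherwise use an implementation-defined default, which is always 0 for a
// user-defined resource type. A pod's omitted request would, analogously, default
// to the sum of the (explicit or implicit) requests of its containers.
func defaultRequest(spec *ResourceRequirementSpec, resource string, implDefault Quantity, userDefined bool) {
	if spec.Request == nil {
		spec.Request = map[string]Quantity{}
	}
	if _, ok := spec.Request[resource]; ok {
		return // explicit request wins
	}
	if limit, ok := spec.Limit[resource]; ok {
		spec.Request[resource] = limit
		return
	}
	if userDefined {
		spec.Request[resource] = 0
		return
	}
	spec.Request[resource] = implDefault
}
```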

Total capacity for a node should have a similar structure:

```
resourceCapacitySpec: [
  total: [ cpu: 12, memory: "128Gi" ]
]
```

Where:
* _total_: the total allocatable resources of a node. Initially, the resources at a given scope will bound the resources of the sum of inner scopes.

@@ -149,6 +153,7 @@ rather than decimal ones: "64MiB" rather than "64MB".

## Resource metadata
A resource type may have an associated read-only ResourceType structure that contains metadata about the type. For example:

```
resourceTypes: [
"kubernetes.io/memory": [
@@ -194,6 +199,7 @@ resourceStatus: [
```

where a `<CPU-info>` or `<memory-info>` structure looks like this:

```
{
mean: <value> # arithmetic mean
@@ -209,6 +215,7 @@ where a `<CPU-info>` or `<memory-info>` structure looks like this:
]
}
```

All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_
and predicted

1 change: 1 addition & 0 deletions docs/design/security_context.md
@@ -179,6 +179,7 @@ type SELinuxOptions struct {
Level string
}
```

### Admission

It is up to an admission plugin to determine if the security context is acceptable or not. At the
1 change: 1 addition & 0 deletions docs/design/service_accounts.md
@@ -61,6 +61,7 @@ A service account binds together several things:
## Design Discussion

A new object Kind is added:

```go
type ServiceAccount struct {
TypeMeta `json:",inline" yaml:",inline"`
4 changes: 4 additions & 0 deletions docs/devel/api-conventions.md
@@ -196,12 +196,15 @@ References in the status of the referee to the referrer may be permitted, when t
Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields.
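
In the Go API types, this convention usually corresponds to a slice of structs that each carry their own name field, rather than a map keyed by name; the sketch below uses illustrative names, not the exact Kubernetes type definitions:

```go
// Illustrative only; not the exact Kubernetes API types.
type ContainerPort struct {
	Name          string `json:"name"`
	ContainerPort int    `json:"containerPort"`
}

type Container struct {
	// Preferred: a list of named subobjects.
	Ports []ContainerPort `json:"ports"`

	// Avoided: a map of subobjects keyed by name, e.g.
	//   Ports map[string]ContainerPort
}
```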

For example:

```yaml
ports:
- name: www
containerPort: 80
```
vs.
```yaml
ports:
www:
@@ -518,6 +521,7 @@ A ```Status``` kind will be returned by the API in two cases:
The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority.

**Example:**

```
$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana

1 change: 1 addition & 0 deletions docs/devel/api_changes.md
@@ -282,6 +282,7 @@ conversion functions when writing your conversion functions.
Once all the necessary manually written conversions are added, you need to
regenerate auto-generated ones. To regenerate them:
- run

```
$ hack/update-generated-conversions.sh
```
5 changes: 5 additions & 0 deletions docs/devel/developer-guides/vagrant.md
@@ -83,6 +83,7 @@ vagrant ssh minion-3
```

To view the service status and/or logs on the kubernetes-master:

```sh
vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
@@ -96,6 +97,7 @@ vagrant ssh master
```

To view the services on any of the nodes:

```sh
vagrant ssh minion-1
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
@@ -109,17 +111,20 @@ vagrant ssh minion-1
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.

To push updates to new Kubernetes code after making source changes:

```sh
./cluster/kube-push.sh
```

To stop and then restart the cluster:

```sh
vagrant halt
./cluster/kube-up.sh
```

To destroy the cluster:

```sh
vagrant destroy
```
11 changes: 11 additions & 0 deletions docs/devel/development.md
@@ -109,13 +109,15 @@ source control system). Use ```apt-get install mercurial``` or ```yum install m
directly from mercurial.

2) Create a new GOPATH for your tools and install godep:

```
export GOPATH=$HOME/go-tools
mkdir -p $GOPATH
go get github.com/tools/godep
```

3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile:

```
export GOPATH=$HOME/go-tools
export PATH=$PATH:$GOPATH/bin
@@ -125,6 +127,7 @@ export PATH=$PATH:$GOPATH/bin
Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep).

1) Devote a directory to this endeavor:

```
export KPATH=$HOME/code/kubernetes
mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes
@@ -134,6 +137,7 @@ git clone https://path/to/your/fork .
```

2) Set up your GOPATH.

```
# Option A: this will let your builds see packages that exist elsewhere on your system.
export GOPATH=$KPATH:$GOPATH
@@ -143,12 +147,14 @@ export GOPATH=$KPATH
```

3) Populate your new GOPATH.

```
cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes
godep restore
```

4) Next, you can either add a new dependency or update an existing one.

```
# To add a new dependency, do:
cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes
@@ -218,6 +224,7 @@ KUBE_COVER=y hack/test-go.sh
At the end of the run, an HTML report will be generated, with its path printed to stdout.

To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example:

```
cd kubernetes
KUBE_COVER=y hack/test-go.sh pkg/kubectl
@@ -230,6 +237,7 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover
## Integration tests

You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path, please make sure it is installed and in your ``$PATH``.

```
cd kubernetes
hack/test-integration.sh
@@ -238,12 +246,14 @@ hack/test-integration.sh
## End-to-End tests

You can run an end-to-end test, which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").

```
cd kubernetes
hack/e2e-test.sh
```

Pressing control-C should result in an orderly shutdown, but if something goes wrong and you still have some VMs running, you can force a cleanup with this command:

```
go run hack/e2e.go --down
```
@@ -281,6 +291,7 @@ hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env
```

### Combining flags

```sh
# Flags can be combined, and their actions will take place in this order:
# -build, -push|-up|-pushup, -test|-tests=..., -down
2 changes: 2 additions & 0 deletions docs/devel/flaky-tests.md
@@ -42,6 +42,7 @@ _Note: these instructions are mildly hacky for now, as we get run once semantics
There is a testing image ```brendanburns/flake``` up on the docker hub. We will use this image to test our fix.

Create a replication controller with the following config:

```yaml
apiVersion: v1
kind: ReplicationController
@@ -63,6 +64,7 @@ spec:
- name: REPO_SPEC
value: https://github.com/GoogleCloudPlatform/kubernetes
```
Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
```