Merge pull request kubernetes#10827 from gmarek/doc
Update cluster management doc.
davidopp committed Jul 20, 2015
2 parents d578e28 + de07cbd commit 2d88675
Showing 4 changed files with 32 additions and 6 deletions.
2 changes: 1 addition & 1 deletion docs/admin/README.md
@@ -70,7 +70,7 @@ uses OpenVSwitch to set up networking between pods across
Kubernetes nodes.

If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
project.](salt.md).
project](salt.md).

## Managing a cluster, including upgrades

8 changes: 5 additions & 3 deletions docs/admin/cluster-management.md
@@ -33,7 +33,9 @@ Documentation for other releases can be found at

# Cluster Management

This doc is in progress.
## Creating and configuring a Cluster

To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](../../docs/getting-started-guides/README.md) depending on your environment.

## Upgrading a cluster

@@ -73,7 +75,7 @@ If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer,
then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
replication controller, then a new copy of the pod will be started on a different node. So, in the case where all
pods are replicated, upgrades can be done without special coordination.
pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.

If you want more control over the upgrading process, you may use the following workflow:
1. Mark the node to be rebooted as unschedulable:
@@ -82,7 +84,7 @@ If you want more control over the upgrading process, you may use the following workflow:
1. Get the pods off the machine, via any of the following strategies:
1. wait for finite-duration pods to complete
1. delete pods with `kubectl delete pods $PODNAME`
1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
1. Work on the node
1. Make the node schedulable again:
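As a sketch, the unschedulable toggle used in the first and last steps above can be driven with `kubectl` roughly as follows (the patch syntax shown is an assumption for this release and may differ; `$NODENAME` is a placeholder):

```
# Mark the node unschedulable so no new pods are placed on it (syntax may vary by release)
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "unschedulable": true}'

# ... drain pods and work on the node ...

# Make the node schedulable again
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "unschedulable": false}'
```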
14 changes: 12 additions & 2 deletions docs/admin/node.md
@@ -46,7 +46,7 @@ Documentation for other releases can be found at
- [Node Info](#node-info)
- [Node Management](#node-management)
- [Node Controller](#node-controller)
- [Self-Registration of nodes](#self-registration-of-nodes)
- [Self-Registration of Nodes](#self-registration-of-nodes)
- [Manual Node Administration](#manual-node-administration)
- [Node capacity](#node-capacity)

@@ -177,7 +177,7 @@ join a node to a Kubernetes cluster, you as an admin need to make sure proper services are
running in the node. In the future, we plan to automatically provision some node
services.

### Self-Registration of nodes
### Self-Registration of Nodes

When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to
register itself with the API server. This is the preferred pattern, used by most distros.
@@ -191,6 +191,16 @@ For self-registration, the kubelet is started with the following options:
Currently, any kubelet is authorized to create/modify any node resource, but in practice it only creates/modifies
its own. (In the future, we plan to limit authorization to only allow a kubelet to modify its own Node resource.)

If your cluster runs short on resources and it is running in Node self-registration mode, you can easily add more machines. On GCE or GKE this is done by resizing the Instance Group that manages your Nodes, either by changing the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` page of the [Google Cloud Console](https://console.developers.google.com) or with the gcloud CLI:

```
gcloud preview managed-instance-groups --zone compute-zone resize my-cluster-minion-group --new-size 42
```

The Instance Group will take care of putting the appropriate image on the new machines and starting them, and the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, the system will randomly choose Nodes to kill.

In other environments you may need to configure the machine yourself, and tell the Kubelet on which machine the API server is running.
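As an illustration, a manually configured node might start the kubelet along these lines (the flag values and master address below are assumptions; check `kubelet --help` on your release for the exact flags):

```
# Hypothetical master address; point the kubelet at your API server and let it self-register
kubelet --api-servers=https://my-master:6443 --register-node=true
```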

#### Manual Node Administration

A cluster administrator can create and modify Node objects.
14 changes: 14 additions & 0 deletions docs/user-guide/introspection-and-debugging.md
@@ -181,6 +181,20 @@ Here you can see the event generated by the scheduler saying that the Pod failed

To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
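For example, to scale a Replication Controller down to three replicas (the controller name `my-rc` is hypothetical):

```
kubectl scale rc my-rc --replicas=3
```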

Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events, you can use

```
kubectl get events
```

but you have to remember that events are namespaced. This means that if you're interested in events for some namespaced object (e.g. what happened with Pods in namespace `my-namespace`) you need to explicitly provide a namespace to the command:

```
kubectl get events --namespace=my-namespace
```

To see events from all namespaces, you can use the `--all-namespaces` argument.
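For example, to see recent events across the whole cluster in one listing:

```
kubectl get events --all-namespaces
```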

In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`: essentially all of the information the system has about the Pod. Here you will see things like annotations (key-value metadata without the label restrictions, used internally by Kubernetes system components), restart policy, ports, and volumes.

```yaml
Expand Down
