Minion->Node rename: docs/ machine names only, except gce/aws for #1111 #18177

Merged (1 commit, Dec 11, 2015)
44 changes: 22 additions & 22 deletions docs/admin/static-pods.md
@@ -49,17 +49,17 @@ The configuration files are just standard pod definition in json or yaml format

For example, this is how to start a simple web server as a static pod:

-1. Choose a node where we want to run the static pod. In this example, it's `my-minion1`.
+1. Choose a node where we want to run the static pod. In this example, it's `my-node1`.

```console
-[joe@host ~] $ ssh my-minion1
+[joe@host ~] $ ssh my-node1
```

2. Choose a directory, say `/etc/kubernetes.d`, and place a web server pod definition there, e.g. `/etc/kubernetes.d/static-web.yaml`:

```console
-[root@my-minion1 ~] $ mkdir /etc/kubernetes.d/
-[root@my-minion1 ~] $ cat <<EOF >/etc/kubernetes.d/static-web.yaml
+[root@my-node1 ~] $ mkdir /etc/kubernetes.d/
+[root@my-node1 ~] $ cat <<EOF >/etc/kubernetes.d/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
@@ -88,7 +88,7 @@ For example, this is how to start a simple web server as a static pod:
3. Restart kubelet. On Fedora 21, this is:

```console
-[root@my-minion1 ~] $ systemctl restart kubelet
+[root@my-node1 ~] $ systemctl restart kubelet
```
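
For illustration, here is a minimal sketch of what the `static-web.yaml` heredoc from step 2 might contain; the pod definition is truncated in this diff, so the label, image, and port below are assumptions rather than the exact file contents:

```sh
# Illustrative sketch only: the label, image, and port are assumptions,
# not the exact contents of the file shown in this diff.
cat <<EOF >/etc/kubernetes.d/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
EOF
```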

## Pods created via HTTP
@@ -100,9 +100,9 @@ Kubelet periodically downloads a file specified by `--manifest-url=<URL>` argume
When kubelet starts, it automatically starts all pods defined in the directory specified in `--config=` or at the URL specified in `--manifest-url=`, i.e. our static-web. (It may take some time to pull the nginx image, so be patient…):

```console
-[joe@my-minion1 ~] $ docker ps
+[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
-f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-minion1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
+f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```
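
Relating the output above back to the flags mentioned earlier, here is a hedged sketch of a kubelet invocation using `--config=` and `--manifest-url=`; the directory path and URL are illustrative assumptions, and a real invocation would carry additional flags:

```sh
# Illustrative sketch only: the manifest directory and URL are assumptions.
kubelet --config=/etc/kubernetes.d/ \
        --manifest-url=http://config.example.com/static-web.yaml
```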

If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
@@ -111,7 +111,7 @@ If we look at our Kubernetes API server (running on host `my-master`), we see th
[joe@host ~] $ ssh my-master
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 role=myrole Running 11 minutes
+static-web-my-node1 172.17.0.3 my-node1/192.168.100.71 role=myrole Running 11 minutes
web nginx Running 11 minutes
```

@@ -120,20 +120,20 @@ Labels from the static pod are propagated into the mirror-pod and can be used as
Notice that we cannot delete the pod via the API server (e.g. with the [`kubectl`](../user-guide/kubectl/kubectl.md) command); the kubelet simply won't remove it.

```console
-[joe@my-master ~] $ kubectl delete pod static-web-my-minion1
-pods/static-web-my-minion1
+[joe@my-master ~] $ kubectl delete pod static-web-my-node1
+pods/static-web-my-node1
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST ...
-static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 ...
+static-web-my-node1 172.17.0.3 my-node1/192.168.100.71 ...
```
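
As a side note, the `role=myrole` label visible in the mirror-pod listing above is propagated from the static pod, so it can be used as a selector; a minimal sketch:

```sh
# Illustrative: select the mirror pod by the label propagated from the static pod.
kubectl get pods -l role=myrole
```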

-Back to our `my-minion1` host, we can try to stop the container manually and see that the kubelet automatically restarts it after a while:
+Back to our `my-node1` host, we can try to stop the container manually and see that the kubelet automatically restarts it after a while:

```console
-[joe@host ~] $ ssh my-minion1
-[joe@my-minion1 ~] $ docker stop f6d05272b57e
-[joe@my-minion1 ~] $ sleep 20
-[joe@my-minion1 ~] $ docker ps
+[joe@host ~] $ ssh my-node1
+[joe@my-node1 ~] $ docker stop f6d05272b57e
+[joe@my-node1 ~] $ sleep 20
+[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ...
```
@@ -143,13 +143,13 @@ CONTAINER ID IMAGE COMMAND CREATED ...
The running kubelet periodically scans the configured directory (`/etc/kubernetes.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.

```console
-[joe@my-minion1 ~] $ mv /etc/kubernetes.d/static-web.yaml /tmp
-[joe@my-minion1 ~] $ sleep 20
-[joe@my-minion1 ~] $ docker ps
+[joe@my-node1 ~] $ mv /etc/kubernetes.d/static-web.yaml /tmp
+[joe@my-node1 ~] $ sleep 20
+[joe@my-node1 ~] $ docker ps
// no nginx container is running
-[joe@my-minion1 ~] $ mv /tmp/static-web.yaml /etc/kubernetes.d/
-[joe@my-minion1 ~] $ sleep 20
-[joe@my-minion1 ~] $ docker ps
+[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubernetes.d/
+[joe@my-node1 ~] $ sleep 20
+[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```
12 changes: 6 additions & 6 deletions docs/design/event_compression.md
@@ -119,17 +119,17 @@ Sample kubectl output

```console
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE
-Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-minion-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Starting kubelet.
-Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-1.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-1.c.saad-dev-vms.internal} Starting kubelet.
-Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-3.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-3.c.saad-dev-vms.internal} Starting kubelet.
-Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-2.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-2.c.saad-dev-vms.internal} Starting kubelet.
+Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-node-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-4.c.saad-dev-vms.internal} Starting kubelet.
+Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-1.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-1.c.saad-dev-vms.internal} Starting kubelet.
+Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-3.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-3.c.saad-dev-vms.internal} Starting kubelet.
+Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-2.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-2.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-influx-grafana-controller-0133o Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 elasticsearch-logging-controller-fplln Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 kibana-logging-controller-gziey Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 skydns-ls6k1 Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
-Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest"
-Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-minion-4.c.saad-dev-vms.internal
+Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-node-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest"
+Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-node-4.c.saad-dev-vms.internal
```

This demonstrates how what would have been 20 separate entries (5 pods, each reporting a scheduling failure 4 times) is collapsed/compressed down to 5 entries, each with a count of 4.
12 changes: 6 additions & 6 deletions docs/devel/developer-guides/vagrant.md
@@ -139,9 +139,9 @@ You may need to build the binaries first, you can do this with `make`
$ ./cluster/kubectl.sh get nodes

NAME LABELS STATUS
-kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready
-kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready
-kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready
+kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready
+kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready
+kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready
```

### Interacting with your Kubernetes cluster with the `kube-*` scripts.
@@ -206,9 +206,9 @@ Your cluster is running, you can list the nodes in your cluster:
$ ./cluster/kubectl.sh get nodes

NAME LABELS STATUS
-kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready
-kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready
-kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready
+kubernetes-node-0whl kubernetes.io/hostname=kubernetes-node-0whl Ready
+kubernetes-node-4jdf kubernetes.io/hostname=kubernetes-node-4jdf Ready
+kubernetes-node-epbe kubernetes.io/hostname=kubernetes-node-epbe Ready
```

Now start running some containers!
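
For instance (a hypothetical example, not taken from the guide itself), one could start an nginx pod and list it with the same wrapper script; the pod name and image are placeholders:

```sh
# Hypothetical example: the pod name and image are placeholders.
./cluster/kubectl.sh run my-nginx --image=nginx
./cluster/kubectl.sh get pods
```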
6 changes: 3 additions & 3 deletions docs/devel/flaky-tests.md
@@ -80,9 +80,9 @@ You can use this script to automate checking for failures, assuming your cluster
```sh
echo "" > output.txt
for i in {1..4}; do
echo "Checking kubernetes-minion-${i}"
echo "kubernetes-minion-${i}:" >> output.txt
gcloud compute ssh "kubernetes-minion-${i}" --command="sudo docker ps -a" >> output.txt
echo "Checking kubernetes-node-${i}"
echo "kubernetes-node-${i}:" >> output.txt
gcloud compute ssh "kubernetes-node-${i}" --command="sudo docker ps -a" >> output.txt
done
grep "Exited ([^0])" output.txt
```
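
As a possible follow-up step (not part of the original script), the logs of any container the `grep` flags could be pulled the same way; the node name and container ID below are placeholders:

```sh
# Hypothetical follow-up: the node name and container ID are placeholders.
CONTAINER_ID=abcdef123456   # replace with an ID flagged by the grep above
gcloud compute ssh "kubernetes-node-1" --command="sudo docker logs ${CONTAINER_ID}"
```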
8 changes: 4 additions & 4 deletions docs/getting-started-guides/libvirt-coreos.md
@@ -181,9 +181,9 @@ $ virsh -c qemu:///system list
Id Name State
----------------------------------------------------
15 kubernetes_master running
-16 kubernetes_minion-01 running
-17 kubernetes_minion-02 running
-18 kubernetes_minion-03 running
+16 kubernetes_node-01 running
+17 kubernetes_node-02 running
+18 kubernetes_node-03 running
```

You can check that the Kubernetes cluster is working with:
@@ -208,7 +208,7 @@ Connect to `kubernetes_master`:
ssh core@192.168.10.1
```

-Connect to `kubernetes_minion-01`:
+Connect to `kubernetes_node-01`:

```sh
ssh core@192.168.10.2
8 changes: 4 additions & 4 deletions docs/getting-started-guides/logging-elasticsearch.md
@@ -77,10 +77,10 @@ $ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-minion-5oq0 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-minion-6896 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-minion-l1ds 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-minion-lz9j 1/1 Running 0 2h
+fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
+fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
+fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
+fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
10 changes: 5 additions & 5 deletions docs/getting-started-guides/logging.md
@@ -41,10 +41,10 @@ logging and DNS resolution for names of Kubernetes services:
```console
$ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
-fluentd-cloud-logging-kubernetes-minion-0f64 1/1 Running 0 32m
-fluentd-cloud-logging-kubernetes-minion-27gf 1/1 Running 0 32m
-fluentd-cloud-logging-kubernetes-minion-pk22 1/1 Running 0 31m
-fluentd-cloud-logging-kubernetes-minion-20ej 1/1 Running 0 31m
+fluentd-cloud-logging-kubernetes-node-0f64 1/1 Running 0 32m
+fluentd-cloud-logging-kubernetes-node-27gf 1/1 Running 0 32m
+fluentd-cloud-logging-kubernetes-node-pk22 1/1 Running 0 31m
+fluentd-cloud-logging-kubernetes-node-20ej 1/1 Running 0 31m
kube-dns-v3-pk22 3/3 Running 0 32m
monitoring-heapster-v1-20ej 0/1 Running 9 32m
```
@@ -215,7 +215,7 @@ Note the first container counted to 108 and then it was terminated. When the nex

```console
SELECT metadata.timestamp, structPayload.log
-FROM [mylogs.kubernetes_counter_default_count_20150611]
+FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
```

2 changes: 1 addition & 1 deletion docs/proposals/compute-resource-metrics-api.md
@@ -68,7 +68,7 @@ user via a periodically refreshing interface similar to `top` on Unix-like
systems. This info could let users assign resource limits more efficiently.

```
-$ kubectl top kubernetes-minion-abcd
+$ kubectl top kubernetes-node-abcd
POD CPU MEM
monitoring-heapster-abcde 0.12 cores 302 MB
kube-ui-v1-nd7in 0.07 cores 130 MB