
Commit

Merge pull request kubernetes#11583 from thockin/docs-tick-tick-tick
Collected markdown fixes around syntax.
krousey committed Jul 20, 2015
2 parents 2d88675 + 995a7ae commit 960c6a2
Showing 23 changed files with 43 additions and 80 deletions.
2 changes: 1 addition & 1 deletion docs/admin/admission-controllers.md
@@ -158,7 +158,7 @@ Yes.

For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters):

```shell
```
--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```

2 changes: 1 addition & 1 deletion docs/admin/salt.md
@@ -109,7 +109,7 @@ These keys may be leveraged by the Salt sls files to branch behavior.

In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it is sometimes important to distinguish behavior based on operating system, using if branches like the following.

```
```jinja
{% if grains['os_family'] == 'RedHat' %}
// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
{% else %}
1 change: 0 additions & 1 deletion docs/design/admission_control_resource_quota.md
@@ -100,7 +100,6 @@ type ResourceQuotaList struct {
// Items is a list of ResourceQuota objects
Items []ResourceQuota `json:"items"`
}

```

## AdmissionControl plugin: ResourceQuota
1 change: 0 additions & 1 deletion docs/design/event_compression.md
@@ -103,7 +103,6 @@ Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest"
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-minion-4.c.saad-dev-vms.internal

```

This demonstrates what would have been 20 separate entries (indicating scheduling failure) collapsed/compressed down to 5 entries.
6 changes: 2 additions & 4 deletions docs/getting-started-guides/aws-coreos.md
@@ -117,7 +117,7 @@ Gather the public and private IPs for the master node:
aws ec2 describe-instances --instance-id <instance-id>
```

```
```json
{
"Reservations": [
{
@@ -131,7 +131,6 @@ aws ec2 describe-instances --instance-id <instance-id>
},
"PublicIpAddress": "54.68.97.117",
"PrivateIpAddress": "172.31.9.9",
...
```

#### Update the node.yaml cloud-config
@@ -222,7 +221,7 @@ Gather the public IP address for the worker node.
aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>'
```

```
```json
{
"Reservations": [
{
@@ -235,7 +234,6 @@ aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>'
"Name": "running"
},
"PublicIpAddress": "54.68.97.117",
...
```

Visit the public IP address in your browser to view the running pod.
1 change: 0 additions & 1 deletion docs/getting-started-guides/fedora/fedora_manual_config.md
@@ -165,7 +165,6 @@ $ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown

```

Please note that in the above, it only creates a representation for the node
4 changes: 0 additions & 4 deletions docs/getting-started-guides/logging-elasticsearch.md
@@ -67,7 +67,6 @@ NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch

```

The node level Fluentd collector pods and the Elasticsearch pods used to ingest cluster logs and the pod for the Kibana
@@ -86,7 +85,6 @@ kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h

```

Here we see that for a four node cluster there is a `fluent-elasticsearch` pod running which gathers
@@ -137,7 +135,6 @@ KubeUI is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/
Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

```

Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL we need to find out
@@ -204,7 +201,6 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
},
"tagline" : "You Know, for Search"
}

```

Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
7 changes: 0 additions & 7 deletions docs/getting-started-guides/scratch.md
@@ -661,13 +661,11 @@ Next, verify that kubelet has started a container for the apiserver:
```console
$ sudo docker ps | grep apiserver:
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
```

Then try to connect to the apiserver:

```console
$ echo $(curl -s http://localhost:8080/healthz)
ok
$ curl -s http://localhost:8080/api
@@ -677,7 +675,6 @@ $ curl -s http://localhost:8080/api
"v1"
]
}
```

If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
@@ -689,7 +686,6 @@ Otherwise, you will need to manually create node objects.
Complete this template for the scheduler pod:

```json
{
"kind": "Pod",
"apiVersion": "v1",
@@ -719,7 +715,6 @@ Complete this template for the scheduler pod:
]
}
}
```

Optionally, you may want to mount `/var/log` as well and redirect output there.
@@ -746,7 +741,6 @@ Flags to consider using with controller manager.
Template for controller manager pod:

```json
{
"kind": "Pod",
"apiVersion": "v1",
@@ -802,7 +796,6 @@ Template for controller manager pod:
]
}
}
```


11 changes: 0 additions & 11 deletions docs/getting-started-guides/ubuntu.md
@@ -97,8 +97,6 @@ export NUM_MINIONS=${NUM_MINIONS:-3}
export SERVICE_CLUSTER_IP_RANGE=11.1.1.0/24

export FLANNEL_NET=172.16.0.0/16


```

The first variable `nodes` defines all your cluster nodes; the MASTER node comes first, and nodes are separated by blank spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3> `
@@ -124,13 +122,11 @@ After all the above variable being set correctly. We can use below command in cl
The script automatically copies the binaries and config files to all the machines via scp and starts the k8s service on them. The only thing you need to do is type the sudo password when prompted. The current machine name is shown, as below, so you will not type in the wrong password.

```console

Deploying minion on machine 10.10.103.223

...

[sudo] password to copy files and start minion:

```

If all things go right, you will see the message below in the console
@@ -143,16 +139,13 @@ You can also use `kubectl` command to see if the newly created k8s is working co
For example, use `$ kubectl get nodes` to see if all your nodes are in Ready status. It may take some time for the nodes to become ready, as shown below.

```console

NAME LABELS STATUS

10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready

10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready

10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready


```

You can also run the kubernetes [guestbook example](../../examples/guestbook/) to build a redis backend cluster on the k8s.
@@ -165,15 +158,13 @@ After the previous parts, you will have a working k8s cluster, this part will te
DNS is configured in cluster/ubuntu/config-default.sh.

```sh

ENABLE_CLUSTER_DNS=true

DNS_SERVER_IP="192.168.3.10"

DNS_DOMAIN="cluster.local"

DNS_REPLICAS=1

```

`DNS_SERVER_IP` defines the IP of the DNS server, which must be within the service_cluster_ip_range.
@@ -183,11 +174,9 @@ The `DNS_REPLICAS` describes how many dns pod running in the cluster.
After all the above variables have been set, just type the command below:

```console

$ cd cluster/ubuntu

$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh

```

After some time, you can use `$ kubectl get pods` to see that the dns pod is running in the cluster. Done!
1 change: 0 additions & 1 deletion docs/user-guide/debugging-services.md
@@ -195,7 +195,6 @@ or:

```console
u@pod$ echo $HOSTNAMES_SERVICE_HOST
```

So the first thing to check is whether that `Service` actually exists:
1 change: 0 additions & 1 deletion docs/user-guide/kubeconfig-file.md
@@ -151,7 +151,6 @@ users:
myself:
username: admin
password: secret
```

and a kubeconfig file that looks like this
1 change: 0 additions & 1 deletion docs/user-guide/persistent-volumes/README.md
@@ -75,7 +75,6 @@ They just know they can rely on their claim to storage and can manage its lifecy
Claims must be created in the same namespace as the pods that use them.

```console

$ kubectl create -f docs/user-guide/persistent-volumes/claims/claim-01.yaml

$ kubectl get pvc
1 change: 0 additions & 1 deletion examples/cassandra/README.md
@@ -287,7 +287,6 @@ UN 10.244.3.3 51.28 KB 256 51.0% dafe3154-1d67-42e1-ac1d-78e
For those of you who are impatient, here is the summary of the commands we ran in this tutorial.

```sh
# create a service to track all cassandra nodes
kubectl create -f examples/cassandra/cassandra-service.yaml
2 changes: 1 addition & 1 deletion examples/celery-rabbitmq/README.md
@@ -83,7 +83,7 @@ spec:
To start the service, run:
```shell
```sh
$ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml
```

8 changes: 0 additions & 8 deletions examples/elasticsearch/README.md
@@ -111,7 +111,6 @@ metadata:
namespace: NAMESPACE
data:
token: "TOKEN"
```

Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the base64 encoded
@@ -126,7 +125,6 @@ $ kubectl config view
...
$ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64
eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=
```

resulting in the file:
@@ -139,23 +137,20 @@ metadata:
namespace: mytunes
data:
token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK="
```

which can be used to create the secret in your namespace:

```console
kubectl create -f examples/elasticsearch/apiserver-secret.yaml --namespace=mytunes
secrets/apiserver-secret
```

Now you are ready to create the replication controller which will then create the pods:

```console
$ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes
replicationcontrollers/music-db
```

It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch
@@ -184,7 +179,6 @@ Let's create the service with an external load balancer:
```console
$ kubectl create -f examples/elasticsearch/music-service.yaml --namespace=mytunes
services/music-server
```

Let's see what we've got:
@@ -301,7 +295,6 @@ music-db-u1ru3 1/1 Running 0 38s
music-db-wnss2 1/1 Running 0 1m
music-db-x7j2w 1/1 Running 0 1m
music-db-zjqyv 1/1 Running 0 1m
```

Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:
@@ -359,7 +352,6 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true | grep name
"name" : "mytunes-db"
"vm_name" : "OpenJDK 64-Bit Server VM",
"name" : "eth0",
```


2 changes: 1 addition & 1 deletion examples/explorer/README.md
@@ -46,7 +46,7 @@ Currently, you can look at:

Example from command line (the DNS lookup looks better from a web browser):

```
```console
$ kubectl create -f examples/explorer/pod.json
$ kubectl proxy &
Starting to serve on localhost:8001
12 changes: 6 additions & 6 deletions examples/glusterfs/README.md
@@ -63,13 +63,13 @@ The "IP" field should be filled with the address of a node in the Glusterfs serv

Create the endpoints,

```shell
```sh
$ kubectl create -f examples/glusterfs/glusterfs-endpoints.json
```

You can verify that the endpoints are successfully created by running

```shell
```sh
$ kubectl get endpoints
NAME ENDPOINTS
glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
@@ -79,7 +79,7 @@ glusterfs-cluster 10.240.106.152:1,10.240.79.157:1

The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.

```js
```json
{
"name": "glusterfsvol",
"glusterfs": {
@@ -98,13 +98,13 @@ The parameters are explained as the followings.

Create a pod that has a container using a Glusterfs volume,

```shell
```sh
$ kubectl create -f examples/glusterfs/glusterfs-pod.json
```

You can verify that the pod is running:

```shell
```sh
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
glusterfs 1/1 Running 0 3m
@@ -115,7 +115,7 @@ $ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'

You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted,

```shell
```sh
$ mount | grep kube_vol
10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
```