minor docs/error msg cleanup
amygdala committed Aug 6, 2014
1 parent aa3ac32 commit f9bbddf
Showing 5 changed files with 23 additions and 9 deletions.
2 changes: 1 addition & 1 deletion api/examples/pod.json
@@ -29,4 +29,4 @@
"name": "foo"
}
}

4 changes: 2 additions & 2 deletions cluster/azure/kube-up.sh
@@ -115,15 +115,15 @@ for (( i=0; i<${#MINION_NAMES[@]}; i++)); do
# Make sure docker is installed
ssh -i $AZ_SSH_KEY -p ${ssh_ports[$i]} $AZ_CS.cloudapp.net which docker > /dev/null
if [ "$?" != "0" ]; then
echo "Docker failed to install on ${MINION_NAMES[$i]} your cluster is unlikely to work correctly"
echo "Docker failed to install on ${MINION_NAMES[$i]}. Your cluster is unlikely to work correctly."
echo "Please run ./cluster/kube-down.sh and re-create the cluster. (sorry!)"
exit 1
fi

# Make sure the kubelet is running
ssh -i $AZ_SSH_KEY -p ${ssh_ports[$i]} $AZ_CS.cloudapp.net /etc/init.d/kubelet status
if [ "$?" != "0" ]; then
echo "Kubelet failed to install on ${MINION_NAMES[$i]} your cluster is unlikely to work correctly"
echo "Kubelet failed to install on ${MINION_NAMES[$i]}. Your cluster is unlikely to work correctly."
echo "Please run ./cluster/kube-down.sh and re-create the cluster. (sorry!)"
exit 1
fi
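A style note, not part of this commit: the `$?` checks above can instead test the ssh command's exit status directly. A minimal sketch of the equivalent form:

```
# Illustrative sketch, not in this commit: test the ssh command's exit
# status directly instead of inspecting $? afterwards.
if ! ssh -i $AZ_SSH_KEY -p ${ssh_ports[$i]} $AZ_CS.cloudapp.net which docker > /dev/null; then
  echo "Docker failed to install on ${MINION_NAMES[$i]}. Your cluster is unlikely to work correctly."
  echo "Please run ./cluster/kube-down.sh and re-create the cluster. (sorry!)"
  exit 1
fi
```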
6 changes: 3 additions & 3 deletions cluster/gce/util.sh
@@ -230,14 +230,14 @@ function kube-up {
# Make sure docker is installed
gcutil ssh ${MINION_NAMES[$i]} which docker > /dev/null
if [ "$?" != "0" ]; then
echo "Docker failed to install on ${MINION_NAMES[$i]} your cluster is unlikely to work correctly"
echo "Docker failed to install on ${MINION_NAMES[$i]}. Your cluster is unlikely to work correctly."
echo "Please run ./cluster/kube-down.sh and re-create the cluster. (sorry!)"
exit 1
fi

# Make sure the kubelet is healthy
if [ "$(curl --insecure --user ${user}:${passwd} https://${KUBE_MASTER_IP}/proxy/minion/${MINION_NAMES[$i]}/healthz)" != "ok" ]; then
echo "Kubelet failed to install on ${MINION_NAMES[$i]} your cluster is unlikely to work correctly"
echo "Kubelet failed to install on ${MINION_NAMES[$i]}. Your cluster is unlikely to work correctly."
echo "Please run ./cluster/kube-down.sh and re-create the cluster. (sorry!)"
exit 1
else
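The healthz probe above runs a single curl, so a kubelet that is still starting fails the check immediately. A hypothetical retry loop (a sketch, not part of this commit or of kube-up) might look like:

```
# Hypothetical retry wrapper around the healthz probe (not in this commit):
# poll a few times before declaring the kubelet unhealthy.
status=""
for attempt in 1 2 3 4 5; do
  status=$(curl --insecure --user ${user}:${passwd} https://${KUBE_MASTER_IP}/proxy/minion/${MINION_NAMES[$i]}/healthz)
  [ "$status" = "ok" ] && break
  sleep 5
done
if [ "$status" != "ok" ]; then
  echo "Kubelet failed to install on ${MINION_NAMES[$i]}. Your cluster is unlikely to work correctly."
  exit 1
fi
```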
@@ -254,7 +254,7 @@ function kube-up {
echo
echo "Security note: The server above uses a self signed certificate. This is"
echo " subject to \"Man in the middle\" type attacks."

}

# Delete a kubernetes cluster
15 changes: 14 additions & 1 deletion docs/getting-started-guides/gce.md
@@ -55,7 +55,8 @@ cluster/kubecfg.sh rm myNginx
### Running a container (more complete version)


- Assuming you've run `hack/dev-build-and-up.sh` and `hack/build-go.sh`:
+ Assuming you've run `hack/dev-build-and-up.sh` and `hack/build-go.sh`, you
+ can create a pod like this:


```
@@ -99,6 +100,18 @@ Where pod.json contains something like:
}
```
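The hunk above collapses most of the manifest, showing only its closing lines. For orientation only, here is a rough sketch of a v1beta1-era pod manifest; the ids, image, and ports are assumptions, not the exact contents of `api/examples/pod.json`:

```
# Illustrative sketch only: the ids, container image, and ports below are
# assumptions, not the exact contents of api/examples/pod.json.
cat > pod.json <<'EOF'
{
  "id": "php",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "php",
      "containers": [{
        "name": "nginx",
        "image": "dockerfile/nginx",
        "ports": [{"containerPort": 80, "hostPort": 8080}]
      }]
    }
  },
  "labels": {
    "name": "foo"
  }
}
EOF
```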

+ You can see your cluster's pods:
+
+ ```
+ cluster/kubecfg.sh list pods
+ ```
+
+ and delete the pod you just created:
+
+ ```
+ cluster/kubecfg.sh delete pods/php
+ ```

Look in `api/examples/` for more examples

### Tearing down the cluster
5 changes: 3 additions & 2 deletions docs/getting-started-guides/vagrant.md
@@ -132,6 +132,7 @@ hack/e2e-test.sh

If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.

- #### I changed Kubernetes code, but its not running!
+ #### I changed Kubernetes code, but it's not running!

- Are you sure there was no build error? After running $ vagrant provision, scroll up and ensure that each Salt state was completed successfully on each box in the cluster. Its very likely you see a build error due to an error in your source files!
+ Are you sure there was no build error? After running $ vagrant provision, scroll up and ensure that each Salt state was completed successfully on each box in the cluster.
+ It's very likely you see a build error due to an error in your source files!
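
If the provisioning output has already scrolled away, one way to surface failures (an illustrative sketch, not a documented workflow) is to re-run provisioning and filter the log:

```
# Illustrative: re-run provisioning, keep a copy of the log, and grep it
# for failed Salt states or build errors.
vagrant provision 2>&1 | tee provision.log | grep -iE 'failed|error'
```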
