
Commit

Fix per reviewer comments.
David Oppenheimer committed Jul 17, 2015
1 parent 820c4cb commit d64250c
Showing 3 changed files with 10 additions and 9 deletions.
6 changes: 4 additions & 2 deletions docs/admin/cluster-troubleshooting.md
@@ -31,8 +31,10 @@ Documentation for other releases can be found at

<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Troubleshooting
-Most of the time, if you encounter problems, it is your application that is the root cause. For application
-problems please see the [application troubleshooting guide](../user-guide/application-troubleshooting.md). You may also visit [troubleshooting document](../troubleshooting.md) for more information.
+This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
+problem you are experiencing. See
+the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
+You may also visit the [troubleshooting document](../troubleshooting.md) for more information.

## Listing your cluster
The first thing to debug in your cluster is if your nodes are all registered correctly.
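To make that first check concrete, the sketch below filters a made-up sample of `kubectl get nodes` output for nodes whose STATUS column is not Ready; those are the nodes to look at more closely (for example with `kubectl describe node <name>`). The node names and columns are hypothetical, not output from a real cluster:

```python
# Hypothetical sample of `kubectl get nodes` output; on a real cluster,
# run `kubectl get nodes` and inspect the STATUS column directly.
SAMPLE = """\
NAME         LABELS                              STATUS
node-1.c.a   kubernetes.io/hostname=node-1.c.a   Ready
node-2.c.a   kubernetes.io/hostname=node-2.c.a   NotReady
"""

def unready_nodes(listing):
    """Return names of nodes whose STATUS column is not Ready."""
    rows = listing.strip().splitlines()[1:]  # skip the header row
    return [r.split()[0] for r in rows if r.split()[-1] != "Ready"]

print(unready_nodes(SAMPLE))  # → ['node-2.c.a']
```

Any name this prints is a node that registered but is not reporting a healthy state.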
9 changes: 4 additions & 5 deletions docs/admin/node.md
@@ -112,7 +112,7 @@ the following conditions mean the node is in sane state:
```

If the Status of the Ready condition
-is Unknown or False for more than five minutes, then all of the Pods on the node are terminated.
+is Unknown or False for more than five minutes, then all of the Pods on the node are terminated by the Node Controller.
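The five-minute rule this hunk describes can be sketched as follows (illustrative Python only, not the actual Node Controller code; function and variable names are made up):

```python
from datetime import datetime, timedelta

GRACE = timedelta(minutes=5)  # the five-minute window described above

def should_evict_pods(ready_status, last_transition, now):
    """Pods are terminated once the Ready condition has been
    Unknown or False for more than five minutes."""
    if ready_status not in ("Unknown", "False"):
        return False
    return now - last_transition > GRACE

t0 = datetime(2015, 7, 17, 12, 0)
print(should_evict_pods("Unknown", t0, t0 + timedelta(minutes=6)))  # True
print(should_evict_pods("False",   t0, t0 + timedelta(minutes=2)))  # False
```

A node that recovers (Ready becomes True) before the window elapses keeps its pods.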

### Node Capacity

@@ -128,8 +128,8 @@ The information is gathered by Kubelet from the node.
## Node Management

Unlike [Pods](../user-guide/pods.md) and [Services](../user-guide/services.md), a Node is not inherently
-created by Kubernetes: it is either created from cloud providers like Google Compute Engine,
-or from your physical or virtual machines. What this means is that when
+created by Kubernetes: it is either taken from cloud providers like Google Compute Engine,
+or from your pool of physical or virtual machines. What this means is that when
Kubernetes creates a node, it is really just creating an object that represents the node in its internal state.
After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:
@@ -154,8 +154,7 @@ ignored for any cluster activity, until it becomes valid. Note that Kubernetes
will keep the object for the invalid node unless it is explicitly deleted by the client, and it will keep
checking to see if it becomes valid.
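As a toy model of that bookkeeping, the object for an invalid node is kept and periodically re-checked rather than deleted. The node name and the recheck trigger below are made up for illustration:

```python
# Kubernetes keeps the object for an invalid node and keeps re-checking it;
# only an explicit delete by the client removes it.
nodes = {"10.1.2.3": {"valid": False}}  # registered, but health checks fail

def recheck(name, reachable):
    """Re-run the validity check; the object is kept either way."""
    nodes[name]["valid"] = reachable

recheck("10.1.2.3", reachable=False)
assert "10.1.2.3" in nodes           # still tracked while invalid
recheck("10.1.2.3", reachable=True)
print(nodes["10.1.2.3"]["valid"])    # True once the node becomes valid
```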

-Currently, there are two agents that interacts with the Kubernetes node interface:
-Node Controller and Kube Admin.
+Currently, there are three components that interact with the Kubernetes node interface: Node Controller, Kubelet, and kubectl.

### Node Controller

4 changes: 2 additions & 2 deletions docs/admin/salt.md
@@ -105,7 +105,7 @@ Key | Value

These keys may be leveraged by the Salt sls files to branch behavior.

-In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, its important to sometimes distinguish behavior based on operating system using if branches like the following.
+In addition, a cluster may be running a Debian based operating system or a Red Hat based operating system (CentOS, Fedora, RHEL, etc.). As a result, it's sometimes important to distinguish behavior based on operating system using if branches like the following.

```
{% if grains['os_family'] == 'RedHat' %}
...
```

@@ -121,7 +121,7 @@ In addition, a cluster may be running a Debian based operating system or Red Hat

## Future enhancements (Networking)

-Per pod IP configuration is provider-specific, so when making networking changes, its important to sandbox these as all providers may not use the same mechanisms (iptables, openvswitch, etc.)
+Per pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these, as not all providers use the same mechanisms (iptables, openvswitch, etc.).

We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.

