
rewrite all links to issues to k8s links
mikedanese committed Aug 6, 2015
commit fe6b15b (1 parent: 7c9bbef)
Showing 40 changed files with 66 additions and 66 deletions.
4 changes: 2 additions & 2 deletions build/README.md
@@ -6,7 +6,7 @@ To build Kubernetes you need to have access to a Docker installation through either

1. Be running Docker. 2 options supported/tested:
1. **Mac OS X** The best way to go is to use `boot2docker`. See instructions [here](https://docs.docker.com/installation/mac/).
- **Note**: You will want to set the boot2docker vm to have at least 3GB of initial memory or building will likely fail. (See: [#11852](https://github.com/GoogleCloudPlatform/kubernetes/issues/11852))
+ **Note**: You will want to set the boot2docker vm to have at least 3GB of initial memory or building will likely fail. (See: [#11852](http://issue.k8s.io/11852))
2. **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS. The scripts here assume that they are using a local Docker server and that they can "reach around" docker and grab results directly from the file system.
2. Have python installed. Pretty much it is installed everywhere at this point so you can probably ignore this.
3. *Optional* For uploading your release to Google Cloud Storage, have the [Google Cloud SDK](https://developers.google.com/cloud/sdk/) installed and configured.
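A minimal sketch of resizing the boot2docker VM per the 3GB note above (not part of this commit). It assumes your boot2docker version supports the `--memory` flag on `init`; check `boot2docker help` first:

```sh
# Recreate the boot2docker VM with 4GB of RAM (this destroys the old VM).
boot2docker delete               # remove the existing, undersized VM
boot2docker init --memory=4096   # provision a fresh VM with 4096MB of memory
boot2docker up                   # boot it, then set the DOCKER_HOST it prints
```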
@@ -89,7 +89,7 @@ These are in no particular order

* [X] Harmonize with scripts in `hack/`. How much do we support building outside of Docker and these scripts?
* [X] Deprecate/replace most of the stuff in the hack/
- * [ ] Finish support for the Dockerized runtime. Issue [#19](https://github.com/GoogleCloudPlatform/kubernetes/issues/19). A key issue here is to make this fast/light enough that we can use it for development workflows.
+ * [ ] Finish support for the Dockerized runtime. Issue [#19](http://issue.k8s.io/19). A key issue here is to make this fast/light enough that we can use it for development workflows.


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/README.md?pixel)]()
4 changes: 2 additions & 2 deletions cluster/gce/util.sh
@@ -636,7 +636,7 @@ function kube-up {
# Generate a bearer token for this cluster. We push this separately
# from the other cluster variables so that the client (this
# computer) can forget it later. This should disappear with
- # https://github.com/GoogleCloudPlatform/kubernetes/issues/3168
+ # http://issue.k8s.io/3168
KUBELET_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
KUBE_PROXY_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
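The two token assignments above are dense; here is the same pipeline unrolled, one stage per line (an annotated restatement, not part of this commit; behavior is identical):

```sh
# Generate a 32-character alphanumeric bearer token, stage by stage.
dd if=/dev/urandom bs=128 count=1 2>/dev/null |  # read one 128-byte block of random data
  base64 |                                       # encode it as printable text
  tr -d "=+/" |                                  # drop base64 padding and URL-unsafe characters
  dd bs=32 count=1 2>/dev/null                   # keep only the first 32 characters
```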

@@ -1083,7 +1083,7 @@ function kube-push {
# is solved (because that's blocking automatic dynamic nodes from
# working). The node-kube-env has to be composed with the KUBELET_TOKEN
# and KUBE_PROXY_TOKEN. Ideally we would have
- # https://github.com/GoogleCloudPlatform/kubernetes/issues/3168
+ # http://issue.k8s.io/3168
# implemented before then, though, so avoiding this mess until then.

echo
@@ -87,7 +87,7 @@ def command_succeeded(self, response, result):
'message', '').startswith('The requested resource does not exist'):
# There's something fishy in the kube api here (0.4 dev), first time we
# go to register a new minion, we always seem to get this error.
- # https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
+ # http://issue.k8s.io/1995
time.sleep(1)
print("Retrying registration...")
raise ValueError("Registration returned 500, retry")
2 changes: 1 addition & 1 deletion cluster/saltbase/salt/kube-apiserver/init.sls
@@ -33,7 +33,7 @@
# boot, run-salt will install kube-apiserver.manifest files to
# kubelet config directory before the installation of proper version
# kubelet. Please see
- # https://github.com/GoogleCloudPlatform/kubernetes/issues/10122#issuecomment-114566063
+ # http://issue.k8s.io/10122#issuecomment-114566063
# for a detailed explanation of this very issue.
/etc/kubernetes/manifests/kube-apiserver.manifest:
file.managed:
2 changes: 1 addition & 1 deletion cluster/saltbase/salt/kube-controller-manager/init.sls
@@ -2,7 +2,7 @@
# The ordering of salt states for service docker, kubelet and
# master-addon below is very important to avoid the race between
# salt restart docker or kubelet and kubelet start master components.
- # Please see https://github.com/GoogleCloudPlatform/kubernetes/issues/10122#issuecomment-114566063
+ # Please see http://issue.k8s.io/10122#issuecomment-114566063
# for a detailed explanation of this very issue.
/etc/kubernetes/manifests/kube-controller-manager.manifest:
file.managed:
2 changes: 1 addition & 1 deletion cluster/saltbase/salt/kube-scheduler/init.sls
@@ -2,7 +2,7 @@
# The ordering of salt states for service docker, kubelet and
# master-addon below is very important to avoid the race between
# salt restart docker or kubelet and kubelet start master components.
- # Please see https://github.com/GoogleCloudPlatform/kubernetes/issues/10122#issuecomment-114566063
+ # Please see http://issue.k8s.io/10122#issuecomment-114566063
# for a detailed explanation of this very issue.
/etc/kubernetes/manifests/kube-scheduler.manifest:
file.managed:
2 changes: 1 addition & 1 deletion cluster/update-storage-objects.sh
@@ -32,7 +32,7 @@ KUBECTL="${KUBE_OUTPUT_HOSTBIN}/kubectl"

# List of resources to be updated.
# TODO: Get this list of resources from server once
- # https://github.com/GoogleCloudPlatform/kubernetes/issues/2057 is fixed.
+ # http://issue.k8s.io/2057 is fixed.
declare -a resources=(
"endpoints"
"events"
2 changes: 1 addition & 1 deletion docs/admin/cluster-large.md
@@ -78,7 +78,7 @@ containers:
memory: 200Mi
```
- These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](https://github.com/GoogleCloudPlatform/kubernetes/issues/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](https://github.com/GoogleCloudPlatform/kubernetes/issues/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
+ These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
* Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
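Since usage grows roughly in proportion to cluster size, the adjustment is simple arithmetic; a hypothetical illustration (not part of this commit; all numbers are invented):

```sh
# Hypothetical: scale a limit measured on the 4-node baseline linearly
# with the target node count.
NODES=100
BASELINE_MI=200                          # addon memory limit on 4 nodes
echo "$(( BASELINE_MI * NODES / 4 ))Mi"  # prints 5000Mi for 100 nodes
```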
8 changes: 4 additions & 4 deletions docs/design/access.md
@@ -118,8 +118,8 @@ Pods configs should be largely portable between Org-run and hosted configuration
# Design

Related discussion:
- - https://github.com/GoogleCloudPlatform/kubernetes/issues/442
- - https://github.com/GoogleCloudPlatform/kubernetes/issues/443
+ - http://issue.k8s.io/442
+ - http://issue.k8s.io/443

This doc describes two security profiles:
- Simple profile: like single-user mode. Make it easy to evaluate K8s without lots of configuring accounts and policies. Protects from unauthorized users, but does not partition authorized users.
@@ -176,7 +176,7 @@ Initially:
Improvements:
- Kubelet allocates disjoint blocks of root-namespace uids for each container. This may provide some defense-in-depth against container escapes. (https://github.com/docker/docker/pull/4572)
- requires docker to integrate user namespace support, and deciding what getpwnam() does for these uids.
- - any features that help users avoid use of privileged containers (https://github.com/GoogleCloudPlatform/kubernetes/issues/391)
+ - any features that help users avoid use of privileged containers (http://issue.k8s.io/391)

### Namespaces

@@ -253,7 +253,7 @@ Policy objects may be applicable only to a single namespace or to all namespaces

## Accounting

- The API should have a `quota` concept (see https://github.com/GoogleCloudPlatform/kubernetes/issues/442). A quota object relates a namespace (and optionally a label selector) to a maximum quantity of resources that may be used (see [resources design doc](resources.md)).
+ The API should have a `quota` concept (see http://issue.k8s.io/442). A quota object relates a namespace (and optionally a label selector) to a maximum quantity of resources that may be used (see [resources design doc](resources.md)).

Initially:
- a `quota` object is immutable.
2 changes: 1 addition & 1 deletion docs/design/admission_control.md
@@ -37,7 +37,7 @@ Documentation for other releases can be found at

| Topic | Link |
| ----- | ---- |
- | Separate validation from RESTStorage | https://github.com/GoogleCloudPlatform/kubernetes/issues/2977 |
+ | Separate validation from RESTStorage | http://issue.k8s.io/2977 |

## Background

6 changes: 3 additions & 3 deletions docs/design/command_execution_port_forwarding.md
@@ -44,9 +44,9 @@ This describes an approach for providing support for:

There are several related issues/PRs:

- - [Support attach](https://github.com/GoogleCloudPlatform/kubernetes/issues/1521)
- - [Real container ssh](https://github.com/GoogleCloudPlatform/kubernetes/issues/1513)
- - [Provide easy debug network access to services](https://github.com/GoogleCloudPlatform/kubernetes/issues/1863)
+ - [Support attach](http://issue.k8s.io/1521)
+ - [Real container ssh](http://issue.k8s.io/1513)
+ - [Provide easy debug network access to services](http://issue.k8s.io/1863)
- [OpenShift container command execution proposal](https://github.com/openshift/origin/pull/576)

## Motivation
10 changes: 5 additions & 5 deletions docs/design/event_compression.md
@@ -38,7 +38,7 @@ This document captures the design of event compression.

## Background

- Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate `image_not_existing` and `container_is_waiting` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. It also appears to cause memory pressure in etcd (see [#3853](https://github.com/GoogleCloudPlatform/kubernetes/issues/3853)).
+ Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate `image_not_existing` and `container_is_waiting` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. It also appears to cause memory pressure in etcd (see [#3853](http://issue.k8s.io/3853)).

## Proposal

@@ -109,10 +109,10 @@ This demonstrates what would have been 20 separate entries (indicating scheduling

## Related Pull Requests/Issues

- * Issue [#4073](https://github.com/GoogleCloudPlatform/kubernetes/issues/4073): Compress duplicate events
- * PR [#4157](https://github.com/GoogleCloudPlatform/kubernetes/issues/4157): Add "Update Event" to Kubernetes API
- * PR [#4206](https://github.com/GoogleCloudPlatform/kubernetes/issues/4206): Modify Event struct to allow compressing multiple recurring events in to a single event
- * PR [#4306](https://github.com/GoogleCloudPlatform/kubernetes/issues/4306): Compress recurring events in to a single event to optimize etcd storage
+ * Issue [#4073](http://issue.k8s.io/4073): Compress duplicate events
+ * PR [#4157](http://issue.k8s.io/4157): Add "Update Event" to Kubernetes API
+ * PR [#4206](http://issue.k8s.io/4206): Modify Event struct to allow compressing multiple recurring events in to a single event
+ * PR [#4306](http://issue.k8s.io/4306): Compress recurring events in to a single event to optimize etcd storage
* PR [#4444](https://github.com/GoogleCloudPlatform/kubernetes/pull/4444): Switch events history to use LRU cache instead of map


2 changes: 1 addition & 1 deletion docs/design/identifiers.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

# Identifiers and Names in Kubernetes

- A summarization of the goals and recommendations for identifiers in Kubernetes. Described in [GitHub issue #199](https://github.com/GoogleCloudPlatform/kubernetes/issues/199).
+ A summarization of the goals and recommendations for identifiers in Kubernetes. Described in [GitHub issue #199](http://issue.k8s.io/199).


## Definitions
2 changes: 1 addition & 1 deletion docs/design/principles.md
@@ -70,7 +70,7 @@ TODO: pluggability

## Bootstrapping

- * [Self-hosting](https://github.com/GoogleCloudPlatform/kubernetes/issues/246) of all components is a goal.
+ * [Self-hosting](http://issue.k8s.io/246) of all components is a goal.
* Minimize the number of dependencies, particularly those required for steady-state operation.
* Stratify the dependencies that remain via principled layering.
* Break any circular dependencies by converting hard dependencies to soft dependencies.
4 changes: 2 additions & 2 deletions docs/design/resources.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
**Note: this is a design doc, which describes features that have not been completely implemented.
User documentation of the current state is [here](../user-guide/compute-resources.md). The tracking issue for
implementation of this model is
- [#168](https://github.com/GoogleCloudPlatform/kubernetes/issues/168). Currently, only memory and
+ [#168](http://issue.k8s.io/168). Currently, only memory and
cpu limits on containers (not pods) are supported. "memory" is in bytes and "cpu" is in
milli-cores.**

@@ -134,7 +134,7 @@ The following resource types are predefined ("reserved") by Kubernetes in the `kubernetes.io` namespace
* Units: Kubernetes Compute Unit seconds/second (i.e., CPU cores normalized to a canonical "Kubernetes CPU")
* Internal representation: milli-KCUs
* Compressible? yes
- * Qualities: this is a placeholder for the kind of thing that may be supported in the future — see [#147](https://github.com/GoogleCloudPlatform/kubernetes/issues/147)
+ * Qualities: this is a placeholder for the kind of thing that may be supported in the future — see [#147](http://issue.k8s.io/147)
* [future] `schedulingLatency`: as per lmctfy
* [future] `cpuConversionFactor`: property of a node: the speed of a CPU core on the node's processor divided by the speed of the canonical Kubernetes CPU (a floating point value; default = 1.0).

6 changes: 3 additions & 3 deletions docs/design/secrets.md
@@ -119,7 +119,7 @@ which consumes this type of secret, the Kubelet may take a number of actions:
file system
2. Configure that node's `kube-proxy` to decorate HTTP requests from that pod to the
`kubernetes-master` service with the auth token, e. g. by adding a header to the request
- (see the [LOAS Daemon](https://github.com/GoogleCloudPlatform/kubernetes/issues/2209) proposal)
+ (see the [LOAS Daemon](http://issue.k8s.io/2209) proposal)

#### Example: service account consumes docker registry credentials

@@ -263,11 +263,11 @@ the right storage size for their installation and configuring their Kubelets correctly

Configuring each Kubelet is not the ideal story for operator experience; it is more intuitive that
the cluster-wide storage size be readable from a central configuration store like the one proposed
- in [#1553](https://github.com/GoogleCloudPlatform/kubernetes/issues/1553). When such a store
+ in [#1553](http://issue.k8s.io/1553). When such a store
exists, the Kubelet could be modified to read this configuration item from the store.

When the Kubelet is modified to advertise node resources (as proposed in
- [#4441](https://github.com/GoogleCloudPlatform/kubernetes/issues/4441)), the capacity calculation
+ [#4441](http://issue.k8s.io/4441)), the capacity calculation
for available memory should factor in the potential size of the node-level tmpfs in order to avoid
memory overcommit on the node.
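The capacity rule described above amounts to a subtraction; a hypothetical illustration (not part of this commit; the numbers are invented):

```sh
# Hypothetical: advertise node memory minus the potential node-level
# tmpfs size reserved for secret storage.
TOTAL_MEM_MI=16384       # physical memory on the node
SECRET_TMPFS_MI=512      # configured cap for the secrets tmpfs
echo "$(( TOTAL_MEM_MI - SECRET_TMPFS_MI ))Mi"  # memory capacity to advertise
```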

2 changes: 1 addition & 1 deletion docs/design/security_context.md
@@ -42,7 +42,7 @@ A security context is a set of constraints that are applied to a container in order

## Background

- The problem of securing containers in Kubernetes has come up [before](https://github.com/GoogleCloudPlatform/kubernetes/issues/398) and the potential problems with container security are [well known](http://opensource.com/business/14/7/docker-security-selinux). Although it is not possible to completely isolate Docker containers from their hosts, new features like [user namespaces](https://github.com/docker/libcontainer/pull/304) make it possible to greatly reduce the attack surface.
+ The problem of securing containers in Kubernetes has come up [before](http://issue.k8s.io/398) and the potential problems with container security are [well known](http://opensource.com/business/14/7/docker-security-selinux). Although it is not possible to completely isolate Docker containers from their hosts, new features like [user namespaces](https://github.com/docker/libcontainer/pull/304) make it possible to greatly reduce the attack surface.

## Motivation

2 changes: 1 addition & 1 deletion docs/devel/api-conventions.md
@@ -195,7 +195,7 @@ References in the status of the referee to the referrer may be permitted, when t

#### Lists of named subobjects preferred over maps

- Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields.
+ Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields.

For example:

2 changes: 1 addition & 1 deletion docs/getting-started-guides/locally.md
@@ -161,7 +161,7 @@ One or more of the Kubernetes daemons might've crashed. Tail the logs of each in

#### The pods fail to connect to the services by host names

- The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/issues/6667). You can start one manually. Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it)
+ The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](http://issue.k8s.io/6667). You can start one manually. Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it)
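A hypothetical sketch of starting DNS by hand (not part of this commit; the file names are placeholders, and the real addon manifests are templates that need their variables filled in first; see the linked document):

```sh
# Hypothetical: create the DNS addon objects against the local cluster
# once the manifests have been prepared.
kubectl create -f dns-rc.yaml    # the DNS server replication controller
kubectl create -f dns-svc.yaml   # the DNS service with its cluster IP
```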


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
2 changes: 1 addition & 1 deletion docs/getting-started-guides/rkt/README.md
@@ -34,7 +34,7 @@ Documentation for other releases can be found at
# Run Kubernetes with rkt

This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime.
- We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernetes/issues/8262) to do to make the experience with rkt wonderful, please stay tuned!
+ We still have [a bunch of work](http://issue.k8s.io/8262) to do to make the experience with rkt wonderful, please stay tuned!

### **Prerequisite**

