Merge pull request kubernetes#2084 from jeefy/master
Community Recipes section
brendandburns committed Oct 31, 2014
2 parents e792248 + a8af6c8 commit e8b5bad
Showing 3 changed files with 168 additions and 0 deletions.
5 changes: 5 additions & 0 deletions contrib/recipes/README.md
# Kubernetes Community Recipes
Solutions to interesting problems and unique implementations that showcase the extensibility of Kubernetes.

- [Automated APIServer load balancing using Hipache and Fleet](docs/apiserver_hipache_registration.md)
- [Jenkins-triggered rolling updates on successful "builds"](docs/rollingupdates_from_jenkins.md)
106 changes: 106 additions & 0 deletions contrib/recipes/docs/apiserver_hipache_registration.md
### Background
When deploying Kubernetes using something like [Fleet](https://github.com/coreos/fleet), the API Server (and other services) may not stay on the same host, depending on your setup.

In these cases, it's ideal to have a dynamic load balancer such as [Hipache](https://github.com/hipache/hipache) that can receive updates from your services.

### Setup
Our example is based on Kelsey Hightower's "[Kubernetes Fleet Tutorial](https://github.com/kelseyhightower/kubernetes-fleet-tutorial)". (The bash variable ${DEFAULT_IPV4} is set by Kelsey's /etc/network-environment file.)

For this write-up, we are going to assume you have a dedicated [etcd](https://github.com/coreos/etcd) endpoint (10.1.10.10, private IPv4) and are running Kubernetes on systems managed by systemd and Fleet.

The Hipache instance is going to run on 172.20.1.20 (public IPv4) but will have a private IPv4 address as well (10.1.10.11).
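Hipache discovers its backends through etcd keys named `frontend:<host>`, whose value is a JSON array holding an identifier followed by one or more backend URLs. A minimal sketch of the key/value pair that the registration step below writes, with 10.1.10.50 standing in for a minion's ${DEFAULT_IPV4} (the minion address is illustrative, not from the original):

```shell
#!/bin/bash
# Hipache reads etcd keys of the form "frontend:<host>"; the value is a JSON
# array: an identifier, then one or more backend URLs.
FRONTEND_KEY="/frontend:172.20.1.20"      # the public address Hipache serves
MINION_IPV4="10.1.10.50"                  # stand-in for ${DEFAULT_IPV4}
FRONTEND_VALUE="[ \"kubernetes\", \"http://${MINION_IPV4}:8080\" ]"
echo "${FRONTEND_KEY} -> ${FRONTEND_VALUE}"
```

Each apiserver instance that comes up appends or rewrites its own backend URL under this key, which is what keeps Hipache current as Fleet moves the service around.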


First, create your kube-apiserver.service file (changing the variables as necessary):
`~/hipache/kube-apiserver.service`
```
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=-/usr/bin/rm /opt/bin/apiserver
ExecStartPre=/usr/bin/wget -P /opt/bin https://path/to/apiserver/binary
ExecStartPre=/usr/bin/chmod +x /opt/bin/apiserver
ExecStart=/opt/bin/apiserver \
-address=0.0.0.0 \
-port=8080 \
-etcd_servers=http://10.1.10.10:4001
ExecStartPost=/usr/bin/etcdctl -C 10.1.10.10:4001 set /frontend:172.20.1.20 '[ "kubernetes", "http://${DEFAULT_IPV4}:8080" ]'
Restart=always
RestartSec=10
[X-Fleet]
MachineMetadata=role=kubernetes
```
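After Fleet starts the unit, you can confirm the frontend key that `ExecStartPost` wrote. The sketch below only composes the verification command (it assumes `etcdctl` is available on a machine that can reach the etcd endpoint; we echo the command rather than execute it):

```shell
#!/bin/bash
# Compose the etcdctl command that reads back the Hipache frontend key.
ETCD_ENDPOINT="10.1.10.10:4001"
FRONTEND_KEY="/frontend:172.20.1.20"
CHECK_CMD="etcdctl -C ${ETCD_ENDPOINT} get ${FRONTEND_KEY}"
echo "${CHECK_CMD}"
```

If the key is present and holds the apiserver's URL, the registration half of the setup is working.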

Next, we need a Hipache instance and a config file. In our case, we just rolled our own Docker container for it.

`~/workspace/hipache/Dockerfile`
```
FROM ubuntu:14.04
RUN apt-get update && \
apt-get -y install nodejs npm && \
npm install node-etcd hipache -g
RUN mkdir /hipache
ADD . /hipache
WORKDIR /hipache
ENV NODE_ENV production
EXPOSE 80
CMD hipache -c /hipache/config.json
```
`~/workspace/hipache/config.json`
```
{
"server": {
"accessLog": "/tmp/access.log",
"port": 80,
"workers": 10,
"maxSockets": 100,
"deadBackendTTL": 30,
"tcpTimeout": 30,
"retryOnError": 3,
"deadBackendOn500": true,
"httpKeepAlive": false
},
"driver": ["etcd://10.1.10.10:4001"]
}
```

Next, build the Docker container and set up a systemd service for it.
`docker build -t kube-hipache .`

`/etc/systemd/system/kube-hipache.service`
```
[Unit]
Description=Hipache Router
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill hipache
ExecStartPre=-/usr/bin/docker rm hipache
ExecStart=/usr/bin/docker run -p 80:80 --name hipache kube-hipache
[Install]
WantedBy=multi-user.target
```
Let's put some pieces together! Run the following commands:
- `systemctl enable /etc/systemd/system/kube-hipache.service`
- `systemctl start kube-hipache.service`
- `journalctl -b -u kube-hipache.service` (Make sure it's running)
- `fleetctl start ~/hipache/kube-apiserver.service`

That's it! Fleet will schedule the apiserver on one of your minions, and once it's started it will register itself in etcd. Hipache will auto-update when this happens, and you should never have to worry about which node the apiserver is sitting on.
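To smoke-test the whole path, hit the apiserver through Hipache. Hipache routes on the HTTP Host header, which in this setup is the public address itself; the sketch below only composes the curl command (the `/api/v1beta1/pods` path is one plausible apiserver endpoint from this era, not taken from the original):

```shell
#!/bin/bash
# Build a curl command that exercises apiserver-via-Hipache. Hipache matches
# the Host header against its "frontend:<host>" key, so we set it explicitly.
HIPACHE_HOST="172.20.1.20"
API_PATH="/api/v1beta1/pods"   # illustrative endpoint, not from the original
SMOKE_CMD="curl -H \"Host: ${HIPACHE_HOST}\" http://${HIPACHE_HOST}${API_PATH}"
echo "${SMOKE_CMD}"
```

A JSON response here means Hipache found a live backend for the frontend key and proxied the request through.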


### Questions
twitter @jeefy

irc.freenode.net #kubernetes jeefy
57 changes: 57 additions & 0 deletions contrib/recipes/docs/rollingupdates_from_jenkins.md
### How To
For our example, Jenkins is set up with a single bash build step:

`Jenkins "Bash" build step`
```
#!/bin/bash
cd $WORKSPACE
source bin/jenkins.sh
source bin/kube-rolling.sh
```

Our project's build script (`bin/jenkins.sh`) runs first, followed by our new kube-rolling script. Jenkins already has `$BUILD_NUMBER` set, but `kube-rolling.sh` references a few other variables that are set in `jenkins.sh`:

```
DOCKER_IMAGE="path_webteam/public"
REGISTRY_LOCATION="dockerreg.web.local/"
```
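With `$BUILD_NUMBER` supplied by Jenkins, these two variables combine into the full image reference that `kube-rolling.sh` later hands to `kubecfg`. A quick illustration (the build number is made up):

```shell
#!/bin/bash
# How the registry, image name, and Jenkins build number compose into a tag.
DOCKER_IMAGE="path_webteam/public"
REGISTRY_LOCATION="dockerreg.web.local/"   # note the trailing slash
BUILD_NUMBER=42                            # Jenkins sets this; 42 is made up
IMAGE_REF="${REGISTRY_LOCATION}${DOCKER_IMAGE}:${BUILD_NUMBER}"
echo "${IMAGE_REF}"   # dockerreg.web.local/path_webteam/public:42
```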

Jenkins builds our container, tags it with the build number, and runs a couple of rudimentary tests on it. On success, it pushes the image to our private Docker registry and then executes our rolling update script.
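The build-and-push sequence just described might look like the following (a hedged sketch: the commands are echoed rather than executed so no Docker daemon is needed, and the build number is again invented):

```shell
#!/bin/bash
# Sketch of the Jenkins build step: build the image tagged with the build
# number, then push it to the private registry.
DOCKER_IMAGE="path_webteam/public"
REGISTRY_LOCATION="dockerreg.web.local/"
BUILD_NUMBER=42   # illustrative; Jenkins provides the real value
IMAGE_REF="${REGISTRY_LOCATION}${DOCKER_IMAGE}:${BUILD_NUMBER}"
BUILD_CMD="docker build -t ${IMAGE_REF} ."
PUSH_CMD="docker push ${IMAGE_REF}"
echo "${BUILD_CMD}"
echo "${PUSH_CMD}"
```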

`kube-rolling.sh`
```
#!/bin/bash
# KUBERNETES_MASTER: Your Kubernetes API Server endpoint
# BINARY_LOCATION: Location of pre-compiled binaries (we build our own; others are available)
# CONTROLLER_NAME: Name of the replicationController you're looking to update
# RESET_INTERVAL: Interval between pod updates
export KUBERNETES_MASTER="http://10.1.10.1:8080"
BINARY_LOCATION="https://build.web.local/kubernetes/"
CONTROLLER_NAME="public-frontend-controller"
RESET_INTERVAL="10s"
echo "*** Time to push to Kubernetes!";
# Delete, then grab a kubecfg binary from a static location
rm -f kubecfg
wget $BINARY_LOCATION/kubecfg
echo "*** Downloaded binary from $BINARY_LOCATION/kubecfg"
chmod +x kubecfg
# Update the controller with your new image!
echo "*** ./kubecfg -image \"$REGISTRY_LOCATION$DOCKER_IMAGE:$BUILD_NUMBER\" -u $RESET_INTERVAL rollingupdate $CONTROLLER_NAME"
./kubecfg -image "$REGISTRY_LOCATION$DOCKER_IMAGE:$BUILD_NUMBER" -u $RESET_INTERVAL rollingupdate $CONTROLLER_NAME
```

Though basic, this implementation allows our Jenkins instance to push container updates to our Kubernetes cluster without much trouble.

### Notes
When using a private Docker registry as we are, both the Jenkins slaves and the Kubernetes minions require the [.dockercfg](https://coreos.com/docs/launching-containers/building/customizing-docker/#using-a-dockercfg-file-for-authentication) file in order to function properly.

### Questions
twitter @jeefy

irc.freenode.net #kubernetes jeefy
