Commit
Add authentik, tidy up recipe-footer
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
funkypenguin committed Oct 31, 2023
1 parent 0378e35 commit f22dd8e
Showing 142 changed files with 805 additions and 708 deletions.
21 changes: 21 additions & 0 deletions _includes/kubernetes-flux-dnsendpoint.md
@@ -0,0 +1,21 @@
### {{ page.meta.slug }} DNSEndpoint

If, like me, you prefer to create your DNS records the "GitOps way" using [ExternalDNS](/kubernetes/external-dns/), create something like the following example to add a DNS entry for your {{ page.meta.slug }} ingress:

```yaml title="/{{ page.meta.helmrelease_namespace }}/dnsendpoint-{{ page.meta.helmrelease_name }}.example.com.yaml"
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "{{ page.meta.helmrelease_name }}.example.com"
  namespace: {{ page.meta.helmrelease_namespace }}
spec:
  endpoints:
    - dnsName: "{{ page.meta.helmrelease_name }}.example.com"
      recordTTL: 180
      recordType: CNAME
      targets:
        - "traefik-ingress.example.com"
```
!!! tip
    Rather than creating individual A records for each host, I prefer to create one A record (*`traefik-ingress.example.com` in the example above*), and then create individual CNAME records pointing to that A record.
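
For reference, here's a hedged sketch of what that single A record could look like as a DNSEndpoint too. The hostname matches the example above, but the file path, namespace and target IP are placeholders to swap for your own ingress details:

```yaml title="/traefik/dnsendpoint-traefik-ingress.example.com.yaml"
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "traefik-ingress.example.com"
  namespace: traefik # placeholder - whichever namespace holds your ingress controller
spec:
  endpoints:
    - dnsName: "traefik-ingress.example.com"
      recordTTL: 180
      recordType: A
      targets:
        - "192.0.2.10" # placeholder - the public IP of your ingress / load balancer
```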
6 changes: 3 additions & 3 deletions _includes/kubernetes-flux-helmrelease.md
@@ -1,4 +1,4 @@
### HelmRelease
### {{ page.meta.slug }} HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy {{ page.meta.helmrelease_name }} into the cluster. We start with a basic HelmRelease YAML, like this example:

@@ -23,10 +23,10 @@ spec:
values: # paste contents of upstream values.yaml below, indented 4 spaces (2)
```
1. I like to set this to the semver minor version of the upstream chart, so that I'll inherit bug fixes but not any new features (*since I'll need to manually update my values to accommodate new releases anyway*)
1. I like to set this to the semver minor version of the current {{ page.meta.slug }} helm chart, so that I'll inherit bug fixes but not any new features (*since I'll need to manually update my values to accommodate new releases anyway*)
2. Paste the full contents of the upstream [values.yaml]({{ page.meta.values_yaml_url }}) here, indented 4 spaces under the `values:` key

If we deploy this helmrelease as-is, we'll inherit every default from the upstream chart. That's probably hardly ever what we want to do, so my preference is to take the entire contents of the helm chart's [values.yaml]({{ page.meta.values_yaml_url }}), and to paste these (*indented*), under the `values` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, to make future chart upgrades simpler.
If we deploy this helmrelease as-is, we'll inherit every default from the upstream {{ page.meta.slug }} helm chart. That's rarely what we want, so my preference is to take the entire contents of the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}), and to paste these (*indented*), under the `values` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, to make future chart upgrades simpler.
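
If it helps, here's a rough sketch of that copy/paste step as a one-liner; the URL variable is a placeholder for the chart's actual values.yaml (linked above):

```bash
# Placeholder - substitute the real URL of the chart's values.yaml
VALUES_YAML_URL="https://example.com/path/to/values.yaml"

# Fetch the upstream values.yaml and indent every line 4 spaces,
# ready to paste under the HelmRelease's `values:` key
curl -sL "$VALUES_YAML_URL" | sed 's/^/    /'
```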

--8<-- "kubernetes-why-not-full-values-in-configmap.md"

4 changes: 2 additions & 2 deletions _includes/kubernetes-flux-helmrepository.md
@@ -1,6 +1,6 @@
### HelmRepository
### {{ page.meta.slug }} HelmRepository

We're going to install a helm chart from the [{{ page.meta.helm_chart_repo_name }}]({{ page.meta.helm_chart_repo_url }}) repository, so I create the following in my flux repo (*assuming it doesn't already exist*):
We're going to install the {{ page.meta.slug }} helm chart from the [{{ page.meta.helm_chart_repo_name }}]({{ page.meta.helm_chart_repo_url }}) repository, so I create the following in my flux repo (*assuming it doesn't already exist*):

```yaml title="/bootstrap/helmrepositories/helmrepository-{{ page.meta.helm_chart_repo_name }}.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta1
2 changes: 1 addition & 1 deletion _includes/kubernetes-flux-kustomization.md
@@ -1,4 +1,4 @@
### Kustomization
### {{ page.meta.slug }} Kustomization

Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/{{ page.meta.helmrelease_namespace }}/`. I create this example Kustomization in my flux repo:

2 changes: 1 addition & 1 deletion _includes/kubernetes-flux-namespace.md
@@ -1,6 +1,6 @@
## Preparation

### Namespace
### {{ page.meta.slug }} Namespace

We need a namespace to deploy our HelmRelease and associated YAMLs into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-{{ page.meta.helmrelease_namespace }}.yaml`:
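
The full example is elided here, but a namespace manifest is about as minimal as Kubernetes YAML gets; a sketch of what it presumably looks like:

```yaml title="/bootstrap/namespaces/namespace-{{ page.meta.helmrelease_namespace }}.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: {{ page.meta.helmrelease_namespace }}
```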

11 changes: 11 additions & 0 deletions _snippets/recipe-footer.md → _includes/recipe-footer.md
@@ -2,6 +2,17 @@

///Footnotes Go Here///

{% if page.meta.upstream %}
### {{ page.meta.slug }} resources

* [{{ page.meta.slug }} (official site)]({{ page.meta.upstream }})
{% endif %}
{% if page.meta.links %}
{% for link in page.meta.links %}
* [{{ page.meta.slug }} {{ link.name }}]({{ link.uri }})
{% endfor %}
{% endif %}
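
For context, this template expects the recipe page's frontmatter to supply `slug`, `upstream` and (optionally) `links`; a hypothetical example (values are illustrative only):

```yaml
---
title: Authentik
slug: Authentik
upstream: https://goauthentik.io # illustrative value
links:
  - name: Documentation
    uri: https://goauthentik.io/docs/ # illustrative value
---
```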

### Tip your waiter (sponsor) 👏

Did you receive excellent service? Want to compliment the chef? (_...and support development of current and future recipes!_) Sponsor me on [Github][github_sponsor] / [Ko-Fi][kofi] / [Patreon][patreon], or see the [contribute](/community/contribute/) page for more (_free or paid_) ways to say thank you! 👏
@@ -0,0 +1,67 @@
---
date: 2023-06-09
categories:
- note
tags:
- elfhosted
title: Baby steps towards ElfHosted
description: Every journey has a beginning. This is the beginning of the ElfHosted journey
draft: true
---

# Securing the Hetzner environment

Before building out our Kubernetes cluster, I wanted to secure the environment a little. On Hetzner, each server is assigned a public IP from a huge pool, and is directly accessible over the internet. This provides quick access for administration, but before building out our controlplane, I wanted to lock down access.

## Requirements

* [x] Kubernetes worker/controlplane nodes are privately addressed
* [x] Control plane (API) will be accessible only internally
* [x] Nodes can be administered directly on their private address range

## The bastion VM

I created a small "Ampere" cloud VM using Hetzner's cloud console. These cloud VMs are provisioned separately from dedicated servers, but it's possible to interconnect them with dedicated servers using vSwitches/subnets (basically VLANs).

I needed a "bastion" host - a small node (probably a VM), which I could secure and then use for further ingress into my infrastructure.

## Connecting the bastion VM to the dedicated servers

I connected the bastion VM to the dedicated servers' private subnet using Tailscale (with the bastion acting as a subnet router) and a Hetzner vSwitch, loosely following these guides:

* <https://tailscale.com/kb/1150/cloud-hetzner/>
* <https://tailscale.com/kb/1077/secure-server-ubuntu-18-04/>
* <https://docs.hetzner.com/cloud/networks/connect-dedi-vswitch>

On the bastion, advertise the nodes' private subnet as a Tailscale route:

```bash
tailscale up --advertise-routes 10.0.42.0/24
```

Enable IP forwarding on the bastion via sysctl (sketch below), then add the NAT rules from the Tailscale Ubuntu guide:
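
A minimal sketch of that sysctl step, assuming the forwarding settings from Tailscale's subnet router docs:

```bash
# Enable IPv4/IPv6 forwarding so the bastion can route the advertised subnet
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```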

```bash
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Forward traffic through eth0 - Change to match your out-interface
-A POSTROUTING -s <your tailscale ip> -j MASQUERADE

# don't delete the 'COMMIT' line or these nat table rules won't
# be processed
COMMIT
```


hetzner_cloud_console_subnet_routes.png

hetzner_vswitch_setup.png

## Secure hosts

* [ ] Create last-resort root password
* [ ] Set up a non-root sudo account (ansiblize this?)
151 changes: 151 additions & 0 deletions docs/blog/posts/notes/elfhosted/setup-k3s.md
@@ -0,0 +1,151 @@
---
date: 2023-06-11
categories:
- note
tags:
- elfhosted
title: Kubernetes on Hetzner dedicated server
description: How to setup and secure a bare-metal Kubernetes infrastructure on Hetzner dedicated servers
draft: true
---

# Kubernetes (K3s) on Hetzner

In this post, we continue our adventure setting up an app hosting platform running on Kubernetes.

--8<-- "blog-series-elfhosted.md"

My two physical servers were "delivered" (to my inbox), along with instructions for SSHing into the "rescue image" environment, which looks like this:



<!-- more -->

--8<-- "what-is-elfhosted.md"


## Secure nodes

Per the K3s docs, there are some local firewall requirements for K3s server/worker nodes:

https://docs.k3s.io/installation/requirements#inbound-rules-for-k3s-server-nodes
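
Here's a hedged sketch of those rules using ufw; the ports come from the K3s requirements page above, while the subnet and the Tailscale interface for admin SSH are assumptions from my setup:

```bash
# Assumptions: 10.0.42.0/24 is the private node subnet, and admin SSH arrives over Tailscale
ufw allow in on tailscale0 to any port 22 proto tcp            # keep SSH reachable
ufw allow from 10.0.42.0/24 to any port 6443 proto tcp         # K3s supervisor / Kubernetes API
ufw allow from 10.0.42.0/24 to any port 10250 proto tcp        # kubelet metrics
ufw allow from 10.0.42.0/24 to any port 2379:2380 proto tcp    # embedded etcd (HA servers)
ufw allow from 10.0.42.0/24 to any port 51820:51821 proto udp  # flannel wireguard-native
ufw default deny incoming
ufw enable
```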



It's aliiive!

```
root@fairy01 ~ # kubectl get nodes
NAME STATUS ROLES AGE VERSION
elf01 Ready <none> 15s v1.26.5+k3s1
fairy01 Ready control-plane,etcd,master 96s v1.26.5+k3s1
root@fairy01 ~ #
```

Now install flux, according to the documented bootstrap process...


https://metallb.org/configuration/k3s/


Prepare for Longhorn's [NFS shenanigans](https://longhorn.io/docs/1.4.2/deploy/install/#installing-nfsv4-client):

```bash
apt-get -y install nfs-common tuned
```

Performance mode!

`tuned-adm profile throughput-performance`

Taint the master(s):

```bash
kubectl taint node fairy01 node-role.kubernetes.io/control-plane=true:NoSchedule
```


To increase max pods per node, see:

* <https://stackoverflow.com/questions/65894616/how-do-you-increase-maximum-pods-per-node-in-k3s>
* <https://gist.github.com/rosskirkpat/57aa392a4b44cca3d48dfe58b5716954>

Install the first server:

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh -
```
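
Once the node is up, a quick way to confirm the max-pods bump took effect (plain kubectl; the node name is from the output above):

```bash
kubectl get node fairy01 -o jsonpath='{.status.capacity.pods}{"\n"}'
```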

Create the secondary masters:

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh -
```


Each server also needs the custom kubelet configuration file referenced above (`/etc/rancher/k3s/kubelet-server.config`):

```bash
mkdir -p /etc/rancher/k3s/
cat << EOF >> /etc/rancher/k3s/kubelet-server.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
EOF
```




Then, on the worker:


Ensure that `/etc/rancher/k3s` exists, to hold our kubelet custom configuration file:

```bash
mkdir -p /etc/rancher/k3s/
cat << EOF >> /etc/rancher/k3s/kubelet-server.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
EOF
```

Get the [token](https://docs.k3s.io/cli/token) from `/var/lib/rancher/k3s/server/token` on the server, and prepare the environment on the worker like this:

```bash
export K3S_TOKEN=<token from master>
export K3S_URL=https://<ip of master>:6443
```

Now join the worker:

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --flannel-iface=eno1.4000 --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --prefer-bundled-bin" sh -
```


With the cluster up, bootstrap flux against the GitHub repo, per the bootstrap process mentioned earlier:

```bash
flux bootstrap github \
--owner=geek-cookbook \
--repository=geek-cookbook/elfhosted-flux \
--path bootstrap
```
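
A couple of standard flux CLI commands to confirm the bootstrap worked:

```bash
flux check                  # verify the flux controllers are installed and healthy
flux get kustomizations -A  # list Kustomizations and their reconciliation status
```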

Import the pre-existing sealed-secrets key, mark it as active, and restart the controller so it picks it up:

```
root@fairy01:~# kubectl -n sealed-secrets create secret tls elfhosted-expires-june-2033 \
--cert=mytls.crt --key=mytls.key
secret/elfhosted-expires-june-2033 created
root@fairy01:~# kubectl kubectl -n sealed-secrets label secret^C
root@fairy01:~# kubectl -n sealed-secrets label secret elfhosted-expires-june-2033 sealedsecrets.bitnami.com/sealed-secrets-key=active
secret/elfhosted-expires-june-2033 labeled
root@fairy01:~# kubectl rollout restart -n sealed-secrets deployment sealed-secrets
deployment.apps/sealed-secrets restarted
```
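
To sanity-check, list the active sealed-secrets key(s) by the label applied above:

```bash
kubectl -n sealed-secrets get secrets -l sealedsecrets.bitnami.com/sealed-secrets-key=active
```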

Increase inotify watchers (*for Jellyfin*):

```bash
echo fs.inotify.max_user_watches=2097152 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
echo 512 > /proc/sys/fs/inotify/max_user_instances
```

On the "dwarf" (storage) nodes, apply a taint so that only workloads tolerating it will schedule there:

```bash
kubectl taint node dwarf01.elfhosted.com node-role.elfhosted.com/node=storage:NoSchedule
```
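
And to confirm the taint landed:

```bash
kubectl describe node dwarf01.elfhosted.com | grep -i taints
```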

2 changes: 1 addition & 1 deletion docs/docker-swarm/authelia.md
@@ -274,4 +274,4 @@ What have we achieved? By adding a simple label to any service, we can secure an

[^1]: The initial inclusion of Authelia was due to the efforts of @bencey in Discord (Thanks Ben!)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/design.md
@@ -94,4 +94,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast

[^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/docker-swarm-mode.md
@@ -180,4 +180,4 @@ What have we achieved?

* [X] [Docker swarm cluster](/docker-swarm/design/)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/index.md
@@ -23,7 +23,7 @@ You too, action-geek, can save the day, by...

Ready to enter the matrix? Jump in on one of the links above, or start reading the [design](/docker-swarm/design/)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: This was an [iconic movie](https://www.imdb.com/title/tt0111257/). It even won 2 Oscars! (*but not for the acting*)
[^2]: There are significant advantages to using Docker Swarm, even on just a single node.
2 changes: 1 addition & 1 deletion docs/docker-swarm/keepalived.md
@@ -88,4 +88,4 @@ What have we achieved?
[^1]: Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targetted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
[^2]: More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/nodes.md
@@ -77,4 +77,4 @@ After completing the above, you should have:
* At least 20GB disk space (_but it'll be tight_)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/registry.md
@@ -110,4 +110,4 @@ Then restart docker itself, by running `systemctl restart docker`

[^1]: Note the extra comma required after "false" above

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/shared-storage-ceph.md
@@ -227,4 +227,4 @@ Here's a screencast of the playbook in action. I sped up the boring parts, it ac
[patreon]: <https://www.patreon.com/bePatron?u=6982506>
[github_sponsor]: <https://github.com/sponsors/funkypenguin>

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/shared-storage-gluster.md
@@ -172,4 +172,4 @@ After completing the above, you should have:
1. Migration of shared storage from GlusterFS to Ceph
2. Correct the fact that volumes don't automount on boot

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/traefik-forward-auth/dex-static.md
@@ -203,4 +203,4 @@ What have we achieved? By adding an additional label to any service, we can secu

[^1]: You can remove the `whoami` container once you know Traefik Forward Auth is working properly

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/traefik-forward-auth/google.md
@@ -133,4 +133,4 @@ What have we achieved? By adding an additional three simple labels to any servic

[^1]: Be sure to populate `WHITELIST` in `traefik-forward-auth.env`, else you'll happily be granting **any** authenticated Google account access to your services!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/traefik-forward-auth/index.md
@@ -52,6 +52,6 @@ Traefik Forward Auth needs to authenticate an incoming user against a provider.
* [Authenticate Traefik Forward Auth against a whitelist of Google accounts][tfa-google]
* [Authenticate Traefik Forward Auth against a self-hosted Keycloak instance][tfa-keycloak] with an optional [OpenLDAP backend][openldap]

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Authhost mode is specifically handy for Google authentication, since Google doesn't permit wildcard redirect_uris, like [Keycloak][keycloak] does.
2 changes: 1 addition & 1 deletion docs/docker-swarm/traefik-forward-auth/keycloak.md
@@ -100,4 +100,4 @@ What have we achieved? By adding an additional three simple labels to any servic
[KeyCloak][keycloak] is the "big daddy" of self-hosted authentication platforms - it has a beautiful GUI, and a very advanced and mature featureset. Like Authelia, KeyCloak can [use an LDAP server](/recipes/keycloak/authenticate-against-openldap/) as a backend, but _unlike_ Authelia, KeyCloak allows for 2-way sync between that LDAP backend, meaning KeyCloak can be used to _create_ and _update_ the LDAP entries (*Authelia's is just a one-way LDAP lookup - you'll need another tool to actually administer your LDAP database*).
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
2 changes: 1 addition & 1 deletion docs/docker-swarm/traefik.md
@@ -250,4 +250,4 @@ You should now be able to access[^1] your traefik instance on `https://traefik.<

[^1]: Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/docker-swarm/traefik-forward-auth/)!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
Binary file added docs/images/authentik.png
Binary file added docs/images/joplin-server.png
@@ -68,4 +68,4 @@ What have we achieved? We've got snapshot-controller running, and ready to manag
* [ ] Configure [Velero](/kubernetes/backup/velero/) with a VolumeSnapshotLocation, so that volume snapshots can be made as part of a BackupSchedule!
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
