
[FEATURE] Add cert-manager Support on Kubernetes Integration #585

Open
polarstack opened this issue Jul 31, 2023 · 3 comments
Labels: enhancement (New feature or request), kubernetes (Kubernetes integration)


polarstack commented Jul 31, 2023

What's needed and why?
Hi all,

  • As a BunkerWeb user, having cert-manager support in the Kubernetes integration would make my experience even more cloud native.
  • As a homelab user on a Kubernetes cluster, I would like to configure cert-manager with the DNS-01 challenge instead of HTTP-01, so that I can close all unnecessary ports and apply geoblocking (Let's Encrypt does not publish IP sets that could be whitelisted on a firewall).
  • As a potential business customer, I could store my certificates in HashiCorp Vault or Venafi and apply them to my Ingress, since both are supported by cert-manager.

I see the certbot-dns-* examples, which could cover at least the second use case (e.g. ../examples/certbot-dns-cloudflare/docker-compose.yml), but as far as I understand it requires you to mount the "certs" volume into bunkerweb, the scheduler and a custom certbot container with the corresponding config. I'm not sure how I would implement that on Kubernetes. Using Kubernetes Secrets and Ingress annotations would make it more native to that integration.
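
Roughly how I read that compose example, sketched from memory (service names, images, paths and the Cloudflare plugin are my assumptions, not a copy of the repository file):

# Rough sketch only: a "certs" volume shared between bunkerweb, the scheduler
# and a certbot container doing DNS-01 via the Cloudflare plugin. Names,
# image tags and mount paths are assumptions, not the repo example verbatim.
services:
  bunkerweb:
    image: bunkerity/bunkerweb        # pin to your BunkerWeb version
    volumes:
      - bw-certs:/certs               # certificates produced by certbot

  bw-scheduler:
    image: bunkerity/bunkerweb-scheduler
    volumes:
      - bw-certs:/certs               # the scheduler also needs access to them

  certbot:
    image: certbot/dns-cloudflare     # official certbot image with the Cloudflare plugin
    volumes:
      - bw-certs:/etc/letsencrypt     # write the issued certs into the shared volume
      - ./cloudflare.ini:/etc/letsencrypt/cloudflare.ini:ro
    command: >
      certonly --non-interactive --agree-tos
      --dns-cloudflare
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
      -d myapp.example.com --email admin@example.com

volumes:
  bw-certs: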

Implementation ideas (optional)
The Documentation for cert-manager is here: https://cert-manager.io/docs/
But the installation and configuration of cert-manager itself can be considered out of scope.
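
Just to illustrate the DNS-01 use case from the second bullet above, a ClusterIssuer might look roughly like this (Cloudflare is only an example solver; the e-mail and the API token Secret are placeholders):

# Sketch of a DNS-01 ClusterIssuer (Cloudflare as an example solver);
# issuer name, e-mail and the token Secret are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-issuer-example
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: my-issuer-example-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token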

cert-manager stores the key and crt in a Kubernetes Secret:

# k3s kubectl describe secrets/dummy-secret-tls-0 -n dummy-namespace
Name:         dummy-secret-tls-0
Namespace:    dummy-namespace
Labels:       controller.cert-manager.io/fao=true
Annotations:  cert-manager.io/alt-names: myapp.example.com
              cert-manager.io/certificate-name: dummy-secret-tls-0
              cert-manager.io/common-name: myapp.example.com
              cert-manager.io/issuer-group: cert-manager.io
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: my-issuer-example

Type:  kubernetes.io/tls

Data
====
tls.key:  xxxx bytes
tls.crt:  yyyy bytes
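
For completeness, the Secret above comes from a Certificate resource (created either by hand or generated by cert-manager from the Ingress annotations); a rough sketch reusing the dummy names from the output above:

# Sketch of the Certificate behind the Secret above; it reuses the dummy
# names from the describe output and is not a literal copy of my cluster.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dummy-secret-tls-0
  namespace: dummy-namespace
spec:
  secretName: dummy-secret-tls-0
  commonName: myapp.example.com
  dnsNames:
    - myapp.example.com
  issuerRef:
    kind: ClusterIssuer
    name: my-issuer-example
  privateKey:
    rotationPolicy: Always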

On the Ingress the mapping happens in the annotations section:

Annotations:                   cert-manager.io/cluster-issuer: my-issuer-example
                               cert-manager.io/private-key-rotation-policy: Always

I'm not a specialist as I'm still learning, but I guess the Ingress annotation triggers cert-manager, which then stores the crt/key as a Secret. Finally, the ingress controller (e.g. Traefik) picks it up and deploys/configures the TLS termination. Maybe it's also triggered by the ingress controller itself. See for example the official Kubernetes NGINX Ingress chart: https://github.com/kubernetes/ingress-nginx/blob/afd1311f8529c21fdf6621bf683bec814e698f1d/charts/ingress-nginx/templates/admission-webhooks/cert-manager.yaml
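
Putting it together, I imagine an Ingress wired to cert-manager would look roughly like this (host, ingress class and backend names are placeholders on my side; the tls section is what points the controller at the Secret):

# Sketch of an Ingress wired to cert-manager; host, ingress class and the
# backend service are placeholders. The tls section references the Secret
# that cert-manager creates for the certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: dummy-namespace
  annotations:
    cert-manager.io/cluster-issuer: my-issuer-example
    cert-manager.io/private-key-rotation-policy: Always
spec:
  ingressClassName: traefik   # example controller, could be any
  tls:
    - hosts:
        - myapp.example.com
      secretName: dummy-secret-tls-0
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 8080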

As one can have multiple issuers, I would suggest leaving that to cert-manager and only defining the secret:

annotations:
   bunkerweb.io/myapp.example.com_CLUSTER_ISSUER_CERTIFICATE: dummy-secret-tls-0

Finally, the BunkerWeb scheduler(?) would pick up the secret and store it in /certs/, like it does for example for http, server-http, modsec etc. with the ConfigMap feature.
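
In context, the proposal could look something like this; the bunkerweb.io annotation is purely my suggestion and does not exist today:

# Purely hypothetical sketch: the bunkerweb.io annotation below is only my
# suggestion; the idea is that the scheduler resolves the named Secret and
# writes its tls.crt/tls.key under /certs/.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: dummy-namespace
  annotations:
    bunkerweb.io/myapp.example.com_CLUSTER_ISSUER_CERTIFICATE: dummy-secret-tls-0
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service   # placeholder backend
                port:
                  number: 8080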

Hopefully I was able to explain the need simply; otherwise please let me know if I should elaborate. If you think this is an edge case that doesn't fit your roadmap, don't worry about it and just close the issue :-)

@polarstack polarstack added the enhancement New feature or request label Jul 31, 2023
@fl0ppy-d1sk fl0ppy-d1sk self-assigned this Aug 1, 2023
@fl0ppy-d1sk fl0ppy-d1sk added the kubernetes Kubernetes integration label Aug 1, 2023
fl0ppy-d1sk (Member) commented

Hello @polarstack,

That's an interesting idea. IMO it's even a must have.

Looks like we should parse the tls part of the ingress resource but we need to dig deeper.

fl0ppy-d1sk (Member) commented

Hello @polarstack,

Quick update: we now support the tls section of Ingress in 1.5.5. Don't hesitate to test it yourself.

Keeping this issue open because we still need to document (and test) the cert-manager integration.

schmittse commented

Hi,

The secret loading is a start, but it's not enough for the HTTP-01 (web) challenge.

When you deploy a new Ingress using Let's Encrypt (annotation cert-manager.io/cluster-issuer: letsencrypt) and the secret does not exist yet, cert-manager spins up a new solver pod and a new Ingress listening only on port 80 for the Let's Encrypt challenge:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
  generateName: cm-acme-http-solver-
  labels:
    acme.cert-manager.io/http-domain: "123456789"
    acme.cert-manager.io/http-token: "987654321"
    acme.cert-manager.io/http01-solver: "true"
  name: cm-acme-http-solver-b2n7j
  namespace: hello-world
  ownerReferences:
  - apiVersion: acme.cert-manager.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Challenge
    name: lets-hello-world-bw-1-1580467882-4097216534
    uid: 389d03bc-9634-4641-8f6c-20c2bb07c647
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.example.com
    http:
      paths:
      - backend:
          service:
            name: cm-acme-http-solver-lglzx
            port:
              number: 8089
        path: /.well-known/acme-challenge/qRh5NAZCNVFnq0qkNQx3vGDHhPbBUN0n5Cjm8Hlcc_A
        pathType: ImplementationSpecific

So we have two Ingresses for the same domain: one with path: / and one with the .well-known path seen above.

In the NGINX Ingress Controller, both Ingresses are merged into a single server block:

[...]
server {                               
  server_name hello-world.example.com ;

  listen 80 proxy_protocol ;
  [...]

  location /.well-known/acme-challenge/qRhcNAZCNVFnq9qFNQx3vGDHhCbBUN0S5CjZ8Hlcc_A/ {
    [...]
    set $proxy_upstream_name "hello-world-cm-acme-http-solver-lglzx-8089";
    [...]
  }
  [...]
  location / {
    [...]
    set $proxy_upstream_name "hello-world-hello-world-service-8888";
    [...]
  }
}

From what I can find in your generated config files, I only get the / location:

location / {
  etag off;
  set $backend1069 "http://hello-world-service.hello-world.svc.cluster.local:8888";
  [...]
}
