Terraform modules to deploy DoiT's EKS Lens feature to AWS.
- Terraform >= 1.5.0
- terraform-provider-aws v5.3+
- terraform-provider-kubernetes v2.2+
- region-base - creates the base resources required for running EKS Lens in an account and region.
- cluster - creates the resources required for EKS Lens in a given Kubernetes cluster and optionally deploys it to that cluster.
See example provider configurations in providers_example.tf.
If not already in use, the AWS provider will need to be configured by adding a provider block.
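For example, a minimal AWS provider block might look like the following; the region shown is an assumption, so use the account and region where EKS Lens should run:

provider "aws" {
  region = "us-east-1"
}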
Similarly, the Kubernetes provider will need to be configured by adding a provider block if not already in use.
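One common pattern for pointing the Kubernetes provider at an EKS cluster uses the cluster's endpoint together with an exec-based token. This is a sketch, not part of this repository, and the cluster name is a placeholder:

data "aws_eks_cluster" "this" {
  name = "<CLUSTER_NAME>"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

  # Fetch a short-lived authentication token via the AWS CLI
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", "<CLUSTER_NAME>"]
  }
}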
Download your account's Terraform configuration from the EKS Lens console and place it alongside your provider configuration.
Then run the following commands:
terraform init
terraform plan
terraform apply
The configuration file should contain the following module definitions:
The region-base module needs to be created once per account and region.
module "<REGION_NAME>-base" {
source = "git::https://github.com/doitintl/terraform-eks-lens.git//region-base"
}
The region and account ID are inferred from your AWS provider configuration. Make sure that your AWS provider is configured to use the correct account and region.
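To cover multiple regions, the module can be instantiated once per region using aliased AWS providers. A sketch follows; the alias and region names are assumptions for illustration:

provider "aws" {
  alias  = "us_west_2"
  region = "us-west-2"
}

module "us-west-2-base" {
  source = "git::https://github.com/doitintl/terraform-eks-lens.git//region-base"

  # Route this instance of the module to the aliased provider
  providers = {
    aws = aws.us_west_2
  }
}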
The cluster module needs to be created once per cluster.
module "<REGION>-<CLUSTER_NAME>" {
source = "git::https://github.com/doitintl/terraform-eks-lens.git//cluster"
cluster = {
cluster_name = "<CLUSTER_NAME>"
deployment_id = "<DEPLOYMENT_ID>"
kube_state_image = "registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2" # make sure to use the latest available image
otel_image = "otel/opentelemetry-collector-contrib:0.83.0" # make sure to use the latest available image
}
# If running in EKS:
cluster_oidc_issuer_url = "<CLUSTER_OIDC_ISSUER_URL>"
# Alternatively, if managing your own cluster on EC2, set `cluster_oidc_issuer_url` to an empty string and uncomment the following:
#ec2_cluster = true
# By default, this module will also deploy the k8s manifests. Set to `false` if planning to deploy with another tool
#deploy_manifests = false
# when configuring multiple providers for different clusters, you can configure the module to use to correct provider alias:
providers = {
kubernetes = kubernetes.<PROVIDER_ALIAS>
}
}
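For example, when managing two clusters from the same configuration, each cluster module can select its own aliased Kubernetes provider. The aliases below are hypothetical:

provider "kubernetes" {
  alias = "prod"
  # host, cluster_ca_certificate, and exec auth for the prod cluster
}

provider "kubernetes" {
  alias = "staging"
  # host, cluster_ca_certificate, and exec auth for the staging cluster
}

Each cluster module then sets providers = { kubernetes = kubernetes.prod } (or kubernetes.staging) accordingly.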
The cluster module contains a null_resource that runs a webhook on creation and destruction of a given cluster module.
The on-boarding hook validates your cluster deployment and registers it. If it fails on creation, you might need to run terraform apply again for the cluster to register successfully.
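If re-running the whole configuration is undesirable, a targeted apply can retry just the affected module; the module name below is hypothetical:

terraform apply -target='module.us-east-1-my-cluster'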
The off-boarding hook de-registers your cluster deployment. If it fails, you might need to call the webhook manually:
curl -X POST -H 'Content-Type: application/json' -d '{"account_id": "<<AccountID>>","region": "<<Region>>","cluster_name": "<<ClusterName>>", "deployment_id": "<<DeploymentID>>" }' https://console.doit.com/webhooks/v1/eks-metrics/terraform-destroy