Multi-Cluster centralized hub-spoke topology

This example uses Amazon EKS Pod Identity

This example deploys ArgoCD on the Hub cluster (i.e. the management/control-plane cluster). The spoke clusters are registered as remote clusters in the Hub cluster's ArgoCD, and the ArgoCD instance on the Hub cluster deploys addons and workloads to the spoke clusters.

Each spoke cluster gets an app-of-apps ArgoCD Application deployed to it, named workloads-${env}
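Such an Application looks roughly like the following sketch. Every field value here is illustrative: the real manifest is generated on the hub, and its project, source, and destination fields depend on your Terraform configuration.

```shell
# Print an illustrative app-of-apps Application manifest.
# All field values are placeholders, not the generated manifest.
manifest='apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: workloads-dev        # workloads-${env}: dev, staging, or prod
  namespace: argocd
spec:
  project: default           # illustrative; set by the hub configuration
  destination:
    name: hub-spoke-dev      # the registered spoke cluster'
printf '%s\n' "$manifest"
```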

Prerequisites

Before you begin, make sure you have the following command line tools installed:

  • git
  • terraform
  • kubectl
  • argocd
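A quick way to confirm all four tools are on your PATH before starting (a small convenience loop, not part of the example itself):

```shell
# Report which prerequisite CLI tools are installed.
checked=0
for tool in git terraform kubectl argocd; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
  checked=$((checked + 1))
done
```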

Fork the Git Repositories

Fork the Addon GitOps Repo

  1. Fork the addons Git repository (gitops-bridge-argocd-control-plane-template).
  2. Update the following environment variables to point to your fork by changing the default values:
export TF_VAR_gitops_addons_org=https://github.com/gitops-bridge-dev
export TF_VAR_gitops_addons_repo=gitops-bridge-argocd-control-plane-template
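For example, if your fork lives under a GitHub account named your-github-user (a placeholder; substitute your own user or organization), the variables would look like:

```shell
# "your-github-user" is a placeholder: replace it with the GitHub
# user or organization that owns your fork.
export TF_VAR_gitops_addons_org=https://github.com/your-github-user
export TF_VAR_gitops_addons_repo=gitops-bridge-argocd-control-plane-template
```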

Deploy the Hub EKS Cluster

Change directory to hub:

cd hub

Initialize Terraform and deploy the EKS cluster:

terraform init
terraform apply -auto-approve

Retrieve kubectl config, then execute the output command:

terraform output -raw configure_kubectl

Monitor GitOps Progress for Addons

Wait until all the ArgoCD applications' HEALTH STATUS is Healthy. Use Ctrl+C to exit the watch command.

watch kubectl get applications -n argocd

Access ArgoCD on Hub Cluster

To access the ArgoCD UI, run the following command and follow the instructions in its output:

terraform output -raw access_argocd

Deploy the Spoke EKS Clusters

Initialize Terraform and deploy the EKS clusters:

cd ../spokes
./deploy.sh dev
./deploy.sh staging
./deploy.sh prod

Each environment uses its own Terraform workspace.

To access the Terraform outputs, run the following commands for the desired environment:

terraform workspace select dev
terraform output
terraform workspace select staging
terraform output
terraform workspace select prod
terraform output

Retrieve kubectl config, then execute the output command:

terraform output -raw configure_kubectl

Verify the ArgoCD Cluster Secret for each Spoke has the correct IAM Role to be assumed by the Hub Cluster

kubectl get secret -n argocd hub-spoke-dev --template='{{index .data.config | base64decode}}'

Do the same for the other clusters, replacing dev in hub-spoke-dev. The output has an awsAuthConfig section with the clusterName and the roleARN that has write access to the spoke cluster:

{
  "tlsClientConfig": {
    "insecure": false,
    "caData" : "LS0tL...."
  },
  "awsAuthConfig" : {
    "clusterName": "hub-spoke-dev",
    "roleARN": "arn:aws:iam::0123456789:role/hub-spoke-dev-argocd-spoke"
  }
}
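To pull a single field such as the role ARN out of the decoded config, you can append a sed expression to the kubectl command above (jq, if installed, is more robust). The snippet below demonstrates the extraction on the sample JSON shown above:

```shell
# Extract the roleARN field from a cluster-secret config JSON.
# Demonstrated on the sample document; for real use, pipe the
# kubectl command's output into the same sed expression.
config='{"awsAuthConfig": {"clusterName": "hub-spoke-dev", "roleARN": "arn:aws:iam::0123456789:role/hub-spoke-dev-argocd-spoke"}}'
role_arn=$(printf '%s' "$config" | sed -n 's/.*"roleARN": *"\([^"]*\)".*/\1/p')
echo "$role_arn"
```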

Verify the Addons on Spoke Clusters

Verify that the addons are ready:

kubectl get deployment -n kube-system \
  metrics-server

Monitor GitOps Progress for Workloads from Hub Cluster (run on Hub Cluster context)

Watch until all the Workloads ArgoCD Applications are Healthy:

watch kubectl get -n argocd applications

Wait until the ArgoCD Applications' HEALTH STATUS is Healthy. Use Ctrl+C to exit the watch command.

Verify the Application

Verify that the application configuration is present and the pod is running:

kubectl get all -n workload

Container Metrics

Check the application's CPU and memory metrics:

kubectl top pods -n workload

Destroy the Spoke EKS Clusters

To tear down all the resources and the spoke EKS clusters, run the following commands:

./destroy.sh dev
./destroy.sh staging
./destroy.sh prod

Destroy the Hub EKS Cluster

To tear down all the resources and the hub EKS cluster, run the following commands:

cd ../hub
./destroy.sh