This article explains the boilerplate template code I developed for creating EKS clusters with deployments managed by ArgoCD. The boilerplate provides a quick starting point for creating an EKS cluster with ArgoCD and other best practices preconfigured.
The template primarily uses Terraform and Helm for the cluster deployment. It was originally developed as a way for me to learn some Kubernetes and ArgoCD concepts using an Amazon EKS Cluster but quickly morphed into a more generic template that I plan to use for future Amazon EKS-based projects.
This article is targeted at readers who have worked with Kubernetes. A basic understanding of the tools listed in the next section will also be required.
Before diving into the code, here is a brief description of all major tools and frameworks used for this boilerplate:
Amazon EKS is AWS's managed offering for Kubernetes. Amazon EKS provides a Kubernetes environment where AWS primarily manages the control plane. Amazon EKS can also deploy to on-premises infrastructure or AWS Outposts, but this template targets cloud deployments.
Terraform is an Infrastructure as Code tool by HashiCorp that can be used to manage cloud and other infrastructure using declarative configuration files.
Helm is a tool used to package Kubernetes deployments in the form of charts and to manage the deployment of these charts to any Kubernetes cluster. Vanilla Kubernetes deploys resources in a cluster declaratively using YAML or JSON manifest files. Helm builds on top of this with charts: a chart consists of one or more templates plus input parameters (values).
GitHub Actions is a workflow automation framework directly integrated with GitHub. It can be used to define a CI/CD pipeline for any repository.
Copying the exact definition from the ArgoCD website: "Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes." Let's break down each part of this definition (a minimal example follows the list):
GitOps: Files stored in a Git repository define the desired state for resources deployed to a Kubernetes cluster.
Declarative: Declarative configurations such as Helm charts, Kustomize applications, or even plain JSON or YAML Kubernetes manifests are supported by ArgoCD.
Continuous Delivery: Configuration changes pushed to the Git repo are rolled out to the cluster; the sync can be configured to run automatically or be triggered manually.
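To make this concrete, here is a minimal sketch of an ArgoCD Application, expressed as a Terraform kubernetes_manifest resource to stay consistent with the rest of this article. It is not part of the boilerplate; the repository URL, path, and namespaces are placeholders, and in practice an Application like this would typically live as a YAML manifest in the Git repository ArgoCD watches.

# Hypothetical example: an ArgoCD Application pointing at a Git repository.
# ArgoCD reconciles the destination cluster to match the manifests at spec.source.
resource "kubernetes_manifest" "sample_app" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Application"
    metadata = {
      name      = "sample-app" # placeholder name
      namespace = "argocd"
    }
    spec = {
      project = "default"
      source = {
        repoURL        = "https://github.com/example-org/example-repo.git" # placeholder repo
        targetRevision = "HEAD"
        path           = "deploy/sample-app" # placeholder path within the repo
      }
      destination = {
        server    = "https://kubernetes.default.svc" # the cluster ArgoCD itself runs in
        namespace = "sample-app"
      }
      syncPolicy = {
        automated = {
          prune    = true # delete resources that were removed from Git
          selfHeal = true # revert changes made outside of Git
        }
      }
    }
  }
}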
The Terraform code is divided into three modules: a network (VPC) module, a central EKS cluster module that hosts ArgoCD, and a managed EKS cluster module whose deployments are handled by the central cluster's ArgoCD.
Each Terraform module has three GitHub Actions workflows: Terraform Plan, Terraform Apply, and Terraform Destroy. The workflows authenticate to AWS through GitHub's OIDC provider, and an S3 bucket is used as the backend for the Terraform state.
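As a rough illustration of that setup (bucket, repository, and role names below are placeholders rather than the boilerplate's actual values), the state backend and the IAM role that the workflows assume via GitHub's OIDC provider look roughly like this:

# Hypothetical sketch: S3 backend for the Terraform state.
terraform {
  backend "s3" {
    bucket = "example-terraform-state" # placeholder bucket name
    key    = "eks-boilerplate/terraform.tfstate"
    region = "eu-west-1" # placeholder region
  }
}

# Hypothetical sketch: IAM role assumed by GitHub Actions through OIDC.
data "aws_iam_openid_connect_provider" "github" {
  url = "https://token.actions.githubusercontent.com"
}

resource "aws_iam_role" "github_actions" {
  name = "example-github-actions-role" # placeholder role name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = data.aws_iam_openid_connect_provider.github.arn }
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          # Limit the trust to a specific repository (placeholder org/repo).
          "token.actions.githubusercontent.com:sub" = "repo:example-org/example-repo:*"
        }
      }
    }]
  })
}

Scoping the trust policy's sub claim to a single repository (and, if needed, a specific branch or environment) keeps the role from being assumable by workflows in other repositories.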
This Terraform module creates a generic VPC with a configurable number of public and private subnets.
Public Subnets: These subnets route to an Internet Gateway and can host resources with public IPs. They can be configured with either one NAT Gateway shared across all subnets or one NAT Gateway per AZ, and the NAT can be an AWS-managed NAT Gateway or a cheaper NAT instance using fck-NAT (example inputs are shown after the module code below).
Private Subnets: These subnets route to the NAT Gateway (if the public subnets are configured with NAT) or remain isolated if there is no NAT Gateway.
1module "vpc" {2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=e226cc15a7b8f62fd0e108792fea66fa85bcb4b9"34 name = "${var.name_prefix}vpc"5 cidr = var.vpc_cidr67 azs = var.availability_zones8 private_subnets = var.vpc_private_cidrs9 public_subnets = var.vpc_public_cidrs10 map_public_ip_on_launch = true1112 enable_dns_support = true13 enable_dns_hostnames = true1415 enable_nat_gateway = var.use_managed_nat16 single_nat_gateway = var.use_managed_nat ? !var.use_ha_nat : null17 one_nat_gateway_per_az = var.use_managed_nat ? var.use_ha_nat : null18}1920module "fcknat" {21 count = var.use_managed_nat ? 0 : (var.use_ha_nat ? length(var.availability_zones) : 1)22 source = "git::https://github.com/RaJiska/terraform-aws-fck-nat.git?ref=9377bf9247c96318b99273eb2978d1afce8cf0eb"23 name = "fck-nat"24 vpc_id = module.vpc.vpc_id25 subnet_id = module.vpc.public_subnets[count.index]26 ha_mode = true # Enables high-availability mode2728 update_route_tables = true29 route_tables_ids = { for idx, route_table_id in module.vpc.private_route_table_ids : idx => route_table_id }30}
Major Dependencies: the terraform-aws-vpc module (terraform-aws-modules) and the terraform-aws-fck-nat module (RaJiska), both pinned to specific commits in the source URLs above.
This Terraform module creates the central Amazon EKS cluster with ArgoCD deployed on it.
EKS Cluster: The configuration for the cluster is opinionated. Some of the major points: the API endpoint can be public (restricted to an allowlist of CIDRs) or private; the core add-ons (CoreDNS, kube-proxy, VPC CNI, and the EKS Pod Identity Agent) are installed at their most recent versions, with optional VPC CNI custom networking; worker nodes run in a single managed node group of three m6i.large instances in the private subnets; and cluster admin access is granted to the cluster creator and a configurable SSO role via access entries. Depending on the exact requirements, the configuration can be updated.
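As an illustration, the cluster-level settings referenced in the module call below could be supplied through variables along these lines; the object shape is inferred from the attributes used in the code, and the values are placeholders.

# Illustrative values only; adjust names, versions, and CIDRs as required.
central_eks_cluster = {
  cluster_name                      = "central-cluster"
  cluster_version                   = "1.30"
  publicly_accessible_cluster       = true
  publicly_accessible_cluster_cidrs = ["203.0.113.0/24"] # CIDRs allowed to reach the public API endpoint
  cluster_support_type              = "STANDARD"         # EKS upgrade policy support type
}

sso_cluster_admin_role_name = "AWSReservedSSO_AdministratorAccess_0123456789abcdef" # placeholder SSO role name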
1module "central_eks" {2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=c60b70fbc80606eb4ed8cf47063ac6ed0d8dd435"34 cluster_name = var.central_eks_cluster.cluster_name5 cluster_version = var.central_eks_cluster.cluster_version67 cluster_endpoint_public_access = var.central_eks_cluster.publicly_accessible_cluster8 cluster_endpoint_public_access_cidrs = var.central_eks_cluster.publicly_accessible_cluster ? var.central_eks_cluster.publicly_accessible_cluster_cidrs : null910 cluster_addons = {11 coredns = {12 most_recent = true13 }14 eks-pod-identity-agent = {15 before_compute = true16 most_recent = true17 }18 kube-proxy = {19 before_compute = true20 most_recent = true21 }22 vpc-cni = {23 before_compute = true24 most_recent = true25 configuration_values = jsonencode({26 eniConfig = var.eks_vpc_cni_custom_networking ? {27 create = true28 region = data.aws_region.current.name29 subnets = { for az, subnet_id in local.cni_az_subnet_map : az => { securityGroups : [module.central_eks.node_security_group_id], id : subnet_id } }30 } : null31 env = {32 AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = var.eks_vpc_cni_custom_networking ? "true" : "false"33 ENI_CONFIG_LABEL_DEF = "topology.kubernetes.io/zone"34 } })35 }36 }3738 vpc_id = data.aws_vpc.vpc.id39 subnet_ids = local.private_subnet_ids40 control_plane_subnet_ids = local.public_subnet_ids41 cluster_ip_family = var.cluster_ip_family4243 # Cluster access entry44 # To add the current caller identity as an administrator45 enable_cluster_creator_admin_permissions = true4647 eks_managed_node_groups = {48 nodegroup = {49 instance_types = ["m6i.large"]5051 min_size = 352 max_size = 353 desired_size = 35455 subnet_ids = local.private_subnet_ids56 }57 }58 access_entries = {59 ssorole = {60 kubernetes_groups = []61 principal_arn = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/aws-reserved/sso.amazonaws.com/${var.sso_cluster_admin_role_name}"6263 policy_associations = {64 example = {65 policy_arn = "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"66 access_scope = {67 type = "cluster"68 }69 }70 }71 }72 }73 cluster_upgrade_policy = {74 support_type = var.central_eks_cluster.cluster_support_type75 }76}
AWS Certificate Manager (ACM) Certificate for ArgoCD: The SSL certificate in ACM is created using Terraform, and validation records are added to Route 53.
1resource "aws_acm_certificate" "argocd" {2 domain_name = local.argocd_hostname3 validation_method = "DNS"4 lifecycle {5 create_before_destroy = true6 }7}89data "aws_route53_zone" "argocd" {10 name = var.argocd_domainname11 private_zone = false12}1314resource "aws_route53_record" "argocd" {15 for_each = {16 for dvo in aws_acm_certificate.argocd.domain_validation_options : dvo.domain_name => {17 name = dvo.resource_record_name18 record = dvo.resource_record_value19 type = dvo.resource_record_type20 }21 }2223 allow_overwrite = true24 name = each.value.name25 records = [each.value.record]26 ttl = 6027 type = each.value.type28 zone_id = data.aws_route53_zone.argocd.zone_id29}3031resource "aws_acm_certificate_validation" "argocd" {32 certificate_arn = aws_acm_certificate.argocd.arn33 validation_record_fqdns = [for record in aws_route53_record.argocd : record.fqdn]34}
ArgoCD Service Account: An IAM role is created for the ArgoCD Application Controller and ArgoCD Server Service Accounts using IAM Roles for Service Accounts (IRSA). These Service Accounts need access to the other clusters where ArgoCD manages deployments; this is achieved by having the IRSA role assume another IAM role that has cluster admin permissions on the target cluster.
1module "controller_role" {2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-iam.git//modules/iam-role-for-service-accounts-eks?ref=89fe17a6549728f1dc7e7a8f7b707486dfb45d89"34 role_name = "${var.name_prefix}ManagementRole"56 policy_name_prefix = var.name_prefix78 oidc_providers = {9 main = {10 provider_arn = module.central_eks.oidc_provider_arn11 namespace_service_accounts = ["argocd:argocd-application-controller", "argocd:argocd-server"]12 }13 }14}
EKS Blueprint Addons: The following addons are added to the cluster using the Blueprints module: ArgoCD, the AWS Load Balancer Controller, Metrics Server, and ExternalDNS.
1module "eks_blueprints_addons" {2 depends_on = [module.central_eks]3 source = "git::https://github.com/aws-ia/terraform-aws-eks-blueprints-addons.git?ref=a9963f4a0e168f73adb033be594ac35868696a91"45 cluster_name = module.central_eks.cluster_name6 cluster_endpoint = module.central_eks.cluster_endpoint7 cluster_version = module.central_eks.cluster_version8 oidc_provider_arn = module.central_eks.oidc_provider_arn910 enable_argocd = true11 enable_aws_load_balancer_controller = true12 enable_metrics_server = true13 enable_external_dns = true14 external_dns_route53_zone_arns = [data.aws_route53_zone.argocd.arn]15 argocd = {16 chart_version = "7.4.5"17 values = [file("${path.module}/helmvalues/argocd.yaml")]18 set = [19 {20 name = "controller.serviceAccount.annotations.eks\.amazonaws\.com/role-arn"21 value = module.controller_role.iam_role_arn22 },23 {24 name = "server.serviceAccount.annotations.eks\.amazonaws\.com/role-arn"25 value = module.controller_role.iam_role_arn26 }27 ]28 }29}
ArgoCD ingress (AWS Application Load Balancer): An Ingress object with routes for HTTPS (UI) and gRPC (CLI). The AWS Load Balancer Controller creates an AWS Application Load Balancer corresponding to the ingress object and associates the ACM Certificate with the Load Balancer. ExternalDNS reads the domain name specified for the Load Balancer and creates the required Route53 records. A sleep is added to wait for the AWS Load Balancer Controller deployment to complete before the ingress deployment is attempted.
1resource "time_sleep" "wait_lb_controller_deployment" {2 depends_on = [module.eks_blueprints_addons]3 create_duration = "60s"4}56resource "helm_release" "argocdingress" {7 depends_on = [aws_acm_certificate_validation.argocd, time_sleep.wait_lb_controller_deployment]8 name = "argocdingress"9 chart = "${path.module}/../charts/argocdingress"10 namespace = "argocd"11 version = "0.10.0"1213 set {14 name = "argocdlb.hostname"15 value = local.argocd_hostname16 }1718 set {19 name = "argocdlb.certificatearn"20 value = aws_acm_certificate.argocd.arn21 }2223 set {24 name = "argocdlb.subnetlist"25 value = join("\,", local.public_subnet_ids)26 }27}
Major Dependencies: the terraform-aws-eks module and the iam-role-for-service-accounts-eks submodule of terraform-aws-iam (both from terraform-aws-modules), the aws-ia terraform-aws-eks-blueprints-addons module, and the Argo CD Helm chart (version 7.4.5).
This Terraform module creates an Amazon EKS cluster whose deployments are managed by the ArgoCD server running on the central EKS cluster.
EKS Cluster: The configuration for the cluster is opinionated and largely mirrors the central cluster: a configurable public or private API endpoint, the same set of core add-ons, and a single managed node group of three m6i.large instances in the private subnets. In addition, the ArgoCD cluster admin IAM role (described below) is granted cluster admin access via an access entry, and a security group rule allows the central cluster's nodes to reach this cluster's API endpoint. Depending on the exact requirements, the configuration can be updated.
1module "managed_eks" {2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=c60b70fbc80606eb4ed8cf47063ac6ed0d8dd435"34 cluster_name = var.managed_eks_cluster.cluster_name5 cluster_version = var.managed_eks_cluster.cluster_version67 cluster_endpoint_public_access = var.managed_eks_cluster.publicly_accessible_cluster8 cluster_endpoint_public_access_cidrs = var.managed_eks_cluster.publicly_accessible_cluster ? var.managed_eks_cluster.publicly_accessible_cluster_cidrs : null910 cluster_addons = {11 coredns = {12 most_recent = true13 }14 eks-pod-identity-agent = {15 before_compute = true16 most_recent = true17 }18 kube-proxy = {19 before_compute = true20 most_recent = true21 }22 vpc-cni = {23 before_compute = true24 most_recent = true25 configuration_values = jsonencode({26 eniConfig = var.eks_vpc_cni_custom_networking ? {27 create = true28 region = data.aws_region.current.name29 subnets = { for az, subnet_id in local.cni_az_subnet_map : az => { securityGroups : [module.managed_eks.node_security_group_id], id : subnet_id } }30 } : null31 env = {32 AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = var.eks_vpc_cni_custom_networking ? "true" : "false"33 ENI_CONFIG_LABEL_DEF = "topology.kubernetes.io/zone"34 } })35 }36 }3738 vpc_id = data.aws_vpc.vpc.id39 subnet_ids = local.private_subnet_ids40 control_plane_subnet_ids = local.public_subnet_ids41 cluster_ip_family = var.cluster_ip_family4243 # Cluster access entry44 # To add the current caller identity as an administrator45 enable_cluster_creator_admin_permissions = true4647 eks_managed_node_groups = {48 nodegroup = {49 instance_types = ["m6i.large"]5051 min_size = 352 max_size = 353 desired_size = 35455 subnet_ids = local.private_subnet_ids56 }57 }58 access_entries = {59 ssorole = {60 kubernetes_groups = []61 principal_arn = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/aws-reserved/sso.amazonaws.com/${var.sso_cluster_admin_role_name}"6263 policy_associations = {64 example = {65 policy_arn = "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"66 access_scope = {67 type = "cluster"68 }69 }70 }71 }72 argocdrole = {73 kubernetes_groups = []74 principal_arn = aws_iam_role.argocd_admin_role.arn7576 policy_associations = {77 example = {78 policy_arn = "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"79 access_scope = {80 type = "cluster"81 }82 }83 }84 }85 }86 cluster_upgrade_policy = {87 support_type = var.managed_eks_cluster.cluster_support_type88 }89 # Allow central cluster to access api endpoint90 cluster_security_group_additional_rules = {91 "central_cluster_to_managed_cluster" = {92 description = "cluster api access"93 protocol = "tcp"94 from_port = 44395 to_port = 44396 type = "ingress"97 source_security_group_id = data.aws_security_group.central_cluster_node.id98 }99 }100}
EKS Blueprint Addons: The following addons can be added to the cluster using the Blueprints module: the AWS Load Balancer Controller, Metrics Server, and ExternalDNS.
The addons listed here are optional, and their deployment can be controlled through Terraform variables. The Blueprints module is highly configurable, and more addons supported by the module can be added if required.
1data "aws_route53_zone" "route53_zones" {2 for_each = toset(var.route53_zone_names)3 name = each.value4 private_zone = false5}67module "eks_blueprints_addons" {8 depends_on = [module.managed_eks]9 source = "git::https://github.com/aws-ia/terraform-aws-eks-blueprints-addons.git?ref=a9963f4a0e168f73adb033be594ac35868696a91"1011 cluster_name = module.managed_eks.cluster_name12 cluster_endpoint = module.managed_eks.cluster_endpoint13 cluster_version = module.managed_eks.cluster_version14 oidc_provider_arn = module.managed_eks.oidc_provider_arn1516 enable_aws_load_balancer_controller = var.enable_aws_load_balancer_controller17 enable_metrics_server = var.enable_metrics_server18 enable_external_dns = var.enable_external_dns19 external_dns_route53_zone_arns = var.enable_external_dns ? data.aws_route53_zone.route53_zones[*].arn : []20}
Cluster Admin IAM Role: An IAM role that is added as a cluster admin on the managed EKS cluster. Its trust policy allows it to be assumed by the ArgoCD Service Account's IRSA role, and an IAM policy is attached to that IRSA role so it is allowed to assume the cluster admin role.
1resource "aws_iam_role" "argocd_admin_role" {2 name = "${var.name_prefix}ArgoCDAdminRole"34 assume_role_policy = jsonencode({5 Version = "2012-10-17"6 Statement = [7 {8 Action = "sts:AssumeRole"9 Effect = "Allow"10 Principal = {11 AWS = data.aws_iam_role.argocd_service_account.arn12 }13 },14 ]15 })16}1718resource "aws_iam_policy" "argocd_admin_assume_role_policy" {19 name = "${var.name_prefix}ArgoCDAdminAssumeRole"20 description = "Allow ArgoCD service account to assume cluster admin role"21 policy = jsonencode({22 Version = "2012-10-17"23 Statement = [24 {25 Action = [26 "sts:AssumeRole",27 ]28 Effect = "Allow"29 Resource = aws_iam_role.argocd_admin_role.arn30 },31 ]32 })33}3435resource "aws_iam_role_policy_attachment" "argocd_admin_assume_role_policy_attachment" {36 role = data.aws_iam_role.argocd_service_account.name37 policy_arn = aws_iam_policy.argocd_admin_assume_role_policy.arn38}
Secret with Managed Cluster details: The secret consists of the cluster name, the cluster API endpoint, the cluster ARN, the cluster certificate authority data, and the ARN of the cluster admin IAM role that ArgoCD assumes.
ArgoCD uses these details to connect to the cluster and deploy to it.
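For illustration, ArgoCD expects these details in a Secret labelled argocd.argoproj.io/secret-type: cluster, with name, server, and a config entry containing an awsAuthConfig block. A rough sketch of that structure, expressed directly as a Terraform resource rather than through the local Helm chart the template actually uses (so the exact field mapping in the chart may differ), would look like this:

# Hypothetical sketch of the ArgoCD cluster secret; the boilerplate renders an
# equivalent object through the local argocdmanagedcluster chart shown below.
resource "kubernetes_secret_v1" "managed_cluster" {
  metadata {
    name      = "managed-cluster" # placeholder secret name
    namespace = "argocd"
    labels = {
      "argocd.argoproj.io/secret-type" = "cluster" # tells ArgoCD this secret describes a cluster
    }
  }

  data = {
    name   = module.managed_eks.cluster_name
    server = module.managed_eks.cluster_endpoint
    config = jsonencode({
      awsAuthConfig = {
        clusterName = module.managed_eks.cluster_name
        roleARN     = aws_iam_role.argocd_admin_role.arn # assumed by the ArgoCD service account
      }
      tlsClientConfig = {
        caData = module.managed_eks.cluster_certificate_authority_data
      }
    })
  }
}

ArgoCD then uses the role ARN in awsAuthConfig to obtain an IAM-authenticated token for the managed cluster's API endpoint.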
1resource "helm_release" "argocdmanagedcluster" {2 provider = helm.argocdcluster3 depends_on = [module.managed_eks, aws_iam_role_policy_attachment.argocd_admin_assume_role_policy_attachment]4 name = module.managed_eks.cluster_name5 chart = "${path.module}/../charts/argocdmanagedcluster"6 namespace = "argocd"7 version = "0.1.0"8 create_namespace = true9 set {10 name = "cluster.name"11 value = module.managed_eks.cluster_name12 }13 set {14 name = "cluster.role_arn"15 value = aws_iam_role.argocd_admin_role.arn16 }17 set {18 name = "cluster.ca_data"19 value = module.managed_eks.cluster_certificate_authority_data20 }21 set {22 name = "cluster.arn"23 value = module.managed_eks.cluster_arn24 }25 set {26 name = "cluster.endpoint"27 value = module.managed_eks.cluster_endpoint28 }29}
Major Dependencies: the terraform-aws-eks module (terraform-aws-modules) and the aws-ia terraform-aws-eks-blueprints-addons module, both pinned to specific commits in the source URLs above.