Amazon EKS ArgoCD boilerplate

Published on 2024-09-26
Last modified on 2024-09-26

Introduction

This article explains the boilerplate template I developed for creating Amazon EKS clusters with deployments managed by ArgoCD. The boilerplate provides a quick starting point for creating an EKS cluster with ArgoCD and other best practices preconfigured.

The template primarily uses Terraform and Helm for the cluster deployment. It was originally developed as a way for me to learn some Kubernetes and ArgoCD concepts using an Amazon EKS Cluster but quickly morphed into a more generic template that I plan to use for future Amazon EKS-based projects.

Target Audience

This article is targeted at readers who have worked with Kubernetes. A basic understanding of the tools listed in the next section will also be required.

Technology Stack

Before diving into the code, here is a brief description of all major tools and frameworks used for this boilerplate:

Amazon Elastic Kubernetes Service (Amazon EKS)

Amazon EKS is AWS's managed offering for Kubernetes: it provides a Kubernetes environment where AWS manages most of the control plane. Amazon EKS can also deploy to on-premises infrastructure or AWS Outposts, but this template targets cloud deployments.

Terraform

Terraform is an Infrastructure as Code tool by HashiCorp that manages cloud and other infrastructure using declarative configuration files.

Helm

Helm is a tool that packages Kubernetes deployments as charts and manages the deployment of those charts to any Kubernetes cluster. Vanilla Kubernetes deploys resources declaratively using YAML or JSON manifest files; Helm builds on top of this by defining charts, which consist of one or more templates plus input parameters (values).

GitHub Actions

GitHub Actions is a workflow automation framework directly integrated with GitHub. It can define CI/CD pipelines for any repository.

ArgoCD

Copying the exact definition from the ArgoCD website: "Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes." Let's analyze each part of this definition:

  • GitOps: Files stored in a Git repository define the desired state of the resources deployed to a Kubernetes cluster.
  • Declarative: ArgoCD supports declarative configurations such as Helm charts, Kustomize applications, or plain YAML or JSON Kubernetes manifests.
  • Continuous Delivery: Configuration changes pushed to the Git repository are deployed to the cluster, either automatically (auto sync) or after a manual sync.
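
Not part of the boilerplate, but as a minimal sketch of what such a declarative definition can look like, here is a hypothetical ArgoCD Application expressed with the Terraform kubernetes provider's kubernetes_manifest resource. The repository URL, path, and names are placeholders; the same definition is more commonly written as a plain YAML manifest.

resource "kubernetes_manifest" "example_application" {
  # Hypothetical ArgoCD Application: watch a Git path and sync it into a namespace
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Application"
    metadata = {
      name      = "example-app" # placeholder name
      namespace = "argocd"
    }
    spec = {
      project = "default"
      source = {
        repoURL        = "https://github.com/example-org/example-repo.git" # placeholder repository
        targetRevision = "HEAD"
        path           = "charts/example-app"
      }
      destination = {
        server    = "https://kubernetes.default.svc"
        namespace = "example"
      }
      syncPolicy = {
        automated = {
          prune    = true # remove resources that were deleted from Git
          selfHeal = true # revert out-of-band changes made in the cluster
        }
      }
    }
  }
}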

Boilerplate Template Structure

The Terraform code is divided into three modules:

  • VPC Deployment: Create VPC Networking resources required for the clusters.
  • Central EKS Cluster: ArgoCD is deployed in this cluster.
  • Managed EKS Cluster: ArgoCD manages deployments in this cluster. Multiple managed clusters can be created.

Each Terraform module has three workflows: Terraform Plan, Terraform Apply, and Terraform Destroy. The GitHub Actions workflows authenticate to AWS through GitHub's OIDC provider, and an S3 bucket is used as the backend for storing the Terraform state.
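
As an illustration of that setup (a minimal sketch, not taken from the boilerplate; the bucket name, account ID, and repository are placeholders), the backend and the OIDC trust relationship look roughly like this:

# S3 backend for the Terraform state
terraform {
  backend "s3" {
    bucket = "example-terraform-state-bucket" # placeholder bucket name
    key    = "eks-argocd/vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# IAM role assumed by the GitHub Actions workflows via GitHub's OIDC provider,
# avoiding long-lived AWS credentials in repository secrets
data "aws_iam_policy_document" "github_oidc_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = ["arn:aws:iam::111111111111:oidc-provider/token.actions.githubusercontent.com"] # placeholder account
    }
    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:example-org/example-repo:*"] # placeholder repository filter
    }
  }
}

resource "aws_iam_role" "github_actions" {
  name               = "ExampleGitHubActionsRole" # placeholder role name
  assume_role_policy = data.aws_iam_policy_document.github_oidc_trust.json
}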

VPC Deployment Repository Folder

The Terraform module creates a generic VPC with a configurable number of public and private subnets.

Public Subnets: These subnets route to an Internet Gateway and can be used to create resources with public IPs. They can be configured with either a single NAT Gateway shared across all subnets or one NAT Gateway per AZ. The NAT Gateway can be an AWS-managed NAT Gateway or a cheaper NAT instance using fck-NAT.

Private Subnets: These subnets route to a NAT Gateway (if the public subnets are configured with NAT) or are isolated if there is no NAT Gateway.

1module "vpc" {
2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=e226cc15a7b8f62fd0e108792fea66fa85bcb4b9"
3
4 name = "${var.name_prefix}vpc"
5 cidr = var.vpc_cidr
6
7 azs = var.availability_zones
8 private_subnets = var.vpc_private_cidrs
9 public_subnets = var.vpc_public_cidrs
10 map_public_ip_on_launch = true
11
12 enable_dns_support = true
13 enable_dns_hostnames = true
14
15 enable_nat_gateway = var.use_managed_nat
16 single_nat_gateway = var.use_managed_nat ? !var.use_ha_nat : null
17 one_nat_gateway_per_az = var.use_managed_nat ? var.use_ha_nat : null
18}
19
20module "fcknat" {
21 count = var.use_managed_nat ? 0 : (var.use_ha_nat ? length(var.availability_zones) : 1)
22 source = "git::https://github.com/RaJiska/terraform-aws-fck-nat.git?ref=9377bf9247c96318b99273eb2978d1afce8cf0eb"
23 name = "fck-nat"
24 vpc_id = module.vpc.vpc_id
25 subnet_id = module.vpc.public_subnets[count.index]
26 ha_mode = true # Enables high-availability mode
27
28 update_route_tables = true
29 route_tables_ids = { for idx, route_table_id in module.vpc.private_route_table_ids : idx => route_table_id }
30}

Major Dependencies: terraform-aws-modules/terraform-aws-vpc and RaJiska/terraform-aws-fck-nat (both pinned to specific commits in the module sources above).

Central EKS Cluster Repository Folder

ArgoCDCluster

The Terraform module creates an Amazon EKS Cluster with an ArgoCD deployment.

EKS Cluster: The configuration for the cluster is opinionated. Some of the major points:

  • CoreDNS, EKS Pod Identity Agent, kube-proxy, and VPC CNI are configured as EKS cluster addons. The VPC CNI is configured to use custom networking.
  • An EKS managed node group with three instances, one per AZ, is created.

Depending on the exact requirements, the configuration can be updated.

1module "central_eks" {
2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=c60b70fbc80606eb4ed8cf47063ac6ed0d8dd435"
3
4 cluster_name = var.central_eks_cluster.cluster_name
5 cluster_version = var.central_eks_cluster.cluster_version
6
7 cluster_endpoint_public_access = var.central_eks_cluster.publicly_accessible_cluster
8 cluster_endpoint_public_access_cidrs = var.central_eks_cluster.publicly_accessible_cluster ? var.central_eks_cluster.publicly_accessible_cluster_cidrs : null
9
10 cluster_addons = {
11 coredns = {
12 most_recent = true
13 }
14 eks-pod-identity-agent = {
15 before_compute = true
16 most_recent = true
17 }
18 kube-proxy = {
19 before_compute = true
20 most_recent = true
21 }
22 vpc-cni = {
23 before_compute = true
24 most_recent = true
25 configuration_values = jsonencode({
26 eniConfig = var.eks_vpc_cni_custom_networking ? {
27 create = true
28 region = data.aws_region.current.name
29 subnets = { for az, subnet_id in local.cni_az_subnet_map : az => { securityGroups : [module.central_eks.node_security_group_id], id : subnet_id } }
30 } : null
31 env = {
32 AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = var.eks_vpc_cni_custom_networking ? "true" : "false"
33 ENI_CONFIG_LABEL_DEF = "topology.kubernetes.io/zone"
34 } })
35 }
36 }
37
38 vpc_id = data.aws_vpc.vpc.id
39 subnet_ids = local.private_subnet_ids
40 control_plane_subnet_ids = local.public_subnet_ids
41 cluster_ip_family = var.cluster_ip_family
42
43 # Cluster access entry
44 # To add the current caller identity as an administrator
45 enable_cluster_creator_admin_permissions = true
46
47 eks_managed_node_groups = {
48 nodegroup = {
49 instance_types = ["m6i.large"]
50
51 min_size = 3
52 max_size = 3
53 desired_size = 3
54
55 subnet_ids = local.private_subnet_ids
56 }
57 }
58 access_entries = {
59 ssorole = {
60 kubernetes_groups = []
61 principal_arn = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/aws-reserved/sso.amazonaws.com/${var.sso_cluster_admin_role_name}"
62
63 policy_associations = {
64 example = {
65 policy_arn = "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
66 access_scope = {
67 type = "cluster"
68 }
69 }
70 }
71 }
72 }
73 cluster_upgrade_policy = {
74 support_type = var.central_eks_cluster.cluster_support_type
75 }
76}

AWS Certificate Manager (ACM) Certificate for ArgoCD: The SSL certificate in ACM is created using Terraform, and validation records are added to Route 53.

1resource "aws_acm_certificate" "argocd" {
2 domain_name = local.argocd_hostname
3 validation_method = "DNS"
4 lifecycle {
5 create_before_destroy = true
6 }
7}
8
9data "aws_route53_zone" "argocd" {
10 name = var.argocd_domainname
11 private_zone = false
12}
13
14resource "aws_route53_record" "argocd" {
15 for_each = {
16 for dvo in aws_acm_certificate.argocd.domain_validation_options : dvo.domain_name => {
17 name = dvo.resource_record_name
18 record = dvo.resource_record_value
19 type = dvo.resource_record_type
20 }
21 }
22
23 allow_overwrite = true
24 name = each.value.name
25 records = [each.value.record]
26 ttl = 60
27 type = each.value.type
28 zone_id = data.aws_route53_zone.argocd.zone_id
29}
30
31resource "aws_acm_certificate_validation" "argocd" {
32 certificate_arn = aws_acm_certificate.argocd.arn
33 validation_record_fqdns = [for record in aws_route53_record.argocd : record.fqdn]
34}

ArgoCD Service Account: An IAM role is created for the ArgoCD Application Controller and ArgoCD Server Service Accounts using IAM Roles for Service Accounts (IRSA). These Service Accounts have access to the other clusters where ArgoCD manages deployments: the underlying IAM role assumes another IAM role with cluster admin permissions on each managed cluster.

1module "controller_role" {
2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-iam.git//modules/iam-role-for-service-accounts-eks?ref=89fe17a6549728f1dc7e7a8f7b707486dfb45d89"
3
4 role_name = "${var.name_prefix}ManagementRole"
5
6 policy_name_prefix = var.name_prefix
7
8 oidc_providers = {
9 main = {
10 provider_arn = module.central_eks.oidc_provider_arn
11 namespace_service_accounts = ["argocd:argocd-application-controller", "argocd:argocd-server"]
12 }
13 }
14}

EKS Blueprint Addons: The following addons are added to the cluster using the Blueprints module: ArgoCD, AWS Load Balancer Controller, Metrics Server, and ExternalDNS.

1module "eks_blueprints_addons" {
2 depends_on = [module.central_eks]
3 source = "git::https://github.com/aws-ia/terraform-aws-eks-blueprints-addons.git?ref=a9963f4a0e168f73adb033be594ac35868696a91"
4
5 cluster_name = module.central_eks.cluster_name
6 cluster_endpoint = module.central_eks.cluster_endpoint
7 cluster_version = module.central_eks.cluster_version
8 oidc_provider_arn = module.central_eks.oidc_provider_arn
9
10 enable_argocd = true
11 enable_aws_load_balancer_controller = true
12 enable_metrics_server = true
13 enable_external_dns = true
14 external_dns_route53_zone_arns = [data.aws_route53_zone.argocd.arn]
15 argocd = {
16 chart_version = "7.4.5"
17 values = [file("${path.module}/helmvalues/argocd.yaml")]
18 set = [
19 {
20 name = "controller.serviceAccount.annotations.eks\.amazonaws\.com/role-arn"
21 value = module.controller_role.iam_role_arn
22 },
23 {
24 name = "server.serviceAccount.annotations.eks\.amazonaws\.com/role-arn"
25 value = module.controller_role.iam_role_arn
26 }
27 ]
28 }
29}

ArgoCD Ingress (AWS Application Load Balancer): An Ingress object with routes for HTTPS (UI) and gRPC (CLI). The AWS Load Balancer Controller creates an AWS Application Load Balancer for the Ingress object and associates the ACM certificate with the load balancer. ExternalDNS reads the hostname specified for the load balancer and creates the required Route 53 records. A sleep is added to wait for the AWS Load Balancer Controller deployment to complete before the Ingress deployment is attempted. An illustrative sketch of the rendered Ingress follows the snippet below.

1resource "time_sleep" "wait_lb_controller_deployment" {
2 depends_on = [module.eks_blueprints_addons]
3 create_duration = "60s"
4}
5
6resource "helm_release" "argocdingress" {
7 depends_on = [aws_acm_certificate_validation.argocd, time_sleep.wait_lb_controller_deployment]
8 name = "argocdingress"
9 chart = "${path.module}/../charts/argocdingress"
10 namespace = "argocd"
11 version = "0.10.0"
12
13 set {
14 name = "argocdlb.hostname"
15 value = local.argocd_hostname
16 }
17
18 set {
19 name = "argocdlb.certificatearn"
20 value = aws_acm_certificate.argocd.arn
21 }
22
23 set {
24 name = "argocdlb.subnetlist"
25 value = join("\,", local.public_subnet_ids)
26 }
27}
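
The argocdingress chart itself lives in the boilerplate repository, but for orientation, the sketch below shows roughly what a comparable Ingress looks like when expressed directly in Terraform. Only the HTTPS/UI route is shown, the actual chart also adds a gRPC route for the CLI, and the resource name and backend service name are assumptions that depend on the ArgoCD Helm release.

# Illustrative only: an Ingress comparable to what the argocdingress chart renders
resource "kubernetes_ingress_v1" "argocd" {
  metadata {
    name      = "argocd-ingress" # placeholder name
    namespace = "argocd"
    annotations = {
      "alb.ingress.kubernetes.io/scheme"          = "internet-facing"
      "alb.ingress.kubernetes.io/target-type"     = "ip"
      "alb.ingress.kubernetes.io/listen-ports"    = "[{\"HTTPS\":443}]"
      "alb.ingress.kubernetes.io/certificate-arn" = aws_acm_certificate.argocd.arn
      "alb.ingress.kubernetes.io/subnets"         = join(",", local.public_subnet_ids)
      # argocd-server typically serves TLS itself, so the ALB talks HTTPS to the backend
      "alb.ingress.kubernetes.io/backend-protocol" = "HTTPS"
    }
  }
  spec {
    ingress_class_name = "alb"
    rule {
      host = local.argocd_hostname # ExternalDNS creates the Route 53 record from this host
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "argo-cd-argocd-server" # depends on the ArgoCD Helm release name
              port {
                number = 443
              }
            }
          }
        }
      }
    }
  }
}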

Major Dependencies: terraform-aws-modules/terraform-aws-eks, the iam-role-for-service-accounts-eks submodule of terraform-aws-modules/terraform-aws-iam, and aws-ia/terraform-aws-eks-blueprints-addons (all pinned to specific commits in the module sources above).

Managed EKS Cluster Repository Folder

ArgoCDManagedCluster

The Terraform module creates an Amazon EKS Cluster where deployments can be done using the ArgoCD server deployed to the Central EKS Cluster.

EKS Cluster: The configuration for the cluster is opinionated. Some of the major points:

  • CoreDNS, EKS Pod Identity Agent, kube-proxy, and VPC CNI are configured as EKS cluster addons. The VPC CNI is configured to use custom networking.
  • An EKS managed node group with three instances, one per AZ, is created.

Depending on the exact requirements, the configuration can be updated.

1module "managed_eks" {
2 source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=c60b70fbc80606eb4ed8cf47063ac6ed0d8dd435"
3
4 cluster_name = var.managed_eks_cluster.cluster_name
5 cluster_version = var.managed_eks_cluster.cluster_version
6
7 cluster_endpoint_public_access = var.managed_eks_cluster.publicly_accessible_cluster
8 cluster_endpoint_public_access_cidrs = var.managed_eks_cluster.publicly_accessible_cluster ? var.managed_eks_cluster.publicly_accessible_cluster_cidrs : null
9
10 cluster_addons = {
11 coredns = {
12 most_recent = true
13 }
14 eks-pod-identity-agent = {
15 before_compute = true
16 most_recent = true
17 }
18 kube-proxy = {
19 before_compute = true
20 most_recent = true
21 }
22 vpc-cni = {
23 before_compute = true
24 most_recent = true
25 configuration_values = jsonencode({
26 eniConfig = var.eks_vpc_cni_custom_networking ? {
27 create = true
28 region = data.aws_region.current.name
29 subnets = { for az, subnet_id in local.cni_az_subnet_map : az => { securityGroups : [module.managed_eks.node_security_group_id], id : subnet_id } }
30 } : null
31 env = {
32 AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = var.eks_vpc_cni_custom_networking ? "true" : "false"
33 ENI_CONFIG_LABEL_DEF = "topology.kubernetes.io/zone"
34 } })
35 }
36 }
37
38 vpc_id = data.aws_vpc.vpc.id
39 subnet_ids = local.private_subnet_ids
40 control_plane_subnet_ids = local.public_subnet_ids
41 cluster_ip_family = var.cluster_ip_family
42
43 # Cluster access entry
44 # To add the current caller identity as an administrator
45 enable_cluster_creator_admin_permissions = true
46
47 eks_managed_node_groups = {
48 nodegroup = {
49 instance_types = ["m6i.large"]
50
51 min_size = 3
52 max_size = 3
53 desired_size = 3
54
55 subnet_ids = local.private_subnet_ids
56 }
57 }
58 access_entries = {
59 ssorole = {
60 kubernetes_groups = []
61 principal_arn = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/aws-reserved/sso.amazonaws.com/${var.sso_cluster_admin_role_name}"
62
63 policy_associations = {
64 example = {
65 policy_arn = "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
66 access_scope = {
67 type = "cluster"
68 }
69 }
70 }
71 }
72 argocdrole = {
73 kubernetes_groups = []
74 principal_arn = aws_iam_role.argocd_admin_role.arn
75
76 policy_associations = {
77 example = {
78 policy_arn = "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
79 access_scope = {
80 type = "cluster"
81 }
82 }
83 }
84 }
85 }
86 cluster_upgrade_policy = {
87 support_type = var.managed_eks_cluster.cluster_support_type
88 }
89 # Allow central cluster to access api endpoint
90 cluster_security_group_additional_rules = {
91 "central_cluster_to_managed_cluster" = {
92 description = "cluster api access"
93 protocol = "tcp"
94 from_port = 443
95 to_port = 443
96 type = "ingress"
97 source_security_group_id = data.aws_security_group.central_cluster_node.id
98 }
99 }
100}

EKS Blueprint Addons: The following addons can be added to the cluster using the Blueprints module: AWS Load Balancer Controller, Metrics Server, and ExternalDNS.

The addons listed here are optional, and their deployment is controlled through Terraform variables. The Blueprints module is highly configurable, and more of the addons it supports can be added if required.

1data "aws_route53_zone" "route53_zones" {
2 for_each = toset(var.route53_zone_names)
3 name = each.value
4 private_zone = false
5}
6
7module "eks_blueprints_addons" {
8 depends_on = [module.managed_eks]
9 source = "git::https://github.com/aws-ia/terraform-aws-eks-blueprints-addons.git?ref=a9963f4a0e168f73adb033be594ac35868696a91"
10
11 cluster_name = module.managed_eks.cluster_name
12 cluster_endpoint = module.managed_eks.cluster_endpoint
13 cluster_version = module.managed_eks.cluster_version
14 oidc_provider_arn = module.managed_eks.oidc_provider_arn
15
16 enable_aws_load_balancer_controller = var.enable_aws_load_balancer_controller
17 enable_metrics_server = var.enable_metrics_server
18 enable_external_dns = var.enable_external_dns
19 external_dns_route53_zone_arns = var.enable_external_dns ? data.aws_route53_zone.route53_zones[*].arn : []
20}

Cluster Admin IAM Role: An IAM role added as cluster admin to the managed EKS cluster. Its trust policy allows it to be assumed by the ArgoCD Service Account's IAM role (IRSA), and an IAM policy is attached to that ArgoCD role so it is permitted to assume the cluster admin role.

1resource "aws_iam_role" "argocd_admin_role" {
2 name = "${var.name_prefix}ArgoCDAdminRole"
3
4 assume_role_policy = jsonencode({
5 Version = "2012-10-17"
6 Statement = [
7 {
8 Action = "sts:AssumeRole"
9 Effect = "Allow"
10 Principal = {
11 AWS = data.aws_iam_role.argocd_service_account.arn
12 }
13 },
14 ]
15 })
16}
17
18resource "aws_iam_policy" "argocd_admin_assume_role_policy" {
19 name = "${var.name_prefix}ArgoCDAdminAssumeRole"
20 description = "Allow ArgoCD service account to assume cluster admin role"
21 policy = jsonencode({
22 Version = "2012-10-17"
23 Statement = [
24 {
25 Action = [
26 "sts:AssumeRole",
27 ]
28 Effect = "Allow"
29 Resource = aws_iam_role.argocd_admin_role.arn
30 },
31 ]
32 })
33}
34
35resource "aws_iam_role_policy_attachment" "argocd_admin_assume_role_policy_attachment" {
36 role = data.aws_iam_role.argocd_service_account.name
37 policy_arn = aws_iam_policy.argocd_admin_assume_role_policy.arn
38}

Secret with Managed Cluster details: A Helm chart creates a secret in the argocd namespace of the central cluster. The secret consists of:

  • EKS Cluster ARN
  • EKS Cluster name
  • EKS Cluster endpoint
  • EKS Cluster Certificate Authority data
  • Cluster Admin IAM Role ARN

ArgoCD uses these details to connect to the managed cluster and deploy to it. A sketch of the secret format that ArgoCD expects follows the Terraform snippet below.

1resource "helm_release" "argocdmanagedcluster" {
2 provider = helm.argocdcluster
3 depends_on = [module.managed_eks, aws_iam_role_policy_attachment.argocd_admin_assume_role_policy_attachment]
4 name = module.managed_eks.cluster_name
5 chart = "${path.module}/../charts/argocdmanagedcluster"
6 namespace = "argocd"
7 version = "0.1.0"
8 create_namespace = true
9 set {
10 name = "cluster.name"
11 value = module.managed_eks.cluster_name
12 }
13 set {
14 name = "cluster.role_arn"
15 value = aws_iam_role.argocd_admin_role.arn
16 }
17 set {
18 name = "cluster.ca_data"
19 value = module.managed_eks.cluster_certificate_authority_data
20 }
21 set {
22 name = "cluster.arn"
23 value = module.managed_eks.cluster_arn
24 }
25 set {
26 name = "cluster.endpoint"
27 value = module.managed_eks.cluster_endpoint
28 }
29}
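
For reference, ArgoCD's declarative cluster format is a Kubernetes Secret labeled argocd.argoproj.io/secret-type: cluster, whose config field carries an awsAuthConfig block with the cluster name and the IAM role to assume, plus the certificate authority data. The sketch below expresses an equivalent secret directly in Terraform; the names, endpoint, and ARNs are placeholders, not values from the boilerplate.

# Illustrative sketch of an ArgoCD cluster secret (placeholder values)
resource "kubernetes_secret_v1" "argocd_managed_cluster" {
  metadata {
    name      = "example-managed-cluster" # placeholder name
    namespace = "argocd"
    labels = {
      "argocd.argoproj.io/secret-type" = "cluster"
    }
  }
  data = {
    name   = "example-managed-cluster"
    server = "https://EXAMPLE1234567890.gr7.us-east-1.eks.amazonaws.com" # placeholder endpoint
    config = jsonencode({
      awsAuthConfig = {
        clusterName = "example-managed-cluster"
        roleARN     = "arn:aws:iam::111111111111:role/ExampleArgoCDAdminRole" # placeholder role ARN
      }
      tlsClientConfig = {
        caData = "<base64-encoded cluster certificate authority data>"
      }
    })
  }
}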

Major Dependencies: terraform-aws-modules/terraform-aws-eks and aws-ia/terraform-aws-eks-blueprints-addons (both pinned to specific commits in the module sources above).