Refer here for the corresponding Git commit: Provision EKS Cluster

In this article, we'll explore how to deploy the Amazon EKS cluster using Terraform. We'll examine each component and explain how to customize them for your specific requirements.

For this EKS deployment, we'll add or modify the following files:

  • eks.tf: Contains the EKS cluster configuration
  • variables.tf: Contains variable definitions

Adding the Required Variables

Let's start with the new variables in variables.tf:


variable "cluster_name" {
  default     = "opsninja"
}

variable "worker_group_name" {
  default     = "opsninja"
}

variable "cluster_creator_admin_permission" {
  description = "Enable admin permissions to access Kubernetes API - "
  type        = bool
  default     = true
}

variable "cluster_endpoint_public_access" {
  description = "Enable or disable public access to the EKS cluster endpoint"
  type        = bool
  default     = true
}

variable "public_access_cidrs" {
  description = "List of CIDRs allowed for public access to the EKS cluster"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

variable "coredns_addon_version" {
  description = "Version of the CoreDNS addon"
  type        = string
  default     = "v1.11.3-eksbuild.2"
}

variable "kube_proxy_addon_version" {
  description = "Version of the kube-proxy addon"
  type        = string
  default     = "v1.31.2-eksbuild.2"
}

variable "vpc_cni_addon_version" {
  description = "Version of the VPC CNI addon"
  type        = string
  default     = "v1.19.0-eksbuild.1"
}

variable "pod_identity_agent_addon_version" {
  description = "Version of the POD IDENTITY addon"
  type        = string
  default     = "v1.3.2-eksbuild.2"
}

variable "app_namespace" {
  default = "opsninja"
}

EKS Configuration Breakdown

Here's the complete configuration for the EKS cluster.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.29.0"

  cluster_name    = local.cluster_name
  cluster_version = "1.31"

  vpc_id                         = module.vpc.vpc_id
  subnet_ids                     = module.vpc.private_subnets
  control_plane_subnet_ids       = module.vpc.intra_subnets

  cluster_endpoint_private_access       = true
  cluster_endpoint_public_access        = var.cluster_endpoint_public_access
  cluster_endpoint_public_access_cidrs  = var.public_access_cidrs

  enable_cluster_creator_admin_permissions = var.cluster_creator_admin_permission

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
    disk_size = 100
  }

  cluster_addons = {
    coredns = { 
      addon_version = var.coredns_addon_version
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {
      addon_version = var.kube_proxy_addon_version
      resolve_conflicts = "OVERWRITE"
    }
    eks-pod-identity-agent = {
      addon_version = var.pod_identity_agent_addon_version
      resolve_conflicts = "OVERWRITE"
    }
    vpc-cni = {
      addon_version = var.vpc_cni_addon_version
      resolve_conflicts = "OVERWRITE"
      before_compute           = true
      configuration_values = jsonencode({
        env = {
          # Reference docs https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
          ENABLE_PREFIX_DELEGATION = "true"
          WARM_PREFIX_TARGET       = "1"
        }
      })
    }
  }

  enable_irsa = true

  cluster_security_group_additional_rules = {
    egress_nodes_ephemeral_ports_tcp = {
      description                = "To node 1025-65535"
      protocol                   = "tcp"
      from_port                  = 1025
      to_port                    = 65535
      type                       = "egress"
      source_node_security_group = true
    }
  }

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  eks_managed_node_groups = {
    opsninja-wg = {
      name = "${var.worker_group_name}-${var.env}-wg"

      instance_types = ["m6a.xlarge"]

      create_security_group                 = false
      attach_cluster_primary_security_group = false

      min_size     = 1
      max_size     = 6
      desired_size = 1

      iam_role_additional_policies = {
        policy1 = "arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore",
        policy_cw = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
      }

      pre_bootstrap_user_data = <<-EOT
        #!/bin/bash
        LINE_NUMBER=$(grep -n "KUBELET_EXTRA_ARGS=\$2" /etc/eks/bootstrap.sh | cut -f1 -d:)
        REPLACEMENT="\ \ \ \ \ \ KUBELET_EXTRA_ARGS=\$(echo \$2 | sed -s -E 's/--max-pods=[0-9]+/--max-pods=90/g')"
        sed -i '/KUBELET_EXTRA_ARGS=\$2/d' /etc/eks/bootstrap.sh
        sed -i "$${LINE_NUMBER}i $${REPLACEMENT}" /etc/eks/bootstrap.sh
      EOT
    }
  }
  node_security_group_tags = {
    "karpenter.sh/discovery" = local.cluster_name
  }

}

resource "kubernetes_namespace" "app_namespace" {
  metadata {
    name = var.app_namespace
  }
}

Let's examine each part in detail:

Network Integration

vpc_id                         = module.vpc.vpc_id
subnet_ids                     = module.vpc.private_subnets
control_plane_subnet_ids       = module.vpc.intra_subnets

cluster_endpoint_private_access      = true
cluster_endpoint_public_access       = var.cluster_endpoint_public_access
cluster_endpoint_public_access_cidrs = var.public_access_cidrs

Customization Points:
We reference the VPC module we provisioned earlier to obtain most of the networking details.

  • cluster_endpoint_public_access: Set to false for a private-only cluster. In our case, we keep it set to true while we provision the infrastructure.
  • public_access_cidrs: Restrict access to specific IP ranges. In the variables.tf we have set this to 0.0.0.0/0, which allows all IPs to reach the Kubernetes API server and is insecure. If you have a bastion host or another server from which you want to access the EKS cluster, enter its IP here instead, as sketched after this list.
  • Control plane placed in intra subnets for enhanced security
  • Worker nodes placed in private subnets
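
For example, assuming a bastion host with public IP 203.0.113.10 (a placeholder; substitute your own address), you could override the defaults in a terraform.tfvars file:

cluster_endpoint_public_access = true
public_access_cidrs            = ["203.0.113.10/32"] # bastion only, instead of 0.0.0.0/0

# Or, for a control plane reachable only from inside the VPC:
# cluster_endpoint_public_access = false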

Node Group Configuration

eks_managed_node_group_defaults = {
  ami_type = "AL2_x86_64"
  disk_size = 100
}

eks_managed_node_groups = {
  opsninja-wg = {
    name = "${var.worker_group_name}-${var.env}-wg"

    instance_types = ["m6a.xlarge"]

    create_security_group                 = false
    attach_cluster_primary_security_group = false

    min_size     = 1
    max_size     = 6
    desired_size = 1

    iam_role_additional_policies = {
      policy1 = "arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore",
      policy_cw = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
    }
  }
}

This configuration uses the following specs, which you can customize (see the sketch after this list):

  • Using Amazon Linux 2 AMI
  • 100GB root volume for container images and system files
  • m6a.xlarge instances for good price/performance ratio
  • Autoscaling configuration: min=1, max=6
  • SSM and CloudWatch policies attached for monitoring and management
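
As a rough sketch, you could tune the node group, or add a second one, by overriding these values. The spot group below is hypothetical and not part of this deployment:

eks_managed_node_groups = {
  opsninja-wg = {
    name           = "${var.worker_group_name}-${var.env}-wg"
    instance_types = ["m6a.2xlarge"] # larger instances
    min_size       = 2
    max_size       = 10
    desired_size   = 2
  }

  # Hypothetical second group running on Spot capacity
  opsninja-spot = {
    instance_types = ["m6a.xlarge", "m5a.xlarge"]
    capacity_type  = "SPOT"
    min_size       = 0
    max_size       = 3
    desired_size   = 0
  }
}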

Add-ons Configuration

cluster_addons = {
  coredns = { 
    addon_version = var.coredns_addon_version
    resolve_conflicts = "OVERWRITE"
  }
  kube-proxy = {
    addon_version = var.kube_proxy_addon_version
    resolve_conflicts = "OVERWRITE"
  }
  eks-pod-identity-agent = {
    addon_version = var.pod_identity_agent_addon_version
    resolve_conflicts = "OVERWRITE"
  }
  vpc-cni = {
    addon_version = var.vpc_cni_addon_version
    resolve_conflicts = "OVERWRITE"
    before_compute           = true
    configuration_values = jsonencode({
      env = {
        ENABLE_PREFIX_DELEGATION = "true"
        WARM_PREFIX_TARGET      = "1"
      }
    })
  }
}

Important notes:

  • Update addon versions as new releases become available (a way to look them up is shown after this list). In our case we have pinned the addon versions, since Terraform would otherwise show drift every time we apply.
  • The VPC CNI is configured with prefix delegation for better IP address management; see the AWS documentation linked in the vpc-cni code comment above. Without this configuration, there is a limit on the number of IPs available per node. To work around that limit, we have also added a hack in pre_bootstrap_user_data under eks_managed_node_groups. More on that later in this article.
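
If you want to see which addon versions are available for your cluster version before updating the pins, the AWS CLI can list them. For example, for CoreDNS on Kubernetes 1.31:

aws eks describe-addon-versions \
  --addon-name coredns \
  --kubernetes-version 1.31 \
  --query 'addons[].addonVersions[].addonVersion'

Repeat with kube-proxy, vpc-cni and eks-pod-identity-agent to pick versions for the other addons.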

Security Group Rules

cluster_security_group_additional_rules = {
  egress_nodes_ephemeral_ports_tcp = {
    description                = "To node 1025-65535"
    protocol                   = "tcp"
    from_port                  = 1025
    to_port                    = 65535
    type                       = "egress"
    source_node_security_group = true
  }
}

node_security_group_additional_rules = {
  ingress_self_all = {
    description = "Node to node all ports/protocols"
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    type        = "ingress"
    self        = true
  }
  egress_all = {
    description      = "Node all egress"
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    type            = "egress"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

What this configuration does (a sketch for adding your own rules follows the list):

  • Allows necessary communication between control plane and nodes
  • Enables node-to-node communication for pod networking
  • Permits outbound internet access for nodes
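
If you need to open additional paths, for example from a bastion host to the nodes, you can extend these maps. The rule below is a hypothetical sketch and assumes a bastion security group defined elsewhere in your configuration:

node_security_group_additional_rules = {
  # ...existing rules from above...

  ingress_bastion_https = {
    description              = "Bastion to nodes on 443"
    protocol                 = "tcp"
    from_port                = 443
    to_port                  = 443
    type                     = "ingress"
    source_security_group_id = aws_security_group.bastion.id # assumed to exist
  }
}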

Bootstrap Customization

pre_bootstrap_user_data = <<-EOT
  #!/bin/bash
  LINE_NUMBER=$(grep -n "KUBELET_EXTRA_ARGS=\$2" /etc/eks/bootstrap.sh | cut -f1 -d:)
  REPLACEMENT="\ \ \ \ \ \ KUBELET_EXTRA_ARGS=\$(echo \$2 | sed -s -E 's/--max-pods=[0-9]+/--max-pods=90/g')"
  sed -i '/KUBELET_EXTRA_ARGS=\$2/d' /etc/eks/bootstrap.sh
  sed -i "$${LINE_NUMBER}i $${REPLACEMENT}" /etc/eks/bootstrap.sh
EOT

Configuration Notes:

  • Modifies the maximum pods per node to 90 (you can verify this after the nodes join; see the sketch after this list)
  • Note that if you want to use a different AMI, this pre_bootstrap_user_data snippet might not work (although it should work for most Linux-based AMIs)
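
Once the nodes have joined the cluster, you can check that the kubelet picked up the new limit, for example:

kubectl get nodes -o custom-columns='NAME:.metadata.name,MAX_PODS:.status.capacity.pods'

Each node should report 90; if it still shows the instance type's default ENI-based limit, the bootstrap change did not take effect.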

Namespace Creation

resource "kubernetes_namespace" "app_namespace" {
  metadata {
    name = var.app_namespace
  }
}

We also create the namespace where our application will ultimately run.
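
Note that the kubernetes_namespace resource requires the Kubernetes provider to be configured against the new cluster. If you don't already have this elsewhere in your configuration, a minimal sketch using the EKS module's outputs looks like this:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Fetches a short-lived token using your local AWS credentials
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}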

Accessing the cluster

Once you have applied the changes above with Terraform, your cluster will be created.

To access your cluster, you can now configure kubectl:

aws eks update-kubeconfig --name <cluster-name> --region <region>
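
For example, assuming local.cluster_name resolves to the default opsninja and the cluster was created in us-east-1 (both assumptions here), you would run:

aws eks update-kubeconfig --name opsninja --region us-east-1
kubectl get nodes

If everything came up correctly, you should see one Ready node, matching the desired_size we configured for the node group.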

Author of article: Ravindu Fernando