This blog post picks up from the previous article which provisions an EKS cluster using Terraform and GitHub Actions.
Here, we'll look at securing our cluster's resources using pod security groups and network policies.
First, we need to configure our bastion host so that it can communicate with the cluster. We'll use Session Manager to connect to the bastion host in order to follow along with this blog post.
Configure AWS credentials
See the AWS documentation on configuring your AWS credentials. Make sure to use the same credentials as those used to create the EKS cluster.
Use AWS CLI to save kubeconfig file
aws eks update-kubeconfig --name <cluster_name>
Be sure to replace <cluster_name> with the name of your EKS cluster. Mine is eks-demo.
Check the kubeconfig file
cat ~/.kube/config
Download and apply EKS aws-auth
To grant our IAM principal the ability to interact with our EKS cluster, first download the aws-auth ConfigMap.
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
We should then edit the downloaded aws-auth-cm.yaml file (using Vim or Nano) and replace <ARN of instance role (not instance profile)> with the ARN of our worker node IAM role (not its instance profile's ARN), then save the file.
We can then apply the configuration with the following line:
kubectl apply -f aws-auth-cm.yaml
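Once the ConfigMap is applied, the worker nodes should be able to register with the cluster. We can verify this with:
kubectl get nodes
The nodes should show a Ready status after a short while.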
Configure Pod Security Group
Below is a diagram of the infrastructure we want to set up:
In the diagram, we have an RDS database whose security group only allows access from the green pod (through the pod's security group). No other pod besides the green pod will be able to communicate with the RDS database.
These are the steps we'll follow to configure and test our pod security group:
- Create an Amazon RDS database protected by a security group called db_sg.
- Create a security group called pod_sg that will be allowed to connect to the RDS instance.
- Deploy a SecurityGroupPolicy that will automatically attach the pod_sg security group to a pod with the correct metadata.
- Deploy two pods (green and blue) using the same image and verify that only one of them (green) can connect to the Amazon RDS database.
Create DB Security Group (db_sg)
export VPC_ID=$(aws eks describe-cluster \
--name eks-demo \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
# create DB security group
aws ec2 create-security-group \
--description 'DB SG' \
--group-name 'db_sg' \
--vpc-id ${VPC_ID}
# save the security group ID for future use
export DB_SG=$(aws ec2 describe-security-groups \
--filters Name=group-name,Values=db_sg Name=vpc-id,Values=${VPC_ID} \
--query "SecurityGroups[0].GroupId" --output text)
Create Pod Security Group (pod_sg)
# create the Pod security group
aws ec2 create-security-group \
--description 'POD SG' \
--group-name 'pod_sg' \
--vpc-id ${VPC_ID}
# save the security group ID for future use
export POD_SG=$(aws ec2 describe-security-groups \
--filters Name=group-name,Values=pod_sg Name=vpc-id,Values=${VPC_ID} \
--query "SecurityGroups[0].GroupId" --output text)
echo "Pod security group ID: ${POD_SG}"
Add Ingress Rules to db_sg
One rule allows the bastion host to populate the database; the other allows pod_sg to connect to it.
# Get IMDSv2 Token
export TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`
# Instance IP
export INSTANCE_IP=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/local-ipv4)
# allow instance to connect to RDS
aws ec2 authorize-security-group-ingress \
--group-id ${DB_SG} \
--protocol tcp \
--port 5432 \
--cidr ${INSTANCE_IP}/32
# Allow pod_sg to connect to the RDS
aws ec2 authorize-security-group-ingress \
--group-id ${DB_SG} \
--protocol tcp \
--port 5432 \
--source-group ${POD_SG}
Configure the Node Group's Security Group to Allow Pods to Communicate with Their Node for DNS Resolution
export NODE_GROUP_SG=$(aws ec2 describe-security-groups \
--filters Name=tag:Name,Values=eks-cluster-sg-eks-demo-* Name=vpc-id,Values=${VPC_ID} \
--query "SecurityGroups[0].GroupId" \
--output text)
echo "Node Group security group ID: ${NODE_GROUP_SG}"
# allow pod_sg to connect to NODE_GROUP_SG using TCP 53
aws ec2 authorize-security-group-ingress \
--group-id ${NODE_GROUP_SG} \
--protocol tcp \
--port 53 \
--source-group ${POD_SG}
# allow pod_sg to connect to NODE_GROUP_SG using UDP 53
aws ec2 authorize-security-group-ingress \
--group-id ${NODE_GROUP_SG} \
--protocol udp \
--port 53 \
--source-group ${POD_SG}
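Optionally, we can list the rules on the node group's security group to confirm that both entries were added:
aws ec2 describe-security-group-rules \
--filters Name=group-id,Values=${NODE_GROUP_SG}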
Create RDS DB
This post assumes that you have some knowledge of RDS databases and won't focus on this step.
You should create a DB subnet group consisting of the 2 data subnets created in the previous article, and use this subnet group for the RDS database you're provisioning.
I have named my database eks_demo (this is the DB name, not the DB identifier), and this name is referenced in some of the steps below. If you give your database a different name, be sure to update it in those steps.
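If you prefer creating the instance from the command line, here is a minimal sketch using the AWS CLI; the instance identifier, instance class, storage size and master credentials below are assumptions, and <db_subnet_group_name> is the DB subnet group mentioned above:
aws rds create-db-instance \
--db-instance-identifier eks-demo-db \
--db-name eks_demo \
--engine postgres \
--db-instance-class db.t3.micro \
--allocated-storage 20 \
--master-username postgres \
--master-user-password <RDS_PASSWORD> \
--vpc-security-group-ids ${DB_SG} \
--db-subnet-group-name <db_subnet_group_name> \
--no-publicly-accessible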
Populate DB with sample data
sudo dnf update
sudo dnf install postgresql15.x86_64 postgresql15-server -y
sudo postgresql-setup --initdb
sudo systemctl start postgresql
sudo systemctl enable postgresql
# Use Vim to edit the postgresql.conf file so PostgreSQL listens on all addresses
sudo vi /var/lib/pgsql/data/postgresql.conf
# Replace this line
listen_addresses = 'localhost'
# with the following line
listen_addresses = '*'
# Backup your postgres config file
sudo cp /var/lib/pgsql/data/pg_hba.conf /var/lib/pgsql/data/pg_hba.conf.bck
# Allow connections from all addresses with password authentication
# First edit the pg_hba.conf file
sudo vi /var/lib/pgsql/data/pg_hba.conf
# Then add the following line to the file
host all all 0.0.0.0/0 md5
# Restart the postgres service
sudo systemctl restart postgresql
cat << EOF > sg-per-pod-pgsql.sql
CREATE TABLE welcome (column1 TEXT);
insert into welcome values ('--------------------------');
insert into welcome values (' Welcome to the EKS lab ');
insert into welcome values ('--------------------------');
EOF
psql postgresql://<RDS_USER>:<RDS_PASSWORD>@<RDS_ENDPOINT>:5432/<RDS_DATABASE_NAME>?ssl=true -f sg-per-pod-pgsql.sql
Be sure to replace <RDS_USER>, <RDS_PASSWORD>, <RDS_ENDPOINT> and <RDS_DATABASE_NAME> with the right values for your RDS database.
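To verify that the table was populated, we can run a quick query using the same placeholders:
psql postgresql://<RDS_USER>:<RDS_PASSWORD>@<RDS_ENDPOINT>:5432/<RDS_DATABASE_NAME>?ssl=true -c "SELECT * FROM welcome;"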
Configure CNI to Manage Network Interfaces for Pods
kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true
# Wait for the rolling update of the daemonset
kubectl -n kube-system rollout status ds aws-node
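To confirm the change was applied, we can check the daemonset's environment; once the VPC resource controller has attached trunk interfaces to the worker nodes (which requires the IAM policy mentioned below), the nodes should also carry the vpc.amazonaws.com/has-trunk-attached label:
kubectl -n kube-system describe ds aws-node | grep ENABLE_POD_ENI
kubectl get nodes -L vpc.amazonaws.com/has-trunk-attached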
Note that this requires the AmazonEKSVPCResourceController AWS-managed policy to be attached to the cluster's IAM role, which allows the cluster to manage ENIs and IP addresses for the worker nodes.
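If the policy isn't attached yet, it can be attached with a command along these lines (replace <cluster_role_name> with the name of your cluster's IAM role):
aws iam attach-role-policy \
--role-name <cluster_role_name> \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController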
Create SecurityGroupPolicy Custom Resource
A new Custom Resource Definition (CRD) was also added automatically at cluster creation. Cluster administrators can specify which security groups to assign to pods through the SecurityGroupPolicy CRD. Within a namespace, you can select pods based on pod labels, or based on the labels of the service account associated with a pod. For any matching pods, you also define the security group IDs to be applied.
Verify the CRD is present with this command:
kubectl get crd securitygrouppolicies.vpcresources.k8s.aws
The webhook watches SecurityGroupPolicy custom resources for any changes, and automatically injects matching pods with the extended resource request required for the pod to be scheduled onto a node with available branch network interface capacity. Once the pod is scheduled, the resource controller will create and attach a branch interface to the trunk interface. Upon successful attachment, the controller adds an annotation to the pod object with the branch interface details.
Next, create the policy configuration file:
cat << EOF > sg-per-pod-policy.yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: allow-rds-access
spec:
  podSelector:
    matchLabels:
      app: green-pod
  securityGroups:
    groupIds:
      - ${POD_SG}
EOF
Finally, deploy the policy:
kubectl apply -f sg-per-pod-policy.yaml
kubectl describe securitygrouppolicy
Create Secret for DB Access
kubectl create secret generic rds --from-literal="password=<RDS_PASSWORD>" --from-literal="host=<RDS_ENDPOINT>"
kubectl describe secret rds
Make sure you replace <RDS_PASSWORD> and <RDS_ENDPOINT> with the correct values for your RDS database.
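If you want to double-check what was stored, the values can be decoded from the secret:
kubectl get secret rds -o jsonpath='{.data.host}' | base64 --decode
kubectl get secret rds -o jsonpath='{.data.password}' | base64 --decode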
Create Docker Image to Test RDS Connection
In order to test our connection to the database, we need to create a Docker image which we'll use to create our pods.
First, we create a Python script that will handle this connection test:
postgres_test.py
import os
import boto3
import psycopg2

HOST = os.getenv('HOST')
PORT = "5432"
USER = os.getenv('USER')
REGION = "us-east-1"
DB_NAME = os.getenv('DB_NAME')
PASSWORD = os.getenv('PASSWORD')

session = boto3.Session()
client = boto3.client('rds', region_name=REGION)

conn = None
try:
    conn = psycopg2.connect(host=HOST, port=PORT, database=DB_NAME, user=USER, password=PASSWORD, connect_timeout=3)
    cur = conn.cursor()
    cur.execute("""SELECT version()""")
    query_results = cur.fetchone()
    print(query_results)
    cur.close()
except Exception as e:
    print("Database connection failed due to {}".format(e))
finally:
    if conn is not None:
        conn.close()
This code connects to our RDS database and prints the version if successful, otherwise it prints an error message.
Then, we create a Dockerfile which we'll use to build a Docker image:
Dockerfile
FROM python:3.8.5-slim-buster
ADD postgres_test.py /
RUN pip install psycopg2-binary boto3
CMD [ "python", "-u", "./postgres_test.py" ]
Then we build and push our Docker image to an ECR repo. Make sure you replace <region> and <account_id> with appropriate values:
docker build -t postgres-test .
aws ecr create-repository --repository-name postgres-test-demo
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region>.amazonaws.com
docker tag postgres-test:latest <account_id>.dkr.ecr.<region>.amazonaws.com/postgres-test-demo:latest
docker push <account_id>.dkr.ecr.<region>.amazonaws.com/postgres-test-demo:latest
We can then proceed to create our pod configuration files. As with the previous step, replace <account_id> and <region> in the image field with your own values.
green-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: green-pod
  labels:
    app: green-pod
spec:
  containers:
    - name: green-pod
      image: <account_id>.dkr.ecr.<region>.amazonaws.com/postgres-test-demo:latest
      env:
        - name: HOST
          valueFrom:
            secretKeyRef:
              name: rds
              key: host
        - name: DB_NAME
          value: eks_demo
        - name: USER
          value: postgres
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds
              key: password
blue-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
  labels:
    app: blue-pod
spec:
  containers:
    - name: blue-pod
      image: <account_id>.dkr.ecr.<region>.amazonaws.com/postgres-test-demo:latest
      env:
        - name: HOST
          valueFrom:
            secretKeyRef:
              name: rds
              key: host
        - name: DB_NAME
          value: eks_demo
        - name: USER
          value: postgres
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds
              key: password
We can then apply our configurations and check if the connections succeeded:
kubectl apply -f green-pod.yaml -f blue-pod.yaml
You can then check the status of your pods using:
kubectl get pod
You should see that the status of each pod is either Completed or CrashLoopBackOff.
We can now check our pods' logs and confirm that the green pod logs the version of our RDS database, while the blue pod logs a timeout error.
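The commands below fetch each pod's logs:
kubectl logs green-pod
kubectl logs blue-pod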
To confirm that the green pod is actually using the pod security group we created, we can first describe the pod:
kubectl describe pod green-pod
You should see an annotation (vpc.amazonaws.com/pod-eni) containing an ENI ID.
We can go to the AWS EC2 console, open Network Interfaces under the Network & Security menu on the left, then look for an interface whose ID matches the one we saw in the pod annotation. If you select that interface, you should see that it is of type branch and has the pod_sg security group attached to it.
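If you prefer the CLI, you can also describe the interface directly; replace <eni_id> with the ID from the pod annotation:
aws ec2 describe-network-interfaces \
--network-interface-ids <eni_id> \
--query "NetworkInterfaces[0].{Type:InterfaceType,SecurityGroups:Groups}"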
Configure Network Policies
In order to be able to use network policies in our cluster, we must first configure the VPC CNI addon to enable network policies.
We can use the AWS CLI to get the version of our CNI addon. Replace <cluster_name> with the name of your cluster:
aws eks describe-addon --cluster-name <cluster_name> --addon-name vpc-cni --query addon.addonVersion --output text
We then update the CNI addon's configuration to enable network policies. Replace <cluster_name> and <addon_version> with the appropriate values:
aws eks update-addon --cluster-name <cluster_name> --addon-name vpc-cni --addon-version <addon_version> --resolve-conflicts PRESERVE --configuration-values '{"enableNetworkPolicy": "true"}'
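The update can take a minute or two; you can poll the addon's status until it reports ACTIVE:
aws eks describe-addon --cluster-name <cluster_name> --addon-name vpc-cni --query addon.status --output text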
With this done, we can now define network policies to limit access to our pods. Below is a diagram of what we're trying to accomplish:
Take this network policy, for example (network-policy.yaml):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-netpol
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: web2
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: web1
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              run: web1
      ports:
        - protocol: TCP
          port: 80
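To try the policy out, here is a minimal sketch assuming you have a server pod labelled run=web2 (created, for example, with kubectl run web2 --image=nginx) and a client pod labelled run=web1 (for example kubectl run web1 --image=curlimages/curl --command -- sleep 3600):
# apply the policy
kubectl apply -f network-policy.yaml
# look up web2's pod IP
WEB2_IP=$(kubectl get pod web2 -o jsonpath='{.status.podIP}')
# from web1 (allowed by the policy), this request should succeed
kubectl exec web1 -- curl -s --max-time 5 http://${WEB2_IP}
# from any pod other than web1, the same request should time out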