Running Containers on EKS Fargate in Isolated Private Subnets Behind ALB
EKS on Fargate can run containers within private subnets.
Overview
In this post, the key points are:
- Using a bastion to operate and test the EKS cluster
- Downloading a container image from ECR and S3 through the VPC endpoints
Prerequisites
Install the AWS CLI and Docker on your computer.
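For example, on Linux the AWS CLI v2 can be installed with the same commands used later on the bastion (a sketch; Docker installation depends on your platform, so follow its official documentation):
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install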
VPC
VPC
Create a dedicated VPC.
If you use custom DNS domain names defined in a private hosted zone in Amazon Route 53, or use private DNS with interface VPC endpoints (AWS PrivateLink), you must set both the enableDnsHostnames and enableDnsSupport attributes to true.
$ aws ec2 create-vpc \
--cidr-block 192.168.0.0/16 \
--tag-specifications "ResourceType=vpc,Tags=[{Key=Name,Value=eks-fargate-vpc}]"
$ aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--enable-dns-hostnames
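enableDnsSupport is enabled by default on a new VPC; if it has been turned off, you can enable it explicitly in the same way (an optional sketch mirroring the command above):
$ aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--enable-dns-support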
Subnets
Create two private subnets in which Fargate pods will run and one public subnet hosting a bastion EC2 instance to operate an EKS cluster.
$ aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1a \
--cidr-block 192.168.0.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1a}]"
$ aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1c \
--cidr-block 192.168.16.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1c}]"
$ aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1a \
--cidr-block 192.168.32.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-public-subnet-1a}]"
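Optionally, you can confirm the three subnets with describe-subnets; the --query expression below is just one way to format the output:
$ aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx" \
--query "Subnets[].{Id:SubnetId,Cidr:CidrBlock,Az:AvailabilityZone}" \
--output table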
Internet Gateway
Create an internet gateway for the public subnet.
$ aws ec2 create-internet-gateway \
--tag-specifications "ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-eks-fargate}]"
$ aws ec2 attach-internet-gateway \
--internet-gateway-id igw-xxxxxxxxxxxxxxxxx \
--vpc-id vpc-xxxxxxxxxxxxxxxxx
Create a route table with a default route to the internet gateway, and associate it with the public subnet.
$ aws ec2 create-route-table \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--tag-specifications "ResourceType=route-table,Tags=[{Key=Name,Value=rtb-eks-fargate-public}]"
$ aws ec2 create-route \
--route-table-id rtb-xxxxxxxx \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id igw-xxxxxxxxxxxxxxxxx
$ aws ec2 associate-route-table \
--route-table-id rtb-xxxxxxxx \
--subnet-id subnet-xxxxxxxxxxxxxxxxx
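The bastion instance started later requests a public IP explicitly with --associate-public-ip-address; as an optional alternative, you could configure the public subnet to assign public IPs automatically:
$ aws ec2 modify-subnet-attribute \
--subnet-id subnet-xxxxxxxxxxxxxxxxx \
--map-public-ip-on-launch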
VPC Endpoints
Create the following VPC endpoints for an EKS private cluster.
Replace region-code with your actual region.

Type | Endpoint
---|---
Interface | com.amazonaws.region-code.ecr.api
Interface | com.amazonaws.region-code.ecr.dkr
Interface | com.amazonaws.region-code.ec2
Interface | com.amazonaws.region-code.elasticloadbalancing
Interface | com.amazonaws.region-code.sts
Gateway | com.amazonaws.region-code.s3
The following example uses the ap-northeast-1 region. Associate the S3 gateway endpoint with the route table used by the private subnets (in this example, the VPC's main route table).
$ aws ec2 create-security-group \
--description "VPC endpoints" \
--group-name eks-fargate-vpc-endpoints-sg \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=eks-fargate-vpc-endpoints-sg}]"
$ aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxxxxxxxxxx \
--protocol tcp \
--port 443 \
--cidr 192.168.0.0/16
$ for name in com.amazonaws.ap-northeast-1.ecr.api com.amazonaws.ap-northeast-1.ecr.dkr com.amazonaws.ap-northeast-1.ec2 com.amazonaws.ap-northeast-1.elasticloadbalancing com.amazonaws.ap-northeast-1.sts; do \
aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--vpc-endpoint-type Interface \
--service-name $name \
--security-group-ids sg-xxxxxxxxxxxxxxxxx \
--subnet-ids subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx;
done;
$ aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--service-name com.amazonaws.ap-northeast-1.s3 \
--route-table-ids rtb-xxxxxxxxxxxxxxxxx
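To check that all of the endpoints reach the available state, you can list them (optional):
$ aws ec2 describe-vpc-endpoints \
--filters "Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx" \
--query "VpcEndpoints[].{Service:ServiceName,Type:VpcEndpointType,State:State}" \
--output table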
Bastion EC2
In this post, you will access an EKS private cluster through a bastion EC2 instance.
If you have disabled public access for your cluster’s Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network.
Instance IAM Role
Create an instance IAM role and attach the AmazonSSMManagedInstanceCore managed policy to the role, allowing sessions to the bastion EC2 instance through Session Manager.
$ echo '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}' > policy.json
$ aws iam create-role \
--role-name eks-fargate-bastion-ec2-role \
--assume-role-policy-document file://./policy.json
$ aws iam create-instance-profile \
--instance-profile-name eks-fargate-bastion-ec2-instance-profile
$ aws iam add-role-to-instance-profile \
--instance-profile-name eks-fargate-bastion-ec2-instance-profile \
--role-name eks-fargate-bastion-ec2-role
$ aws iam attach-role-policy \
--role-name eks-fargate-bastion-ec2-role \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
Also attach the following policy to the role so that the EC2 instance can set up and operate EKS, EC2, VPC, and other related services. To follow the best practice of least privilege, please refer to the official documentation.
$ echo '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:ListStacks",
"ec2:*",
"eks:*",
"iam:AttachRolePolicy",
"iam:CreateOpenIDConnectProvider",
"iam:CreateRole",
"iam:DetachRolePolicy",
"iam:DeleteOpenIDConnectProvider",
"iam:GetOpenIDConnectProvider",
"iam:GetRole",
"iam:ListPolicies",
"iam:PassRole",
"iam:PutRolePolicy",
"iam:TagOpenIDConnectProvider"
],
"Resource": "*"
}
]
}' > policy.json
$ aws iam put-role-policy \
--role-name eks-fargate-bastion-ec2-role \
--policy-name eks-cluster \
--policy-document file://./policy.json
Starting EC2 Instance
Start the bastion EC2 instance. You can find appropriate AMI IDs in the official documentation.
$ instanceProfileRole=$( \
aws iam list-instance-profiles-for-role \
--role-name eks-fargate-bastion-ec2-role \
| jq -r '.InstanceProfiles[0].Arn')
$ aws ec2 run-instances \
--image-id ami-0bba69335379e17f8 \
--instance-type t2.micro \
--iam-instance-profile "Arn=$instanceProfileRole" \
--subnet-id subnet-xxxxxxxxxxxxxxxxx \
--associate-public-ip-address \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=eks-fargate-bastion-ec2}]"
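To confirm the instance is running and to note its instance ID for Session Manager, one option is:
$ aws ec2 describe-instances \
--filters "Name=tag:Name,Values=eks-fargate-bastion-ec2" \
--query "Reservations[].Instances[].{Id:InstanceId,State:State.Name}" \
--output table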
Connecting to Instance with Session Manager
Connect to the EC2 instance with Session Manager.
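For example, with the Session Manager plugin for the AWS CLI installed on your computer, you can start a session as follows (replace i-xxxxxxxxxxxxxxxxx with your bastion instance ID):
$ aws ssm start-session --target i-xxxxxxxxxxxxxxxxx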
After connecting, switch to ec2-user with the following command.
sh-4.2$ sudo su - ec2-user
Updating AWS CLI to Latest Version
Update the pre-installed AWS CLI to the latest version.
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
Installing kubectl
Install kubectl on the bastion EC2 instance.
$ curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
$ kubectl version --short --client
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.7-eks-fb459a0
Kustomize Version: v4.5.4
Installing eksctl
Install eksctl on the bastion EC2 instance.
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
0.123.0
EKS
Cluster
Create an EKS cluster with the --fargate option specified.
$ eksctl create cluster \
--name eks-fargate-cluster \
--region ap-northeast-1 \
--version 1.24 \
--vpc-private-subnets subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx \
--without-nodegroup \
--fargate
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 20m
If you encounter an error like the following when you run the kubectl get svc command, update AWS CLI to the latest version.
$ kubectl get svc
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 20m
If you encounter an error like the following when you run the kubectl get svc command, try updating the .kube/config file using the following command.
$ kubectl get svc
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ aws eks update-kubeconfig \
--region ap-northeast-1 \
--name eks-fargate-cluster
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 20m
Adding IAM Users and Roles
To prevent losing access to the cluster, the official documentation recommends adding IAM users and roles to the EKS cluster.
The IAM user or role that created the cluster is the only IAM entity that has access to the cluster. Grant permissions to other IAM users or roles so they can access your cluster.
For example, you can add an IAM user to system:masters by running the following command.
$ eksctl create iamidentitymapping \
--cluster eks-fargate-cluster \
--region=ap-northeast-1 \
--arn arn:aws:iam::000000000000:user/xxxxxx \
--group system:masters \
--no-duplicate-arns
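You can verify the mapping afterwards (optional):
$ eksctl get iamidentitymapping \
--cluster eks-fargate-cluster \
--region ap-northeast-1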
Enabling Private Cluster Endpoint
Enable the private cluster endpoint.
$ aws eks update-cluster-config \
--region ap-northeast-1 \
--name eks-fargate-cluster \
--resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
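Once the update finishes, you can confirm the endpoint configuration (optional):
$ aws eks describe-cluster \
--name eks-fargate-cluster \
--query "cluster.resourcesVpcConfig.{PublicAccess:endpointPublicAccess,PrivateAccess:endpointPrivateAccess}"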
Add an inbound rule for HTTPS (port 443) to allow traffic from your VPC.
You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your bastion host.
$ sgId=$(aws eks describe-cluster --name eks-fargate-cluster | jq -r .cluster.resourcesVpcConfig.clusterSecurityGroupId)
$ aws ec2 authorize-security-group-ingress \
--group-id $sgId \
--protocol tcp \
--port 443 \
--cidr 192.168.0.0/16
Test the connectivity between the bastion EC2 instance and the EKS cluster.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 153m
Fargate Profile
Create a Fargate profile for a sample application.
$ eksctl create fargateprofile \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--name fargate-app-profile \
--namespace fargate-app
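You can list the profiles to confirm it was created (optional):
$ eksctl get fargateprofile \
--cluster eks-fargate-cluster \
--region ap-northeast-1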
AWS Load Balancer Controller
Install the AWS Load Balancer Controller to run your application containers behind an Application Load Balancer (ALB).
IAM OIDC Provider for Cluster
Create an IAM OIDC provider for the cluster, if one does not exist yet, by running the following commands.
$ oidc_id=$(aws eks describe-cluster --name eks-fargate-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
$ aws iam list-open-id-connect-providers | grep $oidc_id
# If no response, run the following command.
$ eksctl utils associate-iam-oidc-provider \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--approve
IAM Service Account
Create an IAM service account for the AWS Load Balancer Controller.
$ curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
$ aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
$ eksctl create iamserviceaccount \
--region ap-northeast-1 \
--cluster=eks-fargate-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name "AmazonEKSLoadBalancerControllerRole" \
--attach-policy-arn=arn:aws:iam::000000000000:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
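To confirm the Kubernetes service account was created (optional):
$ kubectl get serviceaccount aws-load-balancer-controller -n kube-system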
Installing Add-on
Install Helm v3 to set up the AWS Load Balancer Controller add-on.
If you want to deploy the controller on Fargate, use the Helm procedure.
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm version --short | cut -d + -f 1
v3.10.3
Install the AWS Load Balancer Controller add-on. You can find the ECR repository URL in the official documentation.
Add enableShield=false, enableWaf=false, and enableWafv2=false to the command because VPC endpoints for these services are not currently provided. For more information, please refer to the official documentation.
When deploying it, you should use command line flags to set enable-shield, enable-waf, and enable-wafv2 to false. Certificate discovery with hostnames from Ingress objects isn't supported. This is because the controller needs to reach AWS Certificate Manager, which doesn't have a VPC interface endpoint.
$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set region=ap-northeast-1 \
--set vpcId=vpc-xxxxxxxxxxxxxxxxx \
--set image.repository=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon/aws-load-balancer-controller \
--set clusterName=eks-fargate-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set enableShield=false \
--set enableWaf=false \
--set enableWafv2=false
$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 105s
Tagging Subnets
Tag the private subnets with kubernetes.io/role/internal-elb: 1 so that Kubernetes and the AWS Load Balancer Controller can identify available subnets.
Must be tagged in the following format. This is so that Kubernetes and the AWS load balancer controller know that the subnets can be used for internal load balancers.
$ aws ec2 create-tags \
--resources subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx \
--tags Key=kubernetes.io/role/internal-elb,Value=1
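You can confirm the tags with describe-subnets (optional):
$ aws ec2 describe-subnets \
--subnet-ids subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx \
--query "Subnets[].Tags"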
Deploying Sample Application
FastAPI Sample Application
This post uses FastAPI to build an API.
Directory Structure
/
├── src
│ ├── __init__.py
│ ├── main.py
│ └── requirements.txt
└── Dockerfile
requirements.txt
Create requirements.txt with the following content.
anyio==3.6.2
click==8.1.3
fastapi==0.88.0
h11==0.14.0
httptools==0.5.0
idna==3.4
pydantic==1.10.2
python-dotenv==0.21.0
PyYAML==6.0
sniffio==1.3.0
starlette==0.22.0
typing_extensions==4.4.0
uvicorn==0.20.0
uvloop==0.17.0
watchfiles==0.18.1
websockets==10.4
main.py
Create main.py with the following code.
from fastapi import FastAPI

app = FastAPI()


@app.get('/')
def read_root():
    return {'message': 'Hello world!'}
Dockerfile
Create Dockerfile with the following content.
FROM python:3.10-alpine@sha256:d8a484baabf7d2337d34cdef6730413ea1feef4ba251784f9b7a8d7b642041b3
COPY ./src ./
RUN pip install --no-cache-dir -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
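Optionally, you can smoke-test the image locally before pushing it; the api tag, api-test container name, and host port 8080 below are arbitrary choices for this check:
$ docker build -t api .
$ docker run --rm -d -p 8080:80 --name api-test api
$ curl localhost:8080
{"message":"Hello world!"}
$ docker stop api-test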
Pushing Image to ECR
Build and push the image to your ECR repository.
The example below creates a repository named api.
$ aws ecr create-repository --repository-name api
$ uri=$(aws ecr describe-repositories | jq -r '.repositories[] | select(.repositoryName == "api") | .repositoryUri')
$ aws ecr get-login-password --region ap-northeast-1 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com
$ docker build .
$ docker tag xxxxxxxxxxxx $uri\:latest
$ docker push $uri\:latest
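To confirm the push succeeded, you can list the images in the repository (optional):
$ aws ecr describe-images \
--repository-name api \
--query "imageDetails[].{Tags:imageTags,PushedAt:imagePushedAt}"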
Deploying to Fargate
Create fargate-app.yaml.
For more information about AWS Load Balancer Controller v2.4 specifications, please refer to the official documentation.
Replace 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest with your actual image URI.
---
apiVersion: v1
kind: Namespace
metadata:
  name: fargate-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fargate-app-deployment
  namespace: fargate-app
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      containers:
        - name: api
          image: 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
      nodeSelector:
        kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: fargate-app-service
  namespace: fargate-app
  labels:
    app: api
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fargate-app-ingress
  namespace: fargate-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fargate-app-service
                port:
                  number: 80
Apply the manifest file to the cluster using the kubectl apply command.
$ kubectl apply -f fargate-app.yaml
Check all the resources.
$ kubectl get all -n fargate-app
NAME READY STATUS RESTARTS AGE
pod/fargate-app-deployment-6db55f9b7b-4hp8z 1/1 Running 0 55s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/fargate-app-service NodePort 10.100.190.97 <none> 80:31985/TCP 6m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/fargate-app-deployment 1/1 1 1 6m
NAME DESIRED CURRENT READY AGE
replicaset.apps/fargate-app-deployment-6db55f9b7b 1 1 1 6m
Testing API
Run the following command and find the DNS name of the ALB in the Address field.
$ kubectl describe ingress -n fargate-app fargate-app-ingress
Name: fargate-app-ingress
Labels: <none>
Namespace: fargate-app
Address: internal-k8s-fargatea-fargatea-0579eb4ce2-1731550123.ap-northeast-1.elb.amazonaws.com
Ingress Class: alb
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ fargate-app-service:80 (192.168.4.97:80)
Annotations: alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfullyReconciled 4m17s ingress Successfully reconciled
Send a request to the ALB DNS name using curl. You should see the response from the FastAPI application.
$ curl internal-k8s-fargatea-fargatea-xxxxxxxxxx-xxxxxxxxxx.ap-northeast-1.elb.amazonaws.com
{"message":"Hello world!"}
Deleting EKS Cluster
Delete the EKS cluster. If you no longer need the other resources created in this post, delete them as well.
$ kubectl delete -f fargate-app.yaml
$ helm uninstall aws-load-balancer-controller -n kube-system
$ arn=$(aws iam list-policies --scope Local \
| jq -r '.Policies[] | select(.PolicyName == "AWSLoadBalancerControllerIAMPolicy").Arn')
$ aws iam detach-role-policy \
--role-name AmazonEKSLoadBalancerControllerRole \
--policy-arn $arn
$ eksctl delete iamserviceaccount \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--namespace kube-system \
--name aws-load-balancer-controller
$ aws eks delete-fargate-profile \
--cluster-name eks-fargate-cluster \
--fargate-profile-name fargate-app-profile
$ aws eks delete-fargate-profile \
--cluster-name eks-fargate-cluster \
--fargate-profile-name fp-default
$ arn=$(aws iam list-policies --scope AWS \
| jq -r '.Policies[] | select(.PolicyName == "AmazonEKSFargatePodExecutionRolePolicy").Arn')
$ aws iam detach-role-policy \
--role-name eksctl-eks-fargate-cluster-FargatePodExecutionRole-xxxxxxxxxxxxx \
--policy-arn $arn
$ eksctl delete cluster \
--region ap-northeast-1 \
--name eks-fargate-cluster
If you encounter issues during the deletion of the AWS Load Balancer Controller Ingress, try using the following command to remove finalizers.
$ kubectl patch ingress fargate-app-ingress -n fargate-app -p '{"metadata":{"finalizers":[]}}' --type=merge
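If you also want to remove the bastion and networking resources created in this post, a rough cleanup sketch looks like the following; all IDs are placeholders, and dependent resources (the instance and the VPC endpoints) must be deleted before the subnets and the VPC:
$ aws ec2 terminate-instances --instance-ids i-xxxxxxxxxxxxxxxxx
$ aws ec2 delete-vpc-endpoints --vpc-endpoint-ids vpce-xxxxxxxxxxxxxxxxx
$ aws ec2 detach-internet-gateway --internet-gateway-id igw-xxxxxxxxxxxxxxxxx --vpc-id vpc-xxxxxxxxxxxxxxxxx
$ aws ec2 delete-internet-gateway --internet-gateway-id igw-xxxxxxxxxxxxxxxxx
$ aws ec2 delete-subnet --subnet-id subnet-xxxxxxxxxxxxxxxxx
$ aws ec2 delete-route-table --route-table-id rtb-xxxxxxxxxxxxxxxxx
$ aws ec2 delete-vpc --vpc-id vpc-xxxxxxxxxxxxxxxxx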