Deploying Containers on EKS Fargate in Private Subnets Behind an ALB

Takahiro Iwasa

This note describes how to run containers on EKS Fargate in private subnets, exposed securely behind an Application Load Balancer (ALB).

Setting Up VPC

Creating VPC

Create a dedicated VPC with the following commands:

Terminal window
aws ec2 create-vpc \
--cidr-block 192.168.0.0/16 \
--tag-specifications "ResourceType=vpc,Tags=[{Key=Name,Value=eks-fargate-vpc}]"
aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--enable-dns-hostnames
Important

Make sure to enable DNS hostnames so that private DNS for the interface VPC endpoints works correctly. For more details, refer to the official documentation:

If you use custom DNS domain names defined in a private hosted zone in Amazon Route 53, or use private DNS with interface VPC endpoints (AWS PrivateLink), you must set both the enableDnsHostnames and enableDnsSupport attributes to true.
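
DNS support (enableDnsSupport) is enabled by default for newly created VPCs, but you can verify it and enable it if needed; a minimal check:

Terminal window
aws ec2 describe-vpc-attribute \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--attribute enableDnsSupport
aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--enable-dns-support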

Adding Subnets

Create private subnets for Fargate pods and a public subnet for the bastion EC2 instance.

Terminal window
aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1a \
--cidr-block 192.168.0.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1a}]"
aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1c \
--cidr-block 192.168.16.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1c}]"
aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--availability-zone ap-northeast-1a \
--cidr-block 192.168.32.0/20 \
--tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-public-subnet-1a}]"

Adding Internet Gateway

To enable internet access for resources in the public subnet, create an Internet Gateway and attach it to your VPC:

Terminal window
aws ec2 create-internet-gateway \
--tag-specifications "ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-eks-fargate}]"
aws ec2 attach-internet-gateway \
--internet-gateway-id igw-xxxxxxxxxxxxxxxxx \
--vpc-id vpc-xxxxxxxxxxxxxxxxx

Next, create a route table, add a default route through the Internet Gateway, and associate the route table with the public subnet:

Terminal window
aws ec2 create-route-table \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--tag-specifications "ResourceType=route-table,Tags=[{Key=Name,Value=rtb-eks-fargate-public}]"
aws ec2 create-route \
--route-table-id rtb-xxxxxxxx \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id igw-xxxxxxxxxxxxxxxxx
aws ec2 associate-route-table \
--route-table-id rtb-xxxxxxxx \
--subnet-id subnet-xxxxxxxxxxxxxxxxx

Adding VPC Endpoints

To enable secure communication for an EKS private cluster, create the necessary VPC endpoints. Refer to the official documentation for detailed information.

Type       Endpoint
Interface  com.amazonaws.region-code.ecr.api
Interface  com.amazonaws.region-code.ecr.dkr
Interface  com.amazonaws.region-code.ec2
Interface  com.amazonaws.region-code.elasticloadbalancing
Interface  com.amazonaws.region-code.sts
Gateway    com.amazonaws.region-code.s3

Create a security group for the VPC endpoints:

Terminal window
aws ec2 create-security-group \
--description "VPC endpoints" \
--group-name eks-fargate-vpc-endpoints-sg \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=eks-fargate-vpc-endpoints-sg}]"
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxxxxxxxxxx \
--protocol tcp \
--port 443 \
--cidr 192.168.0.0/16

Create the Interface VPC Endpoints:

Terminal window
for name in com.amazonaws.<REGION>.ecr.api com.amazonaws.<REGION>.ecr.dkr com.amazonaws.<REGION>.ec2 com.amazonaws.<REGION>.elasticloadbalancing com.amazonaws.<REGION>.sts; do \
aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--vpc-endpoint-type Interface \
--service-name $name \
--security-group-ids sg-xxxxxxxxxxxxxxxxx \
--subnet-ids subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx;
done;

Create the Gateway VPC Endpoint for S3:

Terminal window
aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxxxxxxxxxxxxxxxx \
--service-name com.amazonaws.<REGION>.s3 \
--route-table-ids rtb-xxxxxxxxxxxxxxxxx

By adding these endpoints, your private cluster can securely access AWS services such as ECR, S3, and Elastic Load Balancing.
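
To confirm that the endpoints were created and are available, you can list them for the VPC, for example:

Terminal window
aws ec2 describe-vpc-endpoints \
--filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
--query "VpcEndpoints[].{Service: ServiceName, State: State}"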

Bastion EC2

To access an EKS private cluster, you can utilize a bastion EC2 instance. This bastion host allows secure interaction with your Kubernetes API server endpoint if public access is disabled.

https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

If you have disabled public access for your cluster’s Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network.

Creating an Instance IAM Role

To enable the bastion instance to operate securely, create an IAM role and attach the AmazonSSMManagedInstanceCore managed policy for Session Manager access.

Create an IAM role:

Terminal window
echo '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}' > policy.json
aws iam create-role \
--role-name eks-fargate-bastion-ec2-role \
--assume-role-policy-document file://./policy.json

Create an instance profile:

Terminal window
aws iam create-instance-profile \
--instance-profile-name eks-fargate-bastion-ec2-instance-profile
aws iam add-role-to-instance-profile \
--instance-profile-name eks-fargate-bastion-ec2-instance-profile \
--role-name eks-fargate-bastion-ec2-role

Attach the AmazonSSMManagedInstanceCore policy to allow Session Manager access:

Terminal window
aws iam attach-role-policy \
--role-name eks-fargate-bastion-ec2-role \
--policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

For broader permissions to set up and manage EKS, EC2, and VPC services, attach an additional policy. Refer to the official documentation for best practices on least-privilege permissions.

Terminal window
echo '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:ListStacks",
"ec2:*",
"eks:*",
"iam:AttachRolePolicy",
"iam:CreateOpenIDConnectProvider",
"iam:CreateRole",
"iam:DetachRolePolicy",
"iam:DeleteOpenIDConnectProvider",
"iam:GetOpenIDConnectProvider",
"iam:GetRole",
"iam:ListPolicies",
"iam:PassRole",
"iam:PutRolePolicy",
"iam:TagOpenIDConnectProvider"
],
"Resource": "*"
}
]
}' > policy.json
aws iam put-role-policy \
--role-name eks-fargate-bastion-ec2-role \
--policy-name eks-cluster \
--policy-document file://./policy.json

Starting the Bastion EC2 Instance

Once the IAM role is configured, start the EC2 instance. Ensure that you use a valid AMI ID. Refer to the official documentation for the latest AMI details.
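
If you need to look up a current Amazon Linux 2 AMI ID for your region, one option is the public SSM parameter shown below (a sketch; the parameter path is the standard one for Amazon Linux 2 x86_64):

Terminal window
aws ssm get-parameters \
--names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
--query "Parameters[0].Value" \
--output text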

Terminal window
instanceProfileArn=$( \
aws iam list-instance-profiles-for-role \
--role-name eks-fargate-bastion-ec2-role \
| jq -r '.InstanceProfiles[0].Arn')
aws ec2 run-instances \
--image-id ami-0bba69335379e17f8 \
--instance-type t2.micro \
--iam-instance-profile "Arn=$instanceProfileArn" \
--subnet-id subnet-xxxxxxxxxxxxxxxxx \
--associate-public-ip-address \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=eks-fargate-bastion-ec2}]"

Connecting to the Instance with Session Manager

To securely access the bastion EC2 instance, use AWS Session Manager. This eliminates the need for SSH key pairs and ensures secure, auditable access.
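
If the Session Manager plugin is installed locally, you can open a session from the AWS CLI (connecting from the EC2 console also works); for example, with a placeholder instance ID:

Terminal window
aws ssm start-session --target i-xxxxxxxxxxxxxxxxx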

After connecting, switch to the ec2-user account using the following command:

Terminal window
sh-4.2$ sudo su - ec2-user
Important

Ensure that the instance IAM role has the AmazonSSMManagedInstanceCore policy attached for Session Manager connectivity.
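
To double-check, you can list the policies attached to the role:

Terminal window
aws iam list-attached-role-policies \
--role-name eks-fargate-bastion-ec2-role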

Updating AWS CLI to the Latest Version

To ensure compatibility with the latest AWS services, update the AWS CLI to its latest version:

Terminal window
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update

Verify the installation:

Terminal window
aws --version

Installing kubectl

To manage your EKS cluster, install kubectl on the bastion instance.

Download the kubectl binary for your EKS cluster version:

Terminal window
curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl

Make the binary executable:

Terminal window
chmod +x ./kubectl

Add kubectl to your PATH:

Terminal window
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

Verify the installation:

Terminal window
kubectl version --short --client

Installing eksctl

Install eksctl to simplify the management of your EKS clusters.

Download and extract eksctl:

Terminal window
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

Move the binary to a location in your PATH:

Terminal window
sudo mv /tmp/eksctl /usr/local/bin

Verify the installation:

Terminal window
eksctl version

Your bastion EC2 instance is now ready to manage and operate your EKS cluster with kubectl and eksctl installed.

EKS

Creating EKS Cluster

Create an EKS cluster using eksctl with the --fargate option specified. This cluster will use Fargate to manage pods without requiring worker nodes.

Refer to the official documentation for detailed instructions.

ℹ️ Note

Creating the cluster may take approximately 20 minutes or more.

Terminal window
eksctl create cluster \
--name eks-fargate-cluster \
--region ap-northeast-1 \
--version 1.24 \
--vpc-private-subnets subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx \
--without-nodegroup \
--fargate

After creation, verify the cluster with the following command:

Terminal window
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m

Appendix: Troubleshooting Cluster Access

Issue 1: Credential Error

If you encounter the error below when running kubectl get svc:

Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"

Update the AWS CLI to the latest version:

Terminal window
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update

Retry the command:

Terminal window
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m

Issue 2: Connection Refused

If you see the error below:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Update your Kubernetes configuration file (~/.kube/config) using the following command:

Terminal window
aws eks update-kubeconfig \
--region ap-northeast-1 \
--name eks-fargate-cluster

Retry the command:

Terminal window
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m

Adding IAM Users and Roles

To avoid losing access to the cluster, grant access to additional IAM users or roles. By default, only the IAM entity that created the cluster has administrative access.

Refer to the official documentation for best practices.

The IAM user or role that created the cluster is the only IAM entity that has access to the cluster. Grant permissions to other IAM users or roles so they can access your cluster.

To add an IAM user to the system:masters group, use the following command:

Terminal window
eksctl create iamidentitymapping \
--cluster eks-fargate-cluster \
--region ap-northeast-1 \
--arn arn:aws:iam::000000000000:user/xxxxxx \
--group system:masters \
--no-duplicate-arns

This ensures that additional users or roles have administrative access to your EKS cluster.
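
To confirm the mapping, you can list the identity mappings registered for the cluster:

Terminal window
eksctl get iamidentitymapping \
--cluster eks-fargate-cluster \
--region ap-northeast-1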

Enabling Private Cluster Endpoint

Enable the private cluster endpoint to restrict Kubernetes API access to within the VPC.

ℹ️ Note

Enabling the private cluster endpoint may take about 10 minutes.

Terminal window
aws eks update-cluster-config \
--region ap-northeast-1 \
--name eks-fargate-cluster \
--resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
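
Once the update finishes, you can verify the endpoint configuration, for example:

Terminal window
aws eks describe-cluster \
--name eks-fargate-cluster \
--query "cluster.resourcesVpcConfig.{publicAccess: endpointPublicAccess, privateAccess: endpointPrivateAccess}"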

Ensure that your EKS control plane security group allows ingress traffic on port 443 from your bastion EC2 instance.

https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your bastion host.

Terminal window
sgId=$(aws eks describe-cluster --name eks-fargate-cluster | jq -r .cluster.resourcesVpcConfig.clusterSecurityGroupId)
aws ec2 authorize-security-group-ingress \
--group-id $sgId \
--protocol tcp \
--port 443 \
--cidr 192.168.0.0/16

Test the connectivity between the bastion EC2 instance and the EKS cluster:

Terminal window
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   153m

Fargate Profile

Create a Fargate profile for your application namespace:

Terminal window
eksctl create fargateprofile \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--name fargate-app-profile \
--namespace fargate-app

Installing AWS Load Balancer Controller

Install the AWS Load Balancer Controller to run application containers behind an Application Load Balancer (ALB).

Create an IAM OIDC provider for the cluster if it does not already exist:

Terminal window
oidc_id=$(aws eks describe-cluster --name eks-fargate-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id
# If no response is returned, run the following:
eksctl utils associate-iam-oidc-provider \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--approve

Download the policy file for the AWS Load Balancer Controller:

Terminal window
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json

Create the IAM policy:

Terminal window
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json

Create the IAM service account (replace 111122223333 with your AWS account ID):

Terminal window
eksctl create iamserviceaccount \
--region ap-northeast-1 \
--cluster=eks-fargate-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name "AmazonEKSLoadBalancerControllerRole" \
--attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
--approve

Installing Helm and Load Balancer Controller Add-on

Install Helm v3:

Terminal window
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm version --short | cut -d + -f 1
v3.10.3

Install the Load Balancer Controller add-on:

Terminal window
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set region=ap-northeast-1 \
--set vpcId=vpc-xxxxxxxxxxxxxxxxx \
--set image.repository=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon/aws-load-balancer-controller \
--set clusterName=eks-fargate-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set enableShield=false \
--set enableWaf=false \
--set enableWafv2=false
ℹ️ Note

You need to set enableShield=false, enableWaf=false, and enableWafv2=false because VPC endpoints for these services are not currently provided, so the controller cannot reach them from a private cluster. For more information, refer to the official documentation.

When deploying it, you should use command line flags to set enable-shield, enable-waf, and enable-wafv2 to false. Certificate discovery with hostnames from Ingress objects isn’t supported. This is because the controller needs to reach AWS Certificate Manager, which doesn’t have a VPC interface endpoint.

Verify the deployment:

Terminal window
$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           105s

With the AWS Load Balancer Controller installed, your application containers are ready to run securely behind an Application Load Balancer.

Tagging Subnets

Tag the private subnets to indicate their use for internal load balancers. This is required for Kubernetes and the AWS Load Balancer Controller to identify the subnets correctly.

Terminal window
aws ec2 create-tags \
--resources subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx \
--tags Key=kubernetes.io/role/internal-elb,Value=1

Refer to the official documentation for additional details.

Must be tagged in the following format. This is so that Kubernetes and the AWS load balancer controller know that the subnets can be used for internal load balancers.

Deploying Application

Building Application

This example uses FastAPI to create a simple API for demonstration purposes.

Define the necessary dependencies for the application:

requirements.txt
anyio==3.6.2
click==8.1.3
fastapi==0.88.0
h11==0.14.0
httptools==0.5.0
idna==3.4
pydantic==1.10.2
python-dotenv==0.21.0
PyYAML==6.0
sniffio==1.3.0
starlette==0.22.0
typing_extensions==4.4.0
uvicorn==0.20.0
uvloop==0.17.0
watchfiles==0.18.1
websockets==10.4

Create a basic API endpoint:

main.py
from fastapi import FastAPI

app = FastAPI()


@app.get('/')
def read_root():
    return {'message': 'Hello world!'}

Create a Dockerfile to build the application container:

Dockerfile
FROM python:3.10-alpine@sha256:d8a484baabf7d2337d34cdef6730413ea1feef4ba251784f9b7a8d7b642041b3
COPY ./src ./
RUN pip install --no-cache-dir -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
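
Before pushing to ECR, you can optionally build and test the image locally (a quick sketch; the local tag and host port are arbitrary):

Terminal window
docker build -t api:local .
docker run --rm -p 8080:80 api:local
# In another terminal:
curl http://localhost:8080/
# Expected: {"message":"Hello world!"}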

Pushing the Image to ECR

Build and push the application image to ECR:

Create an ECR repository:

Terminal window
aws ecr create-repository --repository-name api

Retrieve the repository URI:

Terminal window
uri=$(aws ecr describe-repositories | jq -r '.repositories[] | select(.repositoryName == "api") | .repositoryUri')

Authenticate Docker to ECR:

Terminal window
aws ecr get-login-password --region ap-northeast-1 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com

Build, tag, and push the image:

Terminal window
docker build .
docker tag xxxxxxxxxxxx $uri:latest
docker push $uri:latest

Deploying to Fargate

Create a Kubernetes manifest file fargate-app.yaml.

Replace 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest with the actual image URI.
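
For example, if $uri still holds the repository URI retrieved earlier, you can substitute the placeholder in place with sed:

Terminal window
sed -i "s|000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest|$uri:latest|" fargate-app.yaml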

For more information about the AWS Load Balancer Controller v2.4 specification, refer to the official documentation.

fargate-app.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: fargate-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fargate-app-deployment
  namespace: fargate-app
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      containers:
        - name: api
          image: 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
      nodeSelector:
        kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: fargate-app-service
  namespace: fargate-app
  labels:
    app: api
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fargate-app-ingress
  namespace: fargate-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fargate-app-service
                port:
                  number: 80

Apply the manifest file:

Terminal window
kubectl apply -f fargate-app.yaml

Verify the deployed resources:

Terminal window
$ kubectl get all -n fargate-app
NAME                                          READY   STATUS    RESTARTS   AGE
pod/fargate-app-deployment-6db55f9b7b-4hp8z   1/1     Running   0          55s

NAME                          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/fargate-app-service   NodePort   10.100.190.97   <none>        80:31985/TCP   6m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/fargate-app-deployment   1/1     1            1           6m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/fargate-app-deployment-6db55f9b7b   1         1         1       6m
ℹ️ Note

Provisioning ALB may take about ten minutes or longer.
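
You can watch the Ingress until an ALB address is assigned:

Terminal window
kubectl get ingress -n fargate-app fargate-app-ingress -w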

Testing the API

Retrieve the DNS name of the ALB:

Terminal window
kubectl describe ingress -n fargate-app fargate-app-ingress

Example output:

Name:             fargate-app-ingress
Labels:           <none>
Namespace:        fargate-app
Address:          internal-k8s-fargatea-fargatea-0579eb4ce2-1731550123.ap-northeast-1.elb.amazonaws.com
Ingress Class:    alb
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /     fargate-app-service:80 (192.168.4.97:80)
Annotations:  alb.ingress.kubernetes.io/scheme: internal
              alb.ingress.kubernetes.io/target-type: ip
Events:
  Type    Reason                  Age    From     Message
  ----    ------                  ----   ----     -------
  Normal  SuccessfullyReconciled  4m17s  ingress  Successfully reconciled

Test the API endpoint:

Terminal window
curl internal-k8s-fargatea-fargatea-xxxxxxxxxx-xxxxxxxxxx.ap-northeast-1.elb.amazonaws.com

Expected output:

{"message":"Hello world!"}

Deleting EKS Cluster

If you no longer require the EKS cluster or its associated resources, you can delete them using the steps outlined below.

Remove the deployed application and uninstall the AWS Load Balancer Controller:

Terminal window
kubectl delete -f fargate-app.yaml
helm uninstall aws-load-balancer-controller -n kube-system

Retrieve the ARN of the AWSLoadBalancerControllerIAMPolicy and detach it:

Terminal window
arn=$(aws iam list-policies --scope Local \
| jq -r '.Policies[] | select(.PolicyName == "AWSLoadBalancerControllerIAMPolicy").Arn')
aws iam detach-role-policy \
--role-name AmazonEKSLoadBalancerControllerRole \
--policy-arn $arn

Delete the service account associated with the AWS Load Balancer Controller:

Terminal window
eksctl delete iamserviceaccount \
--region ap-northeast-1 \
--cluster eks-fargate-cluster \
--namespace kube-system \
--name aws-load-balancer-controller

Remove Fargate profiles created during the setup:

Terminal window
aws eks delete-fargate-profile \
--cluster-name eks-fargate-cluster \
--fargate-profile-name fargate-app-profile
aws eks delete-fargate-profile \
--cluster-name eks-fargate-cluster \
--fargate-profile-name fp-default

Retrieve and detach the AmazonEKSFargatePodExecutionRolePolicy:

Terminal window
arn=$(aws iam list-policies --scope AWS \
| jq -r '.Policies[] | select(.PolicyName == "AmazonEKSFargatePodExecutionRolePolicy").Arn')
aws iam detach-role-policy \
--role-name eksctl-eks-fargate-cluster-FargatePodExecutionRole-xxxxxxxxxxxxx \
--policy-arn $arn

Use eksctl to delete the cluster:

Terminal window
eksctl delete cluster \
--region ap-northeast-1 \
--name eks-fargate-cluster

Appendix: Troubleshooting Deletion Issues

If the Ingress managed by the AWS Load Balancer Controller gets stuck during deletion, you may need to remove its finalizers manually:

Terminal window
kubectl patch ingress fargate-app-ingress -n fargate-app -p '{"metadata":{"finalizers":[]}}' --type=merge

This command ensures that Kubernetes can finalize the ingress resource for deletion.
