Deploying a WordPress Site Using AWS EKS, Fargate, EFS and RDS

Frank Ye
Jan 24, 2021 · 14 min read

I recently migrated a Dockerized WordPress site to AWS. The process was not as easy as I had expected. Although the AWS documentation provides a lot of detailed information and great examples for each individual service AWS offers, it is not written in a way that facilitates deploying a full solution. Moreover, some documentation and examples only work for EC2 nodes but fail on Fargate, and there is no mention of this in the documents. I had to overcome a significant learning curve to finally connect all the required pieces.

Every time I solve a challenge, I write a blog to document it. Here we go…

The original configuration of the site used the official WordPress Docker image and mapped a volume from the host into the container for persistent storage. The container connected to a local database and served files from the persistent volume. This design works, but it lacks flexibility for scaling. Now that the site has started to gain traffic, the company wants to make sure it can be scaled up to handle higher workloads when needed. The company decided to migrate the site to AWS and take advantage of its flexible services.

The AWS deployment will use Elastic Kubernetes Service (EKS) to host Fargate (serverless) pods, which use an Elastic File System (EFS) volume for persistent website files and a Relational Database Service (RDS) instance for the database.

Before the migration, I obtained a full backup of the site including its database and all files under the web folder of the persistent volume.

Create Virtual Private Cloud (VPC)

The first step is to create a VPC. I will name it “Example-VPC” and assign it a CIDR block of “172.32.0.0/16”. We also need to enable DNS and hostname support for this VPC, otherwise we will run into issues in the EKS cluster later.

aws ec2 create-vpc \
--region <YOUR_REGION> \
--cidr-block 172.32.0.0/16 \
--tag-specifications "ResourceType=vpc,Tags=[{Key=Name,Value=Example-VPC}]"

aws ec2 modify-vpc-attribute \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--enable-dns-support

aws ec2 modify-vpc-attribute \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--enable-dns-hostnames
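
If you want to double-check that both attributes took effect, an optional query like the following (reusing the same placeholders) should do it:

aws ec2 describe-vpc-attribute \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--attribute enableDnsHostnames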

Then we need to create subnets. We need both public subnets (for deploying load balancers later) and private subnets (as Fargate pods can only be placed in private subnets). So, I created one public subnet and one private subnet in each availability zone, assigning a sub-CIDR range to each one. Also, for the AWS Load Balancer Controller to know where to put the load balancers, we need to tag the public subnets with kubernetes.io/role/elb.

# Create two public subnets
aws ec2 create-subnet \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--availability-zone ca-central-1a \
--cidr-block 172.32.0.0/20 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=public-1a},{Key=kubernetes.io/role/elb,Value=1}]'

aws ec2 create-subnet \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--availability-zone ca-central-1b \
--cidr-block 172.32.16.0/20 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=public-1b},{Key=kubernetes.io/role/elb,Value=1}]'

# Create two private subnets
aws ec2 create-subnet \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--availability-zone ca-central-1a \
--cidr-block 172.32.32.0/20 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=private-1a}]'

aws ec2 create-subnet \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--availability-zone ca-central-1b \
--cidr-block 172.32.64.0/20 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=private-1b}]'
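
As an optional sanity check, you can list the subnets that now exist in the VPC; the query expression below is just one way to trim the output:

aws ec2 describe-subnets \
--region <YOUR_REGION> \
--filters 'Name=vpc-id,Values=vpc-<YOUR_VPC_ID>' \
--query 'Subnets[].{Id:SubnetId,Cidr:CidrBlock,Az:AvailabilityZone}'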

To allow Internet traffic into the VPC, we need to add an Internet Gateway and a route table with the proper routes.

# Create Internet Gateway
aws ec2 create-internet-gateway \
--region <YOUR_REGION> \
--tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=Example-IGW}]'

# Attach Internet Gateway to VPC
aws ec2 attach-internet-gateway \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--internet-gateway-id igw-<YOUR_IGW_ID>

# Query the default route table of Example-VPC
aws ec2 describe-route-tables \
--region <YOUR_REGION> \
--filter 'Name=vpc-id,Values=vpc-<YOUR_VPC_ID>'

# Create Internet route through Internet Gateway
aws ec2 create-route \
--region <YOUR_REGION> \
--route-table-id rtb-<YOUR_DEFAULT_ROUTE_TABLE_ID> \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id igw-<YOUR_IGW_ID>

aws ec2 create-tags \
--region <YOUR_REGION> \
--resources 'rtb-<YOUR_DEFAULT_ROUTE_TABLE_ID>' \
--tags 'Key=Name,Value=Example-VPC-Default-RT'

Here comes the first roadblock I had to overcome. My first attempts to deploy an EKS cluster with Fargate failed miserably because I didn’t create a NAT Gateway for the private subnets. Without a NAT Gateway, the Fargate pods deployed to the private subnets are unable to pull Docker images from their repositories over the Internet and get stuck in the “Pending” state. I didn’t see this mentioned anywhere in the AWS EKS documentation.

# Allocate a public IP for the NAT Gateway
aws ec2 allocate-address \
--region <YOUR_REGION>

# Tag the allocated Elastic IP
aws ec2 create-tags \
--region <YOUR_REGION> \
--resources eipalloc-<YOUR_ELASTIC_IP_ALLOCATION_ID> \
--tags 'Key=Name,Value=Example-VPC-NGW-IP'

# Create the NAT Gateway in a public subnet
aws ec2 create-nat-gateway \
--region <YOUR_REGION> \
--subnet-id=subnet-<YOUR_PUBLIC_1A_SUBNET_ID> \
--allocation-id eipalloc-<YOUR_ELASTIC_IP_ALLOCATION_ID> \
--tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=Example-VPC-NGW}]'

# Create a new Route Table for private subnets
aws ec2 create-route-table \
--region <YOUR_REGION> \
--vpc-id vpc-<YOUR_VPC_ID> \
--tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=Example-NAT-RT}]'

# Create Internet route for private subnets through the NAT Gateway
aws ec2 create-route \
--region <YOUR_REGION> \
--route-table-id rtb-<YOUR_PRIVATE_ROUTE_TABLE_ID> \
--destination-cidr-block 0.0.0.0/0 \
--nat-gateway-id nat-<YOUR_NGW_ID>

# Associate private subnets with the Route Table
aws ec2 associate-route-table \
--region <YOUR_REGION> \
--route-table-id rtb-<YOUR_PRIVATE_ROUTE_TABLE_ID> \
--subnet-id subnet-<YOUR_PRIVATE_1A_SUBNET_ID>

aws ec2 associate-route-table \
--region <YOUR_REGION> \
--route-table-id rtb-<YOUR_PRIVATE_ROUTE_TABLE_ID> \
--subnet-id subnet-<YOUR_PRIVATE_1B_SUBNET_ID>
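
To confirm the private subnets will route outbound traffic through the NAT Gateway, you can optionally inspect the new route table:

aws ec2 describe-route-tables \
--region <YOUR_REGION> \
--route-table-ids rtb-<YOUR_PRIVATE_ROUTE_TABLE_ID>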

The last step in this section is to create the Security Groups we will need for this deployment. First, we create a group that will be assigned to each EKS node and allows full visibility among the nodes. We also add a rule allowing our local machine to connect to the nodes via SSH, as later we will use a temporary EC2 instance to migrate our backup files into EFS using SSH and rsync.

aws ec2 create-security-group \
--region <YOUR_REGION> \
--group-name 'Example-EKS-Nodes' \
--description 'Security group assigned to Example EKS nodes' \
--vpc-id vpc-<YOUR_VPC_ID> \
--tag-specifications 'ResourceType=security-group,Tags=[{Key=Name,Value=Example-EKS-Nodes}]'

aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_EKS_NODES_SG_ID> \
--protocol all \
--source-group sg-<YOUR_EKS_NODES_SG_ID>

aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_EKS_NODES_SG_ID> \
--protocol tcp \
--port 22 \
--cidr '<YOUR_LOCAL_PC_PUBLIC_IP>/32'

We then create a security group to control access to our EFS volume. This group will allow NFS connections from any EKS node.

aws ec2 create-security-group \
--region <YOUR_REGION> \
--group-name 'Example-EFS' \
--description 'Security group controlling EFS access' \
--vpc-id vpc-<YOUR_VPC_ID> \
--tag-specifications 'ResourceType=security-group,Tags=[{Key=Name,Value=Example-EFS}]'

aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_EFS_SG_ID> \
--protocol tcp \
--port 2049 \
--source-group sg-<YOUR_EKS_NODES_SG_ID>

We will need another security group to control access to the RDS instance. Besides allowing all EKS nodes to connect, we will also allow our local machine so we can migrate our backup into the database.

aws ec2 create-security-group \
--region <YOUR_REGION> \
--group-name 'Example-RDS' \
--description 'Security group controlling RDS access' \
--vpc-id vpc-<YOUR_VPC_ID> \
--tag-specifications 'ResourceType=security-group,Tags=[{Key=Name,Value=Example-RDS}]'

aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_RDS_SG_ID> \
--protocol tcp \
--port 3306 \
--source-group sg-<YOUR_EKS_NODES_SG_ID>

aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_RDS_SG_ID> \
--protocol tcp \
--port 3306 \
--cidr '<YOUR_LOCAL_PC_PUBLIC_IP>/32'

And finally, we need a security group to control traffic to our Load Balancer. For testing, I am only allowing traffic from my local machine. If you don’t want to impose this type of restriction, you can allow ‘0.0.0.0/0’ instead.

aws ec2 create-security-group \
--region <YOUR_REGION> \
--group-name 'Example-LB' \
--description 'Security group controlling Load Balancer access' \
--vpc-id vpc-<YOUR_VPC_ID> \
--tag-specifications 'ResourceType=security-group,Tags=[{Key=Name,Value=Example-LB}]'

aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_LB_SG_ID> \
--protocol tcp \
--port 443 \
--cidr '<YOUR_LOCAL_PC_PUBLIC_IP>/32'

Configure EFS Volume

Now it’s time to create an EFS volume and move our web folder backup into it. Our EKS nodes will later mount this volume to access the persistent web folders and files. We will name this volume ‘www.example.com’ so we know the files belong to our website.

aws efs create-file-system \
--region <YOUR_REGION> \
--performance-mode generalPurpose \
--throughput-mode bursting \
--encrypted \
--tags 'Key=Name,Value=www.example.com'

In each availability zone we will need to create a mount target for the EFS. We assign the Example-EFS security group to these mount targets. As for the subnet, we specify the two public subnets to make migrating backup files easier.

aws efs create-mount-target \
--region <YOUR_REGION> \
--file-system-id fs-<YOUR_EFS_VOLUME_ID> \
--subnet-id subnet-<YOUR_PUBLIC_1A_SUBNET_ID> \
--security-groups sg-<YOUR_EFS_SG_ID>

aws efs create-mount-target \
--region <YOUR_REGION> \
--file-system-id fs-<YOUR_EFS_VOLUME_ID> \
--subnet-id subnet-<YOUR_PUBLIC_1B_SUBNET_ID> \
--security-groups sg-<YOUR_EFS_SG_ID>
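
Mount targets take a little while to become available. If you want to confirm they are ready before moving on, a query like this shows their lifecycle state:

aws efs describe-mount-targets \
--region <YOUR_REGION> \
--file-system-id fs-<YOUR_EFS_VOLUME_ID> \
--query 'MountTargets[].{Id:MountTargetId,Subnet:SubnetId,State:LifeCycleState}'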

Now that the EFS file system and its mount targets have been created, we create a temporary EC2 instance and mount the volume into it (see the bash command below for mounting EFS). We then use rsync to move the backup contents into the EFS volume. The EC2 instance is terminated after the files are transferred.

Here we use a small trick to transfer the files. Please note the --rsync-path="sudo rsync" option. This allows rsync to connect over SSH as a normal user (ec2-user) but transfer the files with root privileges.

aws ec2 run-instances \
--region <YOUR_REGION> \
--image-id ami-<AWS_LINUX_AMI_ID_FOR_YOUR_REGION> \
--instance-type t3.micro \
--count 1 \
--subnet-id subnet-<YOUR_PUBLIC_1A_SUBNET_ID> \
--key-name <YOUR_EC2_KEY_PAIR_NAME> \
--security-group-ids sg-<YOUR_EKS_NODES_SG_ID> \
--associate-public-ip-address
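
The SSH and rsync commands below need the instance’s public IP. One way to look it up, assuming the instance ID returned by the previous command, is:

aws ec2 describe-instances \
--region <YOUR_REGION> \
--instance-ids i-<YOUR_EC2_INSTANCE_ID> \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text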

ssh ec2-user@<YOUR_EC2_PUBLIC_IP> 'sudo yum install -y amazon-efs-utils; sudo mkdir -p /mnt/efs; sudo mount -t efs -o tls fs-<YOUR_EFS_VOLUME_ID>:/ /mnt/efs'

rsync -av -e 'ssh' --rsync-path='sudo rsync' /backup/. ec2-user@<YOUR_EC2_PUBLIC_IP>:/mnt/efs/

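# The official WordPress image runs as www-data (UID/GID 33), so hand ownership of the files to that user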
ssh ec2-user@<YOUR_EC2_PUBLIC_IP> 'sudo chown -R 33:33 /mnt/efs/*'

aws ec2 terminate-instances \
--region <YOUR_REGION> \
--instance-ids i-<YOUR_EC2_INSTANCE_ID>

Create RDS Database

First we need to create a Subnet Group in the VPC.

aws rds create-db-subnet-group \
--region <YOUR_REGION> \
--db-subnet-group-name example-db-subnets \
--db-subnet-group-description 'Subnet group for database instances' \
--subnet-ids subnet-<YOUR_PUBLIC_1A_SUBNET_ID> subnet-<YOUR_PUBLIC_1B_SUBNET_ID> \
subnet-<YOUR_PRIVATE_1A_SUBNET_ID> subnet-<YOUR_PRIVATE_1B_SUBNET_ID>

Then we query available database engine versions and pick one to create our database instance.

aws rds describe-db-engine-versions \
--region <YOUR_REGION> \
--engine mysql \
--query "DBEngineVersions[].EngineVersion"

aws rds create-db-instance \
--region <YOUR_REGION> \
--db-instance-identifier example-db \
--db-subnet-group-name example-db-subnets \
--vpc-security-group-ids sg-<YOUR_RDS_SG_ID> \
--publicly-accessible \
--db-instance-class db.t3.micro \
--engine mysql \
--engine-version <YOUR_CHOSEN_DB_VERSION> \
--auto-minor-version-upgrade \
--allocated-storage 5 \
--master-username <DB_ROOT_USER> \
--master-user-password <DB_ROOT_PASSWORD> \
--db-name example_db \
--storage-encrypted \
--deletion-protection
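
The instance takes a few minutes to become available. You can wait for it and then look up the endpoint host name (the <RDS_ENDPOINT> used below) with something like:

aws rds wait db-instance-available \
--region <YOUR_REGION> \
--db-instance-identifier example-db

aws rds describe-db-instances \
--region <YOUR_REGION> \
--db-instance-identifier example-db \
--query 'DBInstances[0].Endpoint.Address' \
--output text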

Now that the database is ready, we will restore the backup into it. Everyone has different tools installed, so below is just one example. I used a MySQL Docker image and mapped my backup file into it.

docker run --rm -v "/backups/mysql-backup:/var/lib/mysql-files:rw" mysql:<YOUR_CHOSEN_DB_VERSION> /bin/bash -c \
"mysql --host=<RDS_ENDPOINT> --user=<DB_ROOT_USER> --password='<DB_ROOT_PASSWORD>' -C example_db </var/lib/mysql-files/example_db_backup.sql"

Set Up EKS Cluster

Now the fun part comes! We are finally ready to create the EKS cluster.

I will assume that you have already created the required service roles for the EKS cluster (using the AmazonEKSClusterPolicy) and for the Fargate pods (using the AmazonEKSFargatePodExecutionRolePolicy). Also, if you want to use a KMS key to encrypt your Kubernetes secrets, you will need to create a key in KMS too.

Let’s create the cluster itself first. In the command below, we tell AWS to create an EKS cluster that includes all four subnets we created. We assign the Example-EKS-Nodes group to the cluster, allow both public and private control plane endpoints, and restrict the public endpoint so it is accessible only from our local machine. A KMS key is specified to encrypt all Kubernetes secrets in this cluster.

aws eks create-cluster \
--region <YOUR_REGION> \
--name Example-EKS \
--role-arn arn:aws:iam::<YOUR_AWS_ACCOUNT>:role/<YOUR_EKS_CLUSTER_SERVICE_ROLE> \
--resources-vpc-config 'subnetIds=subnet-<YOUR_PUBLIC_1A_SUBNET_ID>,subnet-<YOUR_PUBLIC_1B_SUBNET_ID>,subnet-<YOUR_PRIVATE_1A_SUBNET_ID>,subnet-<YOUR_PRIVATE_1B_SUBNET_ID>,securityGroupIds=sg-<YOUR_EKS_NODES_SG_ID>,endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs=<YOUR_LOCAL_PC_PUBLIC_IP>/32' \
--encryption-config 'resources=secrets,provider={keyArn=arn:aws:kms:<YOUR_REGION>:<YOUR_AWS_ACCOUNT>:key/<YOUR_KMS_KEY_ID>}'
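
Cluster creation takes a while (usually 10 to 15 minutes). If you prefer to block until it is active, you can wait for it:

aws eks wait cluster-active \
--region <YOUR_REGION> \
--name Example-EKS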

Here comes the second pitfall. When AWS creates an EKS cluster, it automatically creates another security group to ensure full visibility across the cluster nodes, even though we already specified that we want to use our ‘EKS-Nodes’ group. As a result, the cluster’s primary security group is this AWS-generated one and not our Example-EKS-Nodes group, which means requests from the cluster nodes will not be allowed through to the EFS mount targets…

We need to allow this cluster security group in the Example-EFS group. The same goes for the Example-RDS group.

# Query the id of the default security group of Example-EKS cluster
aws eks describe-cluster \
--region <YOUR_REGION> \
--name Example-EKS \
| jq '.cluster.resourcesVpcConfig.clusterSecurityGroupId'

# Add it to the Example-EFS group to allow access
aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_EFS_SG_ID> \
--protocol tcp \
--port 2049 \
--source-group sg-<AWS_GENERATED_EKS_SG_ID>

# Add it to the Example-RDS group to allow access
aws ec2 authorize-security-group-ingress \
--region <YOUR_REGION> \
--group-id sg-<YOUR_RDS_SG_ID> \
--protocol tcp \
--port 3306 \
--source-group sg-<AWS_GENERATED_EKS_SG_ID>

Now that the EKS cluster is ready, we need to create a Fargate profile so pods can be scheduled onto Fargate. This profile will schedule all pods in the kube-system and default namespaces to Fargate. If your deployment uses another namespace, make sure you add it to this profile or create a separate profile for it. For example, we will deploy our website to the www-example-com namespace, so we create another Fargate profile for it too.

aws eks create-fargate-profile \
--region <YOUR_REGION> \
--cluster-name Example-EKS \
--fargate-profile-name default \
--pod-execution-role-arn arn:aws:iam::<YOUR_AWS_ACCOUNT>:role/<YOUR_EKS_FARGATE_PODS_SERVICE_ROLE> \
--subnets subnet-<YOUR_PRIVATE_1A_SUBNET_ID> subnet-<YOUR_PRIVATE_1B_SUBNET_ID> \
--selectors namespace=kube-system namespace=default

aws eks create-fargate-profile \
--region <YOUR_REGION> \
--cluster-name Example-EKS \
--fargate-profile-name www-example-com \
--pod-execution-role-arn arn:aws:iam::<YOUR_AWS_ACCOUNT>:role/<YOUR_EKS_FARGATE_PODS_SERVICE_ROLE> \
--subnets subnet-<YOUR_PRIVATE_1A_SUBNET_ID> subnet-<YOUR_PRIVATE_1B_SUBNET_ID> \
--selectors namespace=www-example-com
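
To confirm both profiles were created, you can list them:

aws eks list-fargate-profiles \
--region <YOUR_REGION> \
--cluster-name Example-EKS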

We now add a Kubernetes context to our local machine, so we can issue commands to our EKS cluster.

# Add kubernetes context
aws eks --region <YOUR_REGION> update-kubeconfig --name Example-EKS

# Confirm that we are using the EKS cluster
kubectl config get-contexts

If you issue a kubectl -n kube-system get pods command, you will see that the coredns pods are all stuck in the “Pending” state and will never become “Ready”. This is because the default coredns deployment has an annotation stating that the pods should be scheduled to EC2 nodes. Since we do not have any EC2 nodes, they fail to start. We issue the following commands to remove the annotation and restart the deployment.

kubectl -n kube-system patch deployment coredns --type json \
-p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

kubectl -n kube-system rollout restart deployments
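
After the restart, the coredns pods should get rescheduled onto Fargate within a couple of minutes. You can watch the rollout with:

kubectl -n kube-system rollout status deployment coredns

kubectl -n kube-system get pods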

Install Useful Services and Controllers

After fixing the scheduling issue for coredns, we will deploy a performance metrics server, so we can use it to auto-scale our website deployments.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml
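
Once the metrics-server pod has been scheduled onto Fargate and becomes ready, metrics should start flowing. A quick check:

kubectl -n kube-system get deployment metrics-server

kubectl top pods -n kube-system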

The last step in preparing the EKS cluster is to install the AWS Load Balancer Controller to manage Application Load Balancers and their listeners and target groups. For this step, I mainly followed the AWS documentation here.

WARNING: following this documentation from beginning to end will fail to deploy the LB Controller on a Fargate-only EKS cluster. This is because some required dependencies (such as cert-manager) did not fully support Fargate-only configurations at the time of writing.

I spent a few days experimenting with different solutions and finally completed the LB Controller deployment by following Y. Spreen’s example here.

The first half of the official documentation can be followed without issue, up to the section on creating the AWS LB Controller service account.

# Download an example IAM policy for the Controller
curl -o aws-lb-controller-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.1.0/docs/install/iam_policy.json

# Create IAM policy
aws iam create-policy \
--region <YOUR_REGION> \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://aws-lb-controller-iam-policy.json

# Create service role
eksctl create iamserviceaccount \
--cluster=Example-EKS \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--override-existing-serviceaccounts \
--approve

cat <<EOF >aws-load-balancer-controller-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AmazonEKSLoadBalancerControllerRole
EOF

kubectl apply -f aws-load-balancer-controller-service-account.yaml

When it’s time to deploy the LB Controller itself, follow Y. Spreen’s example instead: basically, he removed the cert-manager dependency and used a pre-generated key pair.

# Generate cert key pair
openssl req -x509 -newkey rsa:4096 -sha256 -days 36500 -nodes \
-keyout tls.key -out tls.crt -subj "/CN=aws-load-balancer-webhook-service.kube-system.svc" \
-addext "subjectAltName=DNS:aws-load-balancer-webhook-service.kube-system.svc,DNS:aws-load-balancer-webhook-service.kube-system.svc.cluster.local"

# Add key pair as Kubernetes secrets
kubectl -n kube-system create secret generic aws-load-balancer-webhook-tls \
--from-file=./tls.crt \
--from-file=./tls.key

# Get his modified version of the manifest file
curl -o aws-lb-controller.yml https://raw.githubusercontent.com/yspreen/Fargate-AWS-HTTPS-Ingress/master/kube_configs/alb-controller.yml

# Make some edits to replace $CLUSTER_NAME etc.
nano aws-lb-controller.yml
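
# (Optional) If the manifest uses a literal $CLUSTER_NAME placeholder, a non-interactive
# substitution along these lines should also work (adjust the pattern if the file differs):
sed -i 's/\$CLUSTER_NAME/Example-EKS/g' aws-lb-controller.yml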

# Create the LB Controller
kubectl apply -f aws-lb-controller.yml
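
Before moving on, it is worth confirming the controller came up cleanly:

kubectl -n kube-system get deployment aws-load-balancer-controller

kubectl -n kube-system logs deployment/aws-load-balancer-controller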

Deploy Wordpress Pods

For confidentiality reasons I will not share the actual deployment manifest files I used to deploy our service. Instead, here is a modified version from Y. Spreen’s example. I added the manifest file for mounting the EFS volume into the Fargate pods.

First, create a storage-class.yml with the following content and install it into the EKS cluster using the commands below.

# storage-class.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/deploy/kubernetes/base/csidriver.yaml

kubectl apply -f storage-class.yml
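
You can verify that the CSI driver object and the storage class are registered:

kubectl get csidriver efs.csi.aws.com

kubectl get storageclass efs-sc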

Then create the example manifest file k8s-manifest.yml as follows. Note how we declare a PersistentVolume and a PersistentVolumeClaim, and how we use them in the volumes section of the pod specification.

# k8s-manifest.yml

apiVersion: v1
kind: Namespace
metadata:
  name: www-example-com

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-efs-pv
  namespace: www-example-com
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-<YOUR_EFS_VOLUME_ID>

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-efs-claim
  namespace: www-example-com
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-nginx-conf
  namespace: www-example-com
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;

    pid /var/run/nginx.pid;

    events {
      worker_connections 1024;
    }

    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      sendfile on;
      keepalive_timeout 65;

      index index.html index.htm;

      server {
        listen 81;

        client_max_body_size 50M;

        location / {
          alias /var/www/;
        }
      }
    }

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: index-html-conf
  namespace: www-example-com
data:
  index.html: |-
    <meta charset="UTF-8">Hi.<br>– yspreen

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: www-example-com
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-proxy-config
          configMap:
            name: proxy-nginx-conf
        - name: index-html-config
          configMap:
            name: index-html-conf
        - name: app-persist
          persistentVolumeClaim:
            claimName: app-efs-claim
      terminationGracePeriodSeconds: 5
      containers:
        - name: nginx
          image: nginx:alpine
          resources:
            requests:
              memory: "25Mi"
              cpu: "25m"
          ports:
            - containerPort: 81
          volumeMounts:
            - name: nginx-proxy-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: index-html-config
              mountPath: /var/www/index.html
              subPath: index.html
            - name: app-persist
              subPath: web
              mountPath: /var/www/web

---

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: www-example-com
spec:
  ports:
    - port: 82
      targetPort: 81
      protocol: TCP
  type: NodePort
  selector:
    app: nginx

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: www-example-com
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<YOUR_REGION>:<YOUR_AWS_ACCOUNT>:certificate/<YOUR_ACM_CERT_ID>
    alb.ingress.kubernetes.io/group.name: backend-ingress
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 301,302,200
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    kubernetes.io/ingress.class: alb
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: nginx-service
              servicePort: 82

Now we can deploy these resources into the EKS cluster.

kubectl apply -f k8s-manifest.yml
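
Give the controller a minute or two to provision the ALB, then check that everything is up and grab the load balancer’s DNS name from the ingress:

kubectl -n www-example-com get pods,svc,ingress

kubectl -n www-example-com get ingress nginx-ingress \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'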

We can also set up auto-scaling for this example deployment.

kubectl -n www-example-com autoscale deployment/nginx-deployment --cpu-percent=70 --min=1 --max=10
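
The resulting HorizontalPodAutoscaler can be inspected with:

kubectl -n www-example-com get hpa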

Summary

Deploying a Fargate-only EKS cluster is a fairly new topic, with AWS having announced its general availability only about a year ago. Although I feel this new infrastructure has great potential, learning resources for it are currently very limited. I hope this blog article can be of some help to people who are interested in experimenting with it.

Please give me a clap if you like this article, and feel free to leave comments below.
