Build Microservices App with AWS EKS and RDS

Ahmed Ghonem
19 min read · May 28, 2023


Recently, I was building a microservices backend that utilized AWS EKS and an AWS RDS instance, so I want to go through the deployment plan with you. Let’s get started!

Getting Started

Before we dive deep into our deployment plan, let’s discuss some concepts and prerequisites. The main purpose of this article is to see how we can deploy an EKS cluster that utilizes an Amazon RDS instance (we will be using PostgreSQL, but you can choose whatever you want) and how the two communicate with each other.

Prerequisites

We need a few tools to set up our production-ready app. Ensure you have each of the following tools in your working environment (these are the tools used throughout this article):

  • AWS CLI — to interact with AWS services from your terminal
  • eksctl — to create and manage the EKS cluster
  • kubectl — to interact with the Kubernetes API
  • Helm — to install the Ingress-Nginx controller
  • Docker — to build and push our service images

To interact with the various AWS services, it is essential that the credentials your AWS CLI uses have the necessary permissions in AWS Identity and Access Management (IAM).

Now that we’ve installed these tools, let’s discuss the deployment plan of our microservices app.

Deployment Plan

Our deployment plan is divided into multiple parts. Some of them will only be executed once (like creating the cluster or provisioning the database instance on AWS), and other actions will be executed more than once (for example, after each update to our source code we want to redeploy our application).

High Level Architecture

This is a very high-level architecture of what we will be building today (we’re gonna discuss each step in more detail later on). I’m gonna stick with the Shared Database pattern for this microservices app.

⚠️ In this documentation, we will utilize shell variables to simplify the process of substituting the actual names relevant to your deployment. Whenever you come across placeholders such as NAME=<your xyz name>, replace the value with your own. This approach will make it more convenient to personalize the instructions according to your deployment needs.

1. Set up AWS EKS cluster

First we need to set up the AWS EKS cluster. We will use eksctl to create it for us, and we will go with a free-tier-eligible node group (EC2 instance type).

EKS_CLUSTER_NAME="myapp-cluster"
REGION="eu-west-1" # the AWS region we will run our cluster in; feel free to change it to your preferred region
NODE_GROUP_NAME='linux-nodes'
NODE_TYPE='t2.micro' # the EC2 instance type for each node (https://aws.amazon.com/ec2/instance-types)
MIN_NUMBER_OF_NODES=2
DESIRED_NUMBER_OF_NODES=2
MAX_NUMBER_OF_NODES=6 # we set the desired and minimum node count to 2 and expand when needed until we reach the max (6)


eksctl create cluster \
  --name "${EKS_CLUSTER_NAME}" \
  --region "${REGION}" \
  --with-oidc \
  --nodegroup-name "${NODE_GROUP_NAME}" \
  --node-type "${NODE_TYPE}" \
  --nodes "${DESIRED_NUMBER_OF_NODES}" \
  --nodes-min "${MIN_NUMBER_OF_NODES}" \
  --nodes-max "${MAX_NUMBER_OF_NODES}"

By executing this command with the appropriate values substituted, you will create an EKS cluster in the specified region with the desired configuration for worker nodes, including their instance type, number, and minimum/maximum limits.

☕ This command might take 10–20 minutes to provision the Amazon EKS cluster, so you can take this opportunity to relax and enjoy a cup of coffee or engage in any other activity of your choice.

After the command completes, you can ensure that our nodes are ready by running the following command:

kubectl get nodes

If you get something like the following output, then you’re good to go (you might see more rows than this, based on the desired number of nodes that you set earlier):

NAME                                            STATUS   ROLES    AGE     VERSION
ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal   Ready    <none>   2m17s   v1.21.5-eks-9017834
ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal   Ready    <none>   2m16s   v1.21.5-eks-9017834

2. Create a Namespace

In Kubernetes, namespaces serve as a means to segregate and organize groups of resources within a single cluster. They provide isolation and help categorize resources based on their associated applications.

When a cluster is created for the first time, Kubernetes automatically generates a namespace named default. However, it is not recommended to utilize this default namespace in a production environment.

Kubernetes Namespaces

To view the existing namespaces within a cluster, you can execute the following command:

#this command will list all the namespaces you have in your cluster
kubectl get namespaces

So let’s create our own namespace that will hold the resources related to our app:

APP_NAMESPACE=myapp-ns # choose whatever namespace you want
kubectl create ns "${APP_NAMESPACE}" # this command will create a namespace called myapp-ns

You can verify that it’s created by running the earlier command one more time, kubectl get namespaces, and you should see your recently created namespace.

3. Set up Database Networking

So before we create and provision our database instance, we must set up networking so that our Pods (the containers that run our services) can communicate with our database (the database itself will be created in step 4).

⚠️ The DBSubnetGroup and DBInstance custom resources used in steps 3 and 4 come from ACK (AWS Controllers for Kubernetes), so make sure the ACK service controller for Amazon RDS is installed in your cluster before continuing.

The following snippet finds the subnets that the Amazon EKS cluster is using. It then generates a Kubernetes manifest for a DBSubnetGroup custom resource with the list of subnets to add to the DB subnet group:

RDS_SUBNET_GROUP_NAME="myapp-db-subnet"
RDS_SUBNET_GROUP_DESCRIPTION="A subnet used by our cluster and other services that connect with it like RDS"
EKS_VPC_ID=$(aws eks describe-cluster --name="${EKS_CLUSTER_NAME}" \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text)
EKS_SUBNET_IDS=$(aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=${EKS_VPC_ID}" \
  --query 'Subnets[*].SubnetId' \
  --output text
)

cat <<-EOF > db-subnet-groups.yaml # this creates a file called db-subnet-groups.yaml in your current working directory
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBSubnetGroup
metadata:
  name: ${RDS_SUBNET_GROUP_NAME}
  namespace: ${APP_NAMESPACE}
spec:
  name: ${RDS_SUBNET_GROUP_NAME}
  description: ${RDS_SUBNET_GROUP_DESCRIPTION}
  subnetIDs:
$(printf "    - %s\n" ${EKS_SUBNET_IDS})
  tags: []
EOF

The above command performs the following actions to create a Kubernetes manifest file for a DBSubnetGroup custom resource:

  1. It assigns a value to the RDS_SUBNET_GROUP_NAME variable, which represents the desired name for the DB subnet group.
  2. It assigns a value to the RDS_SUBNET_GROUP_DESCRIPTION variable, which provides a description for the DB subnet group.
  3. It uses the AWS CLI command aws eks describe-cluster to retrieve the VPC ID (Created automatically when creating the cluster) associated with the specified EKS cluster name (${EKS_CLUSTER_NAME}).
  4. It uses the AWS CLI command aws ec2 describe-subnets to retrieve the subnet IDs of the subnets within the VPC identified by ${EKS_VPC_ID}.
  5. It creates a file named db-subnet-groups.yaml in the current working directory, which contains the Kubernetes manifest for the DBSubnetGroup custom resource.
  6. The manifest specifies the name, description, and subnetIDs for the DBSubnetGroup custom resource, using the values previously assigned to the variables.
  7. The subnetIDs section is populated with the subnet IDs obtained from the AWS CLI command, using a printf statement to format the subnet IDs properly.
  8. The manifest file also includes metadata such as the resource name, namespace (${APP_NAMESPACE}), and any applicable tags.
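To make step 7 concrete, here is a quick sketch (in JavaScript, purely for illustration — the subnet IDs are made up) of what the printf formatting is meant to produce: each whitespace-separated subnet ID becomes a properly indented YAML list item.

```javascript
// Illustration of the printf formatting from the snippet above:
// turn a list of subnet IDs into indented YAML list items.
const subnetIds = ['subnet-aaa111', 'subnet-bbb222', 'subnet-ccc333'];

const yamlListItems = subnetIds.map((id) => `    - ${id}`).join('\n');
console.log(yamlListItems);
// Each ID becomes a "    - subnet-..." line, ready to sit under "subnetIDs:"
```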

This generated manifest file will be used to create the DBSubnetGroup custom resource in the Kubernetes cluster. It ensures that the desired subnets are associated with the DBSubnetGroup, allowing communication between the services running in the cluster and the RDS database.

The above command should create a file called db-subnet-groups.yaml in your working directory that looks something like this

#db-subnet-groups.yaml (infra/rds/db-subnet-groups.yaml)

apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBSubnetGroup
metadata:
  name: myapp-db-subnet
  namespace: myapp-ns # notice it's being created in our namespace
spec:
  name: myapp-db-subnet
  description: A subnet used by our cluster and other services that connect with it like RDS
  subnetIDs: # make sure this list keeps the correct indentation
    - subnet-xxxxxxxxxxxxxxxxx
    - subnet-xxxxxxxxxxxxxxxxx
    - subnet-xxxxxxxxxxxxxxxxx
    - subnet-xxxxxxxxxxxxxxxxx
    - subnet-xxxxxxxxxxxxxxxxx
    - subnet-xxxxxxxxxxxxxxxxx
  tags: []

Now let’s apply this K8s manifest file to our cluster. To do so, just run the following command with the path to the manifest file:

kubectl apply -f db-subnet-groups.yaml

When applied, Kubernetes creates this DBSubnetGroup custom resource in our myapp-ns namespace. The ACK service controller for Amazon RDS detects the new DBSubnetGroup resource, and then interfaces with the Amazon RDS API to create the subnet group.

One last step: we now need to create the security group that allows the Pods in this Amazon EKS cluster to access your Amazon RDS database instance. You can do this with the following commands:

RDS_SECURITY_GROUP_NAME="myapp-pods-rds-security-group"
RDS_SECURITY_GROUP_DESCRIPTION="The security group that allows the Pods in this Amazon EKS cluster to access your provisioned Amazon RDS databases."
# To connect our cluster to RDS, they must be in the same VPC, and we must allow inbound access from the services to RDS.
# Retrieve the CIDR block range of the EKS VPC so that we can use it in the security group below.
EKS_CIDR_RANGE=$(aws ec2 describe-vpcs \
  --vpc-ids "${EKS_VPC_ID}" \
  --query "Vpcs[].CidrBlock" \
  --output text
)
# Create the security group in AWS in the same VPC.
RDS_SECURITY_GROUP_ID=$(aws ec2 create-security-group \
  --group-name "${RDS_SECURITY_GROUP_NAME}" \
  --description "${RDS_SECURITY_GROUP_DESCRIPTION}" \
  --vpc-id "${EKS_VPC_ID}" \
  --output text
)
# Add the specified inbound (ingress) rule to the security group.
aws ec2 authorize-security-group-ingress \
  --group-id "${RDS_SECURITY_GROUP_ID}" \
  --protocol tcp \
  --port 5432 \
  --cidr "${EKS_CIDR_RANGE}"

The overall purpose of the above commands is to create an AWS security group that enables communication between the Pods in the EKS cluster and the RDS databases. By allowing inbound access on the specified port (5432) from the CIDR range of the EKS VPC, the Pods can establish connections to the RDS databases for data exchange and other interactions:

  1. It assigns a value to the RDS_SECURITY_GROUP_NAME variable, which represents the desired name for the security group.
  2. It assigns a value to the RDS_SECURITY_GROUP_DESCRIPTION variable, providing a description for the security group.
  3. It uses the AWS CLI command aws ec2 describe-vpcs to retrieve the CIDR block range of the AWS EKS VPC associated with the specified VPC ID ($EKS_VPC_ID), and assigns it to EKS_CIDR_RANGE variable
  4. It uses the AWS CLI command aws ec2 create-security-group to create a security group in AWS with the specified name, description, and associated VPC ID ($EKS_VPC_ID), and stores the security group ID in the RDS_SECURITY_GROUP_ID variable.
  5. It uses the AWS CLI command aws ec2 authorize-security-group-ingress to add an inbound (ingress) rule to the security group. This rule allows TCP traffic on port 5432 (used for PostgreSQL and might be changed for other database engines) from the CIDR range of the EKS VPC ($EKS_CIDR_RANGE).
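To see why allowing the VPC’s CIDR range is enough for the Pods to reach the database, here is a small JavaScript sketch of CIDR membership (the IP addresses and range below are made-up examples, not values from your VPC):

```javascript
// Convert a dotted IPv4 address to a 32-bit unsigned integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// Check whether `ip` falls inside the CIDR block, e.g. "192.168.0.0/16":
// an address is allowed if its network prefix matches the block's prefix.
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  const mask = Number(bits) === 0 ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

// A pod IP inside the VPC range is allowed through the security group rule:
console.log(inCidr('192.168.42.7', '192.168.0.0/16')); // true
// An address outside the VPC is not:
console.log(inCidr('10.0.0.5', '192.168.0.0/16')); // false
```

This is exactly the check the security group performs on incoming connections to port 5432: source address inside the EKS VPC’s CIDR block, allow; otherwise, deny.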

4. Provision an RDS for PostgreSQL Instance

With the ACK service controller for Amazon RDS, we can provision an Amazon RDS for PostgreSQL database instance using the Kubernetes API. We can do this by creating a DBInstance custom resource. The DBInstance custom resource definition follows the Amazon RDS API, so you can also use that as a reference while constructing your custom resource.

Before we create a DBInstance custom resource, we must first create a Kubernetes Secret that contains the primary database username and password. Both the DBInstance and our services must know these credentials in order to access the database. Provide your desired username and password and create the Secret:

RDS_DB_USERNAME="postgres"
RDS_DB_PASSWORD="<your secure password>" #make sure to replace this to your password
RDS_SECRET_NAME="myapp-postgres-creds"
kubectl create secret generic -n "${APP_NAMESPACE}" "${RDS_SECRET_NAME}" \
  --from-literal=username="${RDS_DB_USERNAME}" \
  --from-literal=password="${RDS_DB_PASSWORD}"
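As a sketch of how a service later consumes this Secret: if its keys are injected into the container as environment variables (e.g. via secretKeyRef in the Deployment) and the host is taken from the RDS endpoint reported in the DBInstance status, a Node service might assemble its connection string like this. The env var names here are my own convention, not something the Secret mandates:

```javascript
// Hypothetical env var names: DB_USERNAME/DB_PASSWORD would come from the
// Secret, DB_HOST from the RDS endpoint in the DBInstance status.
function buildDatabaseUrl(env) {
  const { DB_USERNAME, DB_PASSWORD, DB_HOST, DB_PORT = '5432', DB_NAME } = env;
  // encodeURIComponent guards against special characters in the password
  return `postgres://${DB_USERNAME}:${encodeURIComponent(DB_PASSWORD)}@${DB_HOST}:${DB_PORT}/${DB_NAME}`;
}

console.log(buildDatabaseUrl({
  DB_USERNAME: 'postgres',
  DB_PASSWORD: 'p@ss/word',
  DB_HOST: 'myapp-db.xxxx.eu-west-1.rds.amazonaws.com',
  DB_NAME: 'myapp',
}));
// → postgres://postgres:p%40ss%2Fword@myapp-db.xxxx.eu-west-1.rds.amazonaws.com:5432/myapp
```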

Now we can create the Amazon RDS database instance! The following manifest provisions a highly available Amazon RDS for PostgreSQL Multi-AZ database, with backups, performance insights, and encrypted storage:

  1. Run the following command in your terminal to create a manifest file in your working directory which specifies the needed configuration for the database instance
#path in our git repository: /backend/infra/rds/myapp-db.yaml
RDS_DB_INSTANCE_NAME="myapp-db"
RDS_DB_INSTANCE_CLASS="db.t3.micro"
RDS_DB_STORAGE_SIZE=20
cat <<-EOF > myapp-db.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: ${RDS_DB_INSTANCE_NAME}
  namespace: ${APP_NAMESPACE}
spec:
  allocatedStorage: ${RDS_DB_STORAGE_SIZE}
  autoMinorVersionUpgrade: true
  backupRetentionPeriod: 7
  dbInstanceClass: ${RDS_DB_INSTANCE_CLASS}
  dbInstanceIdentifier: ${RDS_DB_INSTANCE_NAME}
  dbName: myapp
  dbSubnetGroupName: ${RDS_SUBNET_GROUP_NAME}
  performanceInsightsEnabled: true
  engine: postgres
  engineVersion: "13"
  masterUsername: ${RDS_DB_USERNAME}
  masterUserPassword:
    namespace: ${APP_NAMESPACE}
    name: ${RDS_SECRET_NAME}
    key: password
  multiAZ: true
  publiclyAccessible: false
  storageEncrypted: true
  storageType: gp2
  vpcSecurityGroupIDs:
    - ${RDS_SECURITY_GROUP_ID}
EOF

The created YAML file should look something like this

# myapp-db.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: myapp-db # instance name
  namespace: myapp-ns # our namespace
spec:
  allocatedStorage: 20
  autoMinorVersionUpgrade: true
  backupRetentionPeriod: 7
  dbInstanceClass: db.t3.micro # our database instance type
  dbInstanceIdentifier: myapp-db
  dbName: myapp # the created database name
  dbSubnetGroupName: myapp-db-subnet # the DB subnet group we created in step 3
  performanceInsightsEnabled: true
  engine: postgres
  engineVersion: '13'
  masterUsername: postgres
  masterUserPassword:
    namespace: myapp-ns
    name: myapp-postgres-creds
    key: password
  multiAZ: true
  publiclyAccessible: false
  storageEncrypted: true
  storageType: gp2
  vpcSecurityGroupIDs:
    - sg-xyzfbdahxhs

Before applying this manifest, let’s go over the most important fields:

  • name: Specifies the name of the database instance as myapp-db.
  • namespace: Specifies the namespace where the database instance will be created.
  • allocatedStorage: Sets the allocated storage size for the database instance to 20 GB.
  • dbInstanceClass: Defines the instance type as db.t3.micro.
  • dbInstanceIdentifier: Specifies the unique identifier for the database instance.
  • dbName: Sets the name of the database as myapp.
  • dbSubnetGroupName: Specifies the name of the DB subnet group associated with the database instance.
  • masterUsername: Defines the username for the master user of the database.
  • masterUserPassword: Specifies the reference to the Kubernetes Secret containing the password for the master user.
  • vpcSecurityGroupIDs: Specifies the IDs of the security groups associated with the database instance, where <RDS_SECURITY_GROUP_ID> is the actual ID.

Now let’s create and provision the Amazon RDS database instance within the Kubernetes cluster.

kubectl apply -f myapp-db.yaml

You can view the details of your Amazon RDS for PostgreSQL instance using the following command (This might take a while).

kubectl describe dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}"

🫖 This command might also take about 10–15 minutes to provision the RDS instance, so you can take another opportunity to relax and enjoy a cup of tea this time.

5. Deploy Ingress-Nginx Controller

For this step we will be using Ingress-Nginx Controller to provision an ELB instance and create an Ingress Controller used for routing requests to a specific pod that will have our service container later on

5.1 Install Ingress-Nginx Controller

First we need to run this command to install or upgrade (it’s an idempotent command) the Ingress-Nginx controller in our Kubernetes cluster using Helm:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

This command uses Helm to install or upgrade the Ingress-Nginx controller, pulling the chart from the specified repository and installing it in the ingress-nginx namespace. The --create-namespace flag ensures that the namespace is created if it doesn't exist. Let's break down the command and explain some of the important parameters:

  • helm upgrade: This command is used to upgrade an existing release or install a new release using Helm, which is a package manager for Kubernetes.
  • --install: This flag indicates that the command should perform an installation if the specified release does not exist.
  • ingress-nginx: This is the name of the Helm release, which represents the installation of the Ingress-Nginx controller.
  • --repo https://kubernetes.github.io/ingress-nginx: This specifies the repository from which Helm should fetch the chart. In this case, the Ingress-Nginx chart is hosted in the repository located at the provided URL.
  • --namespace ingress-nginx: This sets the namespace in which the Ingress-Nginx controller will be installed. Namespaces are a way to logically isolate resources within a Kubernetes cluster.
  • --create-namespace: This flag instructs Helm to create the specified namespace if it does not already exist. In this case, it will create the ingress-nginx namespace if it is not present.

You can verify that it’s working by running the following command:

kubectl get pods --namespace=ingress-nginx

5.2 Install Environment-Specific (AWS) Resources

In AWS, we use a Network Load Balancer (NLB, a type of ELB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.1/deploy/static/provider/aws/deploy.yaml

When this command is executed, Kubernetes will retrieve the deploy.yaml file from the provided URL and apply the resources defined in the manifest to the cluster. The manifest includes all the necessary components and configurations required to deploy the Ingress-Nginx controller on an AWS cluster, such as Deployment, Service, ConfigMap, and RBAC (Role-Based Access Control) resources.

The deploy.yaml file contains a collection of Kubernetes resources defined in YAML format. These resources include:

  • Deployment: It specifies how many replicas of the Ingress-Nginx controller should be created, along with other deployment-related settings.
  • Service: It defines a Kubernetes Service that exposes the Ingress-Nginx controller pods internally within the cluster.
  • ConfigMap: It provides configuration data to the Ingress-Nginx controller, such as customizing the controller behavior or enabling certain features.
  • RBAC (Role-Based Access Control) resources: These resources define the necessary roles, role bindings, and service accounts to grant the Ingress-Nginx controller the required permissions to operate within the cluster.

By applying this manifest file to the cluster, Kubernetes will create or update the specified resources, effectively deploying the Ingress-Nginx controller for AWS. Once deployed, the Ingress-Nginx controller will be responsible for managing Ingress resources and handling incoming traffic, allowing you to configure and manage routing and load balancing for your services within the Kubernetes cluster.

6. Let’s Create our First Service

Now it’s time to create our very first, very simple service: an Express server with only one endpoint that returns Hello from Auth Service!, This is /auth/me route!

6.1. Create an Express Server

Step 1: Create a new directory for your project:

mkdir auth-service
cd auth-service

Step 2: Initialize a new Node.js project and install Express:

npm init -y
npm install express

Step 3: Create an index.js file and add the following code:

const express = require('express');
const app = express();

app.get('/auth/me', (req, res) => {
  res.send('Hello from Auth Service!, This is /auth/me route!');
});

const port = 3000;
app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

6.2. Dockerizing the Express Server

Now, let’s create a Dockerfile to package our Express server into a Docker image.

Step 1: Create a new file called Dockerfile in the project directory.

touch Dockerfile

Step 2: Open the Dockerfile in a text editor and add the following content:

# Use the official Node.js 14 base image
FROM node:14
# Set the working directory in the container
WORKDIR /app
# Copy package.json and package-lock.json to the container
COPY package*.json ./
# Install project dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose port 3000
EXPOSE 3000
# Start the Express server
CMD ["node", "index.js"]

Step 3: Run the following command to build the Docker image:

docker build -t myapp-auth .
# Note: The -t flag is used to specify a tag for the image, in this case, myapp-auth.

Step 4: Run the following command to start a container from the image:

docker run -p 3000:3000 -d myapp-auth

Step 5: Open your web browser and visit http://localhost:3000/auth/me. You should see the message Hello from Auth Service!, This is /auth/me route! displayed.

6.3. Push your Docker Image to DockerHub

Now one last step before creating our deployment in the K8s cluster is to push the image to DockerHub so that we can use it in our cluster

Step 1: Log in to DockerHub

To push your image to DockerHub, you need to have an account. If you don’t have one, create a DockerHub account by visiting the DockerHub website. Once you have an account, open your terminal and log in to DockerHub using the following command:

docker login

Enter your Docker Hub username and password when prompted.

Step 2: Tag Your Docker Image
To push your image to Docker Hub, you need to tag it with your DockerHub username and the repository name. The repository name typically follows the format username/repository. Use the following command to tag your image:

docker tag myapp-auth <username>/myapp-auth:latest
# Replace `<username>` with your Docker Hub username.

Step 3: Push Your Docker Image
Once you’ve tagged your Docker image, you can push it to Docker Hub using the following command:

docker push <username>/myapp-auth:latest
# Again, replace `<username>` with your Docker Hub username.
# This command uploads your image to Docker Hub, and the process may take some time depending on the image size and your internet connection speed.

Step 4: Verify the Push
After the push is complete, you can verify that your image has been successfully pushed to DockerHub by visiting your DockerHub profile on the Docker Hub website. You should see your repository listed along with the pushed image and its associated tags.

👏 Congratulations! You have successfully created a simple Express server, containerized it with Docker, and pushed it to DockerHub. Now you can easily distribute and run your container image in any Docker-compatible environment, like our K8s cluster.

7. Create a Deployment for our Service

Now it’s time to deploy our Auth service to our EKS cluster, so let’s get started.

K8s Deployment

Create Deployment

We will start by creating the deployment manifest. We will work on the Auth service, but all services are defined with the same criteria.

  1. Create a separate directory in your project for your deployment files
  2. Create a deployment file that describes how the pod will be created
# infra/k8s/services/auth-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
  namespace: myapp-ns # our working namespace, where this resource will be created
spec:
  replicas: 1 # the number of pods to be created (fixed to one at the moment)
  selector: # how to find all the pods that this deployment manages
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth # name of the service
          image: <dockerhub-username>/myapp-auth # <your-dockerhub-username>/<image-name>
          ports:
            - containerPort: 3000 # the port that the container runs on
---
# K8s Service
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
  namespace: myapp-ns
spec:
  type: ClusterIP # reachable from other pods inside the cluster; this is what the ingress controller routes to
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000 # the port the Service exposes
      targetPort: 3000 # the port the container listens on
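The manifest above doesn’t yet give the container access to the database credentials. One common way to wire them in (a sketch only; the env var names are my own choice) is to extend the container spec with env entries backed by the Secret from step 4:

```yaml
# Sketch: add under the auth container in auth-depl.yaml (env var names are illustrative)
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: myapp-postgres-creds # the Secret created in step 4
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-postgres-creds
                  key: password
```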

3. Apply the service manifest file to the cluster by running the following command

#This will create a Deployment (your app container) and a ClusterIP Service (essential for enabling communication and connectivity between different pods and services within the cluster)
kubectl apply -f path/to/your/deployment/file

You can verify that the pod is created and running by running the following command:

kubectl get pods -n myapp-ns

Once the pod is created and running, we still have one final step.

8. Create our Ingress Controller (Redirection Rules)

Now it’s time to create an Ingress resource that distributes requests to the required service based on the request path.

8.1 Create your Routing Rules

1. Create your ingress manifest and add the following code in your ingress-srv.yaml file (it’s gonna be explained below, don’t worry)

# infra/ingress/ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  namespace: myapp-ns
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: api.myapp.io # change this to whatever your domain is
      http:
        paths:
          - path: /auth/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000

This Ingress resource configures the NGINX Ingress Controller to route HTTP traffic with the host api.myapp.io and paths starting with /auth/ to the auth-srv service running on port 3000. It enables more advanced routing and load balancing for the specified endpoint within the cluster. Let's break it down:

  • apiVersion and kind specify the API version and resource type, indicating that this is an Ingress resource.
  • metadata contains information about the Ingress, including its name and namespace.
  • annotations provide additional configuration settings for the Ingress. In this case, it specifies that the Ingress should be managed by the NGINX Ingress Controller and enables the use of regular expressions for path matching.
  • spec defines the desired state of the Ingress resource.
  • rules specify the routing rules based on the incoming request's host.
  • host defines the domain or hostname for which the Ingress will handle traffic.
  • http contains the HTTP-specific configuration for the Ingress.
  • paths define the URL paths that should be matched and routed to the specified backend service.
  • path specifies the path pattern to match. In this case, any path starting with /auth/ followed by any characters is matched.
  • pathType specifies the type of matching for the path. Prefix means the path should be matched as a prefix.
  • backend specifies the backend service that should receive the matched requests.
  • name is the name of the service (auth-srv) to which the traffic should be routed.
  • port specifies the port number (3000) on which the service is listening.
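To get a feel for what the /auth/?(.*) pattern actually matches (nginx compiles it with its own regex engine, but for a simple pattern like this JavaScript’s RegExp behaves the same way), here is a quick sketch:

```javascript
// The ingress path /auth/?(.*), start-anchored the way nginx applies it:
const rule = /^\/auth\/?(.*)/;

for (const path of ['/auth/me', '/auth/', '/auth', '/authx', '/users/me']) {
  const m = path.match(rule);
  console.log(path, '=>', m ? `matched, captured "${m[1]}"` : 'no match');
}
// Note that "/authx" also matches (captured "x"), because the slash is
// optional; a pattern like /auth(/|$)(.*) would exclude such paths.
```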

2. Now let’s apply this file to create the Ingress resource in our cluster

kubectl apply -f infra/ingress/ingress-srv.yaml

3. Verify the creation of the Ingress resource by running the following command

$ kubectl get ing -n myapp-ns

NAMESPACE   NAME          CLASS   HOSTS          ADDRESS                           PORTS     AGE
myapp-ns    ingress-srv   nginx   api.myapp.io   xyz.eu-west-1.elb.amazonaws.com   80, 443   10s

4. Verify the creation of the ingress-nginx service by running the following command; you should see output like the example below

$ kubectl get svc -n ingress-nginx

NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP                          PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   xx.xxx.xx.xxx   xyzadw.eu-west-1.elb.amazonaws.com   80:32025/TCP,443:30417/TCP   10s

⚠️ It might take a while before you see a DNS value in the ADDRESS or EXTERNAL-IP columns, but keep waiting until you see something like xyz.<your-selected-region>.elb.amazonaws.com

[Optional] You can then use the EXTERNAL-IP value to create a CNAME record for your own domain (we will cover this in another article)

8.2 Test it Locally

  1. Get the IP address of the LoadBalancer by running the following command

# Replace the hostname below with the EXTERNAL-IP value you got from `kubectl get svc -n ingress-nginx`
$ nslookup xyzadw.eu-west-1.elb.amazonaws.com

Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
Name: xyzadw.eu-west-1.elb.amazonaws.com
Address: xx.xxx.xxx.xxx
Name: xyzadw.eu-west-1.elb.amazonaws.com
Address: yy.yyy.yyy.yyy
Name: xyzadw.eu-west-1.elb.amazonaws.com
Address: zz.zz.zzz.zzz

2. Take any of the addresses listed above (zz.zz.zzz.zzz for example) and inject it into the following command to test it out

$ curl --resolve api.myapp.io:80:zz.zz.zzz.zzz http://api.myapp.io/auth/me

"Hello from Auth Service!, This is /auth/me route!"

By running this curl command, you are sending an HTTP GET request to http://api.myapp.io/auth/me with a specific IP address and port resolution for api.myapp.io. The actual IP address will be resolved based on the custom resolution provided with the --resolve option. This allows you to test the connectivity and response from the specified endpoint.

If you saw Hello from Auth Service!, This is /auth/me route! after running the command above, congrats 🥳! You made it and created your first service.

Adding New Services to our Cluster

We can now create new services and add them to our cluster using the same steps we used earlier:

Adding new services to your EKS cluster
  1. Create & Dockerize your service (a NodeJS server or any service of any kind)
  2. Create & Apply deployment manifest file (Step #7), service-depl.yaml
  3. Add the redirection rules to your ingress-controller by updating the ingress-srv.yaml file to include the new paths of your service
  4. Apply the changes to your cluster by running kubectl apply -f path/to/your/ingress-srv/file

Conclusion

In this article, we explored the process of building a microservices application using AWS EKS (Elastic Kubernetes Service) and RDS (Relational Database Service). We discussed the deployment plan, including the necessary prerequisites and tools required for the setup.

Feel free to connect with me on social media for more tech-related content and discussions:

Github: @3ba2ii LinkedIn: @Ahmed Ghonem

Stay tuned for more articles and tutorials on building scalable and resilient applications with modern technologies. Happy coding!


Written by Ahmed Ghonem

Software Engineer @Microsoft. Backend Enthusiast
