
Kubernetes and AWS EKS: Partners in Production


A prerequisite to connecting the dots between Kubernetes and AWS EKS is getting to know Docker. We will give you a brief overview of its general purpose, but we strongly advise you to try it out and have fun with it to better understand this blog’s topic.

Docker – Brief Introduction

Docker is an open-source containerization tool that enables you to create isolated environments for easier application deployment. The most straightforward way of explaining it is this: you bundle up the commands you would usually execute on a server into a Dockerfile, build an image from it, and then run containers from that image. For example, you can build out the following stack (a minimal command sketch follows the list):

  • One container serving the front end of your app (e.g. React, Vue),
  • a second container serving your API or back end (NodeJS, Django, etc.), and
  • a third container running your database (MongoDB, MySQL, etc.)
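As a quick sketch of that build-then-run flow, assuming a Dockerfile for the API sits in the current directory (the image name my-api and port 3000 are purely illustrative):

# Build an image from the Dockerfile in the current directory
docker build -t my-api .

# Run a container from that image, mapping the API port to the host
docker run -d --name my-api -p 3000:3000 my-api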

So what happens when we introduce Kubernetes into the mix?

[Figure: Kubernetes architecture illustration]

Kubernetes – Clustering up

Kubernetes (K8s) is an open-source orchestration system that lets you efficiently automate and scale deployments and services across multiple instances. It was made to support complex architectures built from Docker containers and actively used in production environments, and it does so while offering security, flexibility, and constant availability.

It allows you to manage different microservices by grouping them into clusters that you control. Instead of spending all day setting up and configuring load balancers to serve a high throughput of requests, with K8s it can be as simple as one command (see the sketch below).
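For example, assuming you already have a Deployment called my-app listening on port 80 (the name and port here are illustrative), a single kubectl command puts a cloud load balancer in front of it:

# Expose an existing deployment behind a cloud load balancer
kubectl expose deployment my-app --type=LoadBalancer --port=80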

Architecture

A cluster consists of master nodes and worker nodes, each of which can be either a physical or a virtual machine. The master node is commonly referred to as the control plane node. Every node runs the components that let the control plane actively “speak” to it (most importantly the kubelet agent and a container runtime), while tools such as kubectl and kubeadm are used to manage and bootstrap the cluster.

The control plane also schedules which node each Pod will run on, using the kube-scheduler. Scheduling doesn’t refer to time here, but rather to making the best placement decision:

“Where should this Pod run, given how much CPU and memory it needs?”

A Pod represents a group of one or more containers. Containers in a Pod always run on the same node, and a node can host multiple Pods. Most of the time Pods are not created directly; instead, they are created through workload resources such as Deployments. You can have Pods running a single container (the most common case) or Pods running multiple containers that share resources with each other.
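Once you have a cluster up and running (as we will below), you can see which node the scheduler placed each Pod on:

# List pods together with the node each one was scheduled onto
kubectl get pods -o wide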

Kubernetes uses Services (custom-defined YAML/JSON templates) to open up network access to Pods, either externally or internally. It uses Deployments (also custom-defined YAML/JSON templates) to declare the desired state of your app and launch Pods on your cluster.

Now that we’ve covered the basics, we can start clustering our way up with Kubernetes and AWS EKS.

Dev Environment for Kubernetes and AWS EKS

Instead of setting up everything manually, we are going to let AWS do the heavy lifting for us. AWS, among many other cloud providers (Linode, GCP, Azure, DigitalOcean), offers a service called AWS EKS (Elastic Kubernetes Service) that creates the control plane and worker nodes for us. We can create a cluster using the Management Console or via eksctl. After creation, we control our Pods, Services, and Deployments using kubectl.

For the purposes of this guide, you should have an IAM user on AWS that can access AWS EKS; that user will likely need admin-level permissions. Make sure you have the access key ID and secret access key for the user you create.

Let’s quickly create a Docker container in which to execute our commands:

docker run -dit --name eks-handler --platform linux/amd64 ubuntu:latest

Once that’s done, let’s open a bash shell inside it:

docker exec -it eks-handler /bin/bash

In order to use eksctl, we first have to install and configure awscli, the CLI tool for working with AWS services. Let’s install the curl and unzip packages needed for the installation:

apt-get update && apt-get install curl unzip -y

Proceed to install AWS CLI v2, following the official installation instructions:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install

If everything is successfully executed, let us configure it:

aws configure

You will be prompted to enter the access key ID, secret access key, region, and output format.
For the region, enter “us-east-1” to create the cluster in that region, and for the output format type “json”.
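To confirm that the credentials work before moving on, you can ask AWS which identity you are authenticated as:

# Verify that the configured credentials are valid
aws sts get-caller-identity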

Now that we have awscli installed, we can set up kubectl and eksctl. We are installing the kubectl build distributed specifically for AWS EKS; more info is available in the AWS EKS documentation.

curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.22.6/2022-03-09/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv kubectl /bin/kubectl
kubectl version --short --client

You should get output similar to this → “Client Version: v1.22.6-eks-7d68063”

Let us install eksctl:

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
mv /tmp/eksctl /bin/eksctl
eksctl version

The output should look something like this → “0.100.0”

Cluster Creation with Kubernetes and AWS EKS

We use eksctl to create the cluster. First, create a config.yaml file containing the following lines:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: blog-cluster
  region: us-east-1

managedNodeGroups:
  - name: nodegroup-workers
    instanceType: t3.micro
    desiredCapacity: 3
    minSize: 1
    maxSize: 3
    ssh:
      allow: false

Our cluster is named blog-cluster and will be created in us-east-1. We are using a managed node group, which means AWS manages the lifecycle of the worker servers for us; node groups can also be self-managed, in which case you have to manage those servers yourself. We also set ssh allow to false since we don’t need SSH access for this example.

Save this config and execute the following command

eksctl create cluster -f config.yaml
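As a side note, if you would rather preview what eksctl is going to create first, recent eksctl versions support a dry run that prints the fully expanded ClusterConfig without provisioning anything:

# Print the expanded ClusterConfig without creating any AWS resources
eksctl create cluster -f config.yaml --dry-run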

If you encounter any errors, try changing the name of the cluster and repeat the command above. Provisioning takes approximately 15-20 minutes to complete. Afterwards, we connect kubectl to EKS by updating our kubeconfig with awscli:

aws eks update-kubeconfig --name blog-cluster --region us-east-1
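To verify that kubectl is now pointed at the new cluster, list the worker nodes; the three t3.micro instances from our node group should show up as Ready:

# Confirm the worker nodes have joined the cluster
kubectl get nodes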

Creating a deployment

To create a deployment in our new cluster, we will deploy nginx as an example. Here is an nginx deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

This config tells the EKS cluster to create 3 Pods that use the nginx image. Keep in mind that the selector matches Pods with the label “nginx”, and the same label will be used in our Service as well; this is how we target those specific Pods. Save this into nginx-deployment.yaml and execute:

kubectl apply -f ./nginx-deployment.yaml
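Once applied, you can watch the three replicas come up:

# Check the deployment and the pods it created
kubectl get deployments
kubectl get pods -l app=nginx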

Deploying a service

Now let’s create a service that will use our newly deployed pods:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Save this into nginx-service.yaml and execute:

kubectl apply -f ./nginx-service.yaml

If everything was executed properly, nginx should now be exposed behind an AWS load balancer created for the Service. You can get its address by running

kubectl get service

and copy-pasting the external DNS hostname into your browser!
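If you prefer the terminal, you can also pull the hostname out with jsonpath and curl it (the load balancer may need a couple of minutes before it starts answering):

# Fetch the load balancer hostname and request the nginx welcome page
curl "http://$(kubectl get service nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"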

That’s everything regarding clustering for this blog, folks. In our opinion, AWS EKS and Kubernetes work smoothly together, and automating cluster creation this way saves you a lot of time.

If this blog sparked your interest in AWS and you would like to learn more about the services they offer, check out our previous posts covering Lambda and DynamoDB. Stay tuned for more!
