EKS - Create an automatic way to build a K8s Cluster and expose it on the Internet

AWS gives you the ability to create an EKS cluster inside a VPC, but exposing the web service on the Internet is just as important, and we want to automate as much of it as possible.

To expose it on the Internet, the best way is to use an Application Load Balancer (ALB) that points to an internal Service in EKS.

To create all of this from a CI/CD pipeline, it is of course important to script as much as possible. So let's explore each stage.


THE VPC

The first step is of course to create a VPC inside your account. To be as fast as possible, it is essential to have a VPC with 2 public subnets and a security group that allows the machines to access and be accessed from anywhere. This is of course intended for testing purposes only: in the future you should create the rules properly, allowing each subnet to be accessed only by your EKS control plane.

So create it and take note of the subnet IDs; they will be used later.
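A minimal sketch of this stage with the AWS CLI might look like the following (CIDR ranges, availability zones and names are just example values):

```shell
# Sketch: a VPC with 2 public subnets and a wide-open security group (testing only!)
vpc_id=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# Two public subnets in different AZs (EKS requires at least two)
subnet_a=$(aws ec2 create-subnet --vpc-id "$vpc_id" --cidr-block 10.0.1.0/24 \
  --availability-zone eu-west-1a --query 'Subnet.SubnetId' --output text)
subnet_b=$(aws ec2 create-subnet --vpc-id "$vpc_id" --cidr-block 10.0.2.0/24 \
  --availability-zone eu-west-1b --query 'Subnet.SubnetId' --output text)

# Internet gateway + default route so the subnets are actually public
igw_id=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$igw_id" --vpc-id "$vpc_id"
rt_id=$(aws ec2 create-route-table --vpc-id "$vpc_id" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$rt_id" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$igw_id"
aws ec2 associate-route-table --route-table-id "$rt_id" --subnet-id "$subnet_a"
aws ec2 associate-route-table --route-table-id "$rt_id" --subnet-id "$subnet_b"

# Security group open to everything - again, for testing only
sg_id=$(aws ec2 create-security-group --group-name eks-test-sg \
  --description "testing only" --vpc-id "$vpc_id" --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
  --protocol all --cidr 0.0.0.0/0

# These are the values to take note of
echo "$subnet_a $subnet_b $sg_id"
```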


THE EKS

To create the EKS cluster we will of course use the existing AWS CLI feature, passing it the subnet values and a role for creating it. The role is of course created first and has an attached policy which allows it to interact with EKS: AmazonEKSClusterPolicy

The policy is a managed one and it fits best.
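Creating such a role could be sketched like this (the role name matches the one used in the command below; the trust policy lets the EKS service assume the role):

```shell
# Trust policy allowing the EKS service to assume the role
cat > eks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name eksClusterRole \
  --assume-role-policy-document file://eks-trust.json

# Attach the managed policy mentioned above
aws iam attach-role-policy --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```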


aws eks create-cluster --name $cluster_name --role-arn arn:aws:iam::ACCOUNT:role/eksClusterRole --resources-vpc-config subnetIds=subnet-XXXX,subnet-YYYY,securityGroupIds=sg-ZZZ,endpointPublicAccess=true --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'

The XXXX and YYYY are the two subnets we defined before. The security group is the one (or more) you have defined in your VPC. The ACCOUNT is of course your account ID, and eksClusterRole is the role we defined at the start of this paragraph, the one with the policy mentioned before.

The cluster_name is of course passed as a variable.

To make sure the cluster is created, just poll for it with describe-cluster.
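In a script this can be done either with the AWS CLI built-in waiter or with an explicit loop, for example:

```shell
# Option 1: the built-in waiter (blocks until the cluster is ACTIVE)
aws eks wait cluster-active --name "$cluster_name"

# Option 2: explicit polling loop on describe-cluster
until [ "$(aws eks describe-cluster --name "$cluster_name" \
          --query 'cluster.status' --output text)" = "ACTIVE" ]; do
  echo "cluster not ready yet..."
  sleep 30
done
```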



THE EKS - NODE GROUP

To create the node group that K8s needs, we need to invoke the dedicated AWS CLI command, using the existing subnets. The role we will use is one that has the ability to interact with EKS and EC2, with managed policies like AmazonEC2FullAccess and AmazonEKSWorkerNodePolicy.

aws eks create-nodegroup --cluster-name $cluster_name --nodegroup-name "$cluster_name"NodeGroup --subnets subnet-XXXX subnet-YYYY --node-role arn:aws:iam::ACCOUNT:role/CreateGroup-RoleName --ami-type AL2_x86_64 --instance-types t2.micro --capacity-type ON_DEMAND --scaling-config minSize=6,maxSize=6,desiredSize=6 --disk-size 20 --update-config maxUnavailable=1


Again, the subnets and ACCOUNT are the ones mentioned before. Here we are using t2.micro because we want to stay in the free tier as much as possible!

To make sure the node group is created, just poll for it with describe-nodegroup.
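As with the cluster, the AWS CLI provides a waiter for this, which is convenient in a script:

```shell
# Built-in waiter: blocks until the node group reaches ACTIVE
aws eks wait nodegroup-active --cluster-name "$cluster_name" \
  --nodegroup-name "${cluster_name}NodeGroup"

# Or check the status explicitly with describe-nodegroup
aws eks describe-nodegroup --cluster-name "$cluster_name" \
  --nodegroup-name "${cluster_name}NodeGroup" \
  --query 'nodegroup.status' --output text
```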



THE ALB

To install the ALB controller we need to follow the AWS guide, where we need to download files, perform "sed" on them, change values, create roles and so on.

I created a script which does all of this easily.

First we need to create an OpenID Connect provider with the thumbprint of the OIDC server created in our EKS cluster. You can do it via the GUI, but I prefer this way.



As you can see, it retrieves the issuer value, builds the URL to be invoked, retrieves the jwks_uri, performs a curl to it, extracts the certificate chain and keeps only the certificate we need (the last one). Then it creates the fingerprint with openssl and removes the ":" separators. Now it is able to create the OpenID Connect provider.
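A sketch of such a script, assuming jq and openssl are available, might look like this:

```shell
set -euo pipefail
cluster_name="$1"

# Retrieve the OIDC issuer URL of the cluster
issuer=$(aws eks describe-cluster --name "$cluster_name" \
  --query 'cluster.identity.oidc.issuer' --output text)

# Build the discovery URL and retrieve the jwks_uri
jwks_uri=$(curl -s "${issuer}/.well-known/openid-configuration" | jq -r '.jwks_uri')

# Extract the host of the JWKS endpoint
oidc_host=$(echo "$jwks_uri" | awk -F/ '{print $3}')

# Fetch the certificate chain and keep only the last certificate
openssl s_client -servername "$oidc_host" -showcerts \
  -connect "${oidc_host}:443" </dev/null 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/{buf=""} {buf=buf $0 ORS} /END CERTIFICATE/{cert=buf} END{printf "%s", cert}' \
  > last-cert.pem

# Compute the SHA1 fingerprint and strip the ":" separators
thumbprint=$(openssl x509 -in last-cert.pem -fingerprint -sha1 -noout \
  | cut -d= -f2 | tr -d ':')

# Create the OpenID Connect provider
aws iam create-open-id-connect-provider \
  --url "$issuer" \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list "$thumbprint"
```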


Second, we need to create the policy and the role for the load balancer controller. You need to download specific files from the AWS guide and perform some modifications. If you download them into a folder you can access them locally.



It creates the role we need, also extracting the correct account ID (which is important for binding the roles).
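A hedged sketch of this step (iam_policy.json is the policy file downloaded from the AWS guide; the role and policy names are just examples, and $region is your AWS region):

```shell
# Extract the account id and the OIDC provider id of the cluster
account_id=$(aws sts get-caller-identity --query Account --output text)
oidc_id=$(aws eks describe-cluster --name "$cluster_name" \
  --query 'cluster.identity.oidc.issuer' --output text | awk -F/ '{print $NF}')

# Create the controller policy from the downloaded file
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json

# Trust policy binding the role to the controller's service account via OIDC
cat > lb-trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::${account_id}:oidc-provider/oidc.eks.${region}.amazonaws.com/id/${oidc_id}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.${region}.amazonaws.com/id/${oidc_id}:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
      }
    }
  }]
}
EOF

aws iam create-role --role-name AmazonEKSLoadBalancerControllerRole \
  --assume-role-policy-document file://lb-trust.json
aws iam attach-role-policy --role-name AmazonEKSLoadBalancerControllerRole \
  --policy-arn "arn:aws:iam::${account_id}:policy/AWSLoadBalancerControllerIAMPolicy"
```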

Third, we need to apply and create the controller in our K8s installation, after modifying the YAML to insert the role we previously created.


The sleeps are essential to wait for each service to be ready to be called (the cert-manager first of all).
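This step might be sketched as follows (the cert-manager version, the manifest filename v2_full.yaml and the placeholder replaced by sed depend on what you downloaded from the AWS guide):

```shell
# cert-manager first: the controller's webhooks depend on it
kubectl apply --validate=false -f \
  https://github.com/cert-manager/cert-manager/releases/download/v1.13.5/cert-manager.yaml
sleep 60   # give the cert-manager webhook time to come up

# Patch the downloaded controller manifest with our cluster name and apply it
sed -i "s/your-cluster-name/${cluster_name}/" v2_full.yaml
kubectl apply -f v2_full.yaml

# Bind the service account to the role created in the previous step
kubectl annotate serviceaccount -n kube-system aws-load-balancer-controller \
  "eks.amazonaws.com/role-arn=arn:aws:iam::${account_id}:role/AmazonEKSLoadBalancerControllerRole"
sleep 30   # wait for the controller deployment to be ready
```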

To be able to interact with the cluster, I also added the modification for the authorization ConfigMap:


This creates a user for each line in user.txt, where users are saved with ARN and username. It generates awsAuth.yaml starting from the original aws-auth ConfigMap. You can then replace it with
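A sketch of how such a script could work, assuming yq (mikefarah v4) is installed, each line of user.txt is "<arn> <username>", and the users are mapped to system:masters (an assumption, adjust the group to your needs):

```shell
# Start from the original aws-auth ConfigMap
kubectl get configmap aws-auth -n kube-system -o yaml > awsAuth.yaml

# Build one mapUsers entry per line of user.txt
map_users=""
while read -r arn username; do
  map_users="${map_users}- userarn: ${arn}
  username: ${username}
  groups:
  - system:masters
"
done < user.txt

# Inject the generated block as the mapUsers key of the ConfigMap
MAP_USERS="$map_users" yq -i '.data.mapUsers = strenv(MAP_USERS)' awsAuth.yaml
```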

kubectl replace -f awsAuth.yaml


TESTING IT

Now the cluster is ready to accept a simple deployment with an application that exposes something on port 8080, and an ALB Ingress which is able to expose the service on the Internet on port 80.
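A minimal example along these lines could look like the following (the app name and image are hypothetical; http-echo is just a convenient container that can listen on 8080):

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
      - name: demo-app
        image: hashicorp/http-echo    # example app listening on 8080
        args: ["-listen=:8080", "-text=hello"]
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector: {app: demo-app}
  ports:
  - port: 80          # Service port
    targetPort: 8080  # container port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb   # tells K8s to let the AWS controller create an ALB
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app
            port:
              number: 80
EOF

# After a couple of minutes the ingress gets a public address:
kubectl get ing demo-app
```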



This is a very simple deployment of an app. The application exposes port 8080, while the Service exposes it on 80. The ALB is able to expose it, forwarding to the original Service on port 80. As you can see, the Ingress has a spec with ingressClassName "alb", which tells Kubernetes to interact with AWS to create a load balancer. If you perform a search for ing in that namespace, you will finally see an Internet address.


CONCLUSION

You can build all of this using the CI/CD pipeline in AWS, putting it in a CodeBuild project which refers to the code committed in a repo. It is important to give the pipeline and CodeBuild the roles to interact with each element and to update their IAM profiles.

The scripts are interactive, but in the pipeline you can cat the values from a file and pass them to the script, like cat cluster.txt | sh createCluster.sh
