
Set up EKS Cluster

Information about this guide


In this guide we will set up an EKS cluster using eksctl.

Then we will use GitLab to deploy our app and Rancher to manage k8s.

  1. Set up eksctl.
  2. Set up a new EKS Cluster in a new VPC.
  3. Set up the Cluster Autoscaler (for the EKS worker nodes, i.e. EC2 instances).
  4. Integrate the cluster with GitLab.
  5. Install Rancher.
  6. Deploy a demo app.
  7. Expose the app publicly and serve requests with ingress controller.
  8. Set up Pod autoscaling (HPA) for your deployment.
  9. Install Prometheus from Rancher.
  10. Install Grafana from Rancher.

Requirements

Requirements for this Setup

  • AWS Account with admin access
  • GitLab Account with admin access

Note about AWS Resources

Note that AWS resources have costs 💵

Make sure you remove them when you are done with the project.

This includes any resources created along the way: nodes, volumes, load balancers and so on.
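For example, once you are finished, tearing everything down with eksctl looks roughly like this (a hedged sketch; the cluster name and region are placeholders, and remember that load balancers created by Kubernetes Services are provisioned outside the eksctl stacks, so delete those Services first):

# Deletes the cluster, its node groups and the VPC that eksctl created
eksctl delete cluster --name=your-cluster-name --region=us-east-1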

About this Guide

There is no point in copying the official guides from vendors such as GitLab or AWS.

Therefore, some parts of this guide will point you to the official guides on their documentation websites.

Follow those up-to-date guides and then continue here.

Getting started

Make sure you have admin access for your AWS & GitLab accounts.

Install and test aws-cli

Install eksctl on your local machine.
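A quick sanity check after both tools are installed might look like this (a sketch; the exact installation steps depend on your OS and are covered in the official docs):

aws --version                 # the AWS CLI is installed
aws sts get-caller-identity   # credentials are configured for the right account
eksctl version                # eksctl is installed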

Prepare cluster configuration

Prepare your cluster configuration YAML file.

While it is possible to create the cluster with a single command, a cluster.yml configuration file gives you more room for customization.

It lets you keep your cluster and node group configurations in a single file (or in multiple files).
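For reference, the single-command approach mentioned above looks roughly like this (a minimal sketch; the name, region and node settings are placeholders):

# Creates a cluster and a small node group with mostly default settings
eksctl create cluster --name=your-cluster-name --region=us-east-1 --nodes=2 --node-type=t3.small

The rest of this guide uses the cluster.yml approach shown below.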

cluster.yml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: your-cluster-name
  region: us-east-1

availabilityZones: ["us-east-1f", "us-east-1b", "us-east-1d"]

# vpc:
#   publicAccessCIDRs: ["0.0.0.0/0"]

iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: cluster-autoscaler
      namespace: kube-system
      labels: {aws-usage: "cluster-ops"}
    attachPolicy: # inline policy can be defined along with `attachPolicyARNs`
      Version: "2012-10-17"
      Statement:
      - Effect: Allow
        Action:
        - "autoscaling:DescribeAutoScalingGroups"
        - "autoscaling:DescribeAutoScalingInstances"
        - "autoscaling:DescribeLaunchConfigurations"
        - "autoscaling:DescribeTags"
        - "autoscaling:SetDesiredCapacity"
        - "autoscaling:TerminateInstanceInAutoScalingGroup"
        Resource: '*'

# privateNetworking: true

nodeGroups:
  - name: ng-public
    availabilityZones: ["us-east-1d"]
    minSize: 1
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.small", "t3.medium"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 25
      spotInstancePools: 2
    ssh:
      # publicKeyName: your-pre-existing-ssh-key-pair-name-in-aws
      allow: false
    labels:
      service: your-service-name
      network: public
    iam:
      withAddonPolicies:
        autoScaler: true
    tags:
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/your-cluster-name: "owned"
      k8s.io/cluster-autoscaler/node-template/label/service: your-service-name
      beta.kubernetes.io/os: "Linux"

  - name: ng-private
    availabilityZones: ["us-east-1d"]
    minSize: 1
    maxSize: 3
    desiredCapacity: 1
    privateNetworking: true
    instancesDistribution:
      maxPrice: 0.0550
      instanceTypes: ["r5ad.large"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 25
      spotInstancePools: 2
    ssh:
      # publicKeyName: your-pre-existing-ssh-key-pair-name-in-aws
      allow: false
    labels:
      service: your-service-name
      network: private
    # taints:
    #   app: "your-tainted-app-name:NoSchedule"
    # iam:
    #   withAddonPolicies:
    #     autoScaler: false
    tags:
      beta.kubernetes.io/os: "Linux"
      network: "private"

Carefully review the cluster.yml configuration

Make sure you adjust it according to your needs.

There are many more examples available here.

Create the cluster

Now we are ready to create the cluster

eksctl create cluster --config-file=./cluster.yml

This will take some 🕰 ...

Once eksctl has completed, you should be able to list your cluster and node groups:

Get cluster

eksctl get clusters --region=us-east-1

Get node groups:

eksctl get nodegroups --cluster=your-cluster-name --region=us-east-1
CLUSTER            NODEGROUP   CREATED               MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID
your-cluster-name  ng-private  2020-02-02T16:47:17Z  1         1         1                 r5ad.large     ami-0d960646974cf9e5b
your-cluster-name  ng-public   2020-02-02T16:47:18Z  1         5         3                 t3.small       ami-0d960646974cf9e5b

Your cluster is ready!

Install kubectl

Install and configure kubectl

In order to interact with your new EKS cluster you will need kubectl.

Follow this Guide, and continue with the next part when you are done.
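Once kubectl is installed, point it at the new cluster and verify connectivity (a hedged sketch; both kubeconfig commands achieve the same thing, pick one):

# Write/update the kubeconfig entry for the cluster
aws eks update-kubeconfig --name your-cluster-name --region us-east-1
# or: eksctl utils write-kubeconfig --cluster=your-cluster-name --region=us-east-1

# You should see the nodes from both node groups
kubectl get nodes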

Install Auto Scaler

Deploy Cluster AutoScaler

While this is optional, it is highly recommended.

This will allow k8s to launch and terminate nodes in the Auto Scaling groups you created with eksctl.

Follow this Guide and you know the rest...

About AutoScaling

If you used the provided cluster.yml file when setting up the EKS cluster, then you already created the required service account and inline policy:

name: cluster-autoscaler
namespace: kube-system
labels: {aws-usage: "cluster-ops"}
attachPolicy: # inline policy can be defined along with `attachPolicyARNs`
  Version: "2012-10-17"
  Statement:
  - Effect: Allow
    Action:
    - "autoscaling:DescribeAutoScalingGroups"
    - "autoscaling:DescribeAutoScalingInstances"
    - "autoscaling:DescribeLaunchConfigurations"
    - "autoscaling:DescribeTags"
    - "autoscaling:SetDesiredCapacity"
    - "autoscaling:TerminateInstanceInAutoScalingGroup"
    Resource: '*'

This will allow the deployed Cluster Autoscaler to manage your nodes' capacity (by managing the Auto Scaling groups).
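After following the official guide, a quick way to confirm the autoscaler is running and to watch its scaling decisions (assuming the default deployment name used by the official manifest):

kubectl -n kube-system get deployment cluster-autoscaler
kubectl -n kube-system logs -f deployment/cluster-autoscaler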

GitLab Integration

Integrate your EKS cluster with GitLab

This is an optional step, but it is recommended if you use GitLab for your codebase, or if you are thinking about starting with GitLab.

This will allow you to securely deploy your code directly from your GitLab project (pipeline) to your EKS cluster.

Follow this Guide to add the EKS cluster to GitLab.
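During that process GitLab asks for the cluster's API URL and CA certificate; a hedged sketch of how to retrieve them with kubectl (the secret name is a placeholder, pick the default-token secret from the list):

# API URL: the Kubernetes control plane endpoint printed here
kubectl cluster-info

# CA certificate: list the secrets, then decode ca.crt from the default-token secret
kubectl get secrets
kubectl get secret <default-token-secret-name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode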

EKS application settings in GitLab

Tiller, the Ingress Controller and Cert Manager can be installed directly from GitLab in the EKS application settings.

From your Project/Group (depending on your cluster scope) go to:

Operations --> Kubernetes --> Your Cluster

Click on your cluster and go to the Applications tab.

Install Tiller

Tiller

For GitLab to be able to deploy pods, create services and so on, install Tiller.

This is the server-side component of Helm, which allows GitLab to interact with k8s.

Helm Documentation.
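Once the installation finishes you can confirm Tiller is up; GitLab installs its managed applications into the gitlab-managed-apps namespace, so a quick check might look like this (a hedged sketch):

# Expect a tiller-deploy pod in Running state
kubectl -n gitlab-managed-apps get pods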

Install Ingress Controller

Easily install the Ingress Controller with one click from GitLab.

The Ingress Controller will allow you to expose your public services (API, website, etc.) over HTTP and HTTPS.
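Once the controller is installed, exposing a service is a matter of creating an Ingress resource. A minimal sketch (the host, service name and port are placeholders; use the Ingress API version your cluster supports):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app
            port:
              number: 80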


Install GitLab Runner

Install GitLab Runner in your cluster with one click


Install Cert Manager

Install Cert Manager in your cluster with one click


Install Rancher

Install Rancher / RKE, choose the best option for you

When installing Rancher / RKE, there are multiple options available to you.

For a highly available production environment, Rancher recommends creating a dedicated k8s cluster just for running Rancher itself.

Then you can import your k8s clusters into Rancher and manage them.

Choose the option that best fits your requirements:

Rancher Docs
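As one example, the Helm-based installation from the Rancher docs looks roughly like this (a sketch assuming Helm 3; the hostname is a placeholder, and the default Rancher-generated certificates also require cert-manager in that cluster):

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com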

All done!

Now we are ready to deploy our application.

Deploy demo application

Deployment options

In this guide we will use Helm to deploy our application.

If you prefer to work with kubectl, follow this guide:

Deploy app with GitLab
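A minimal Helm-based flow might look like this (a sketch with placeholder chart, release and namespace names; in practice the chart comes from your project):

# Scaffold a chart (or use the chart from your repository)
helm create demo-app

# Create a namespace and install / upgrade the release into it
kubectl create namespace demo
helm upgrade --install demo-app ./demo-app --namespace demo

# Check what was deployed
kubectl -n demo get pods,svc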

To be continued...
