Amazon EKS Cluster Installation

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane.

The first step is to install the AWS command line client. On Ubuntu, use snap:

> sudo snap install aws-cli --classic
aws-cli 1.16.266 from Amazon Web Services (aws✓) installed

You can check the installation by displaying the aws client version:

> aws --version
aws-cli/1.16.266 Python/3.5.2 Linux/4.15.0-76-generic botocore/1.13.2

Then open your AWS management console and create a new IAM user dedicated to EKS:

  1. Open the IAM console at https://console.aws.amazon.com/iam/,
  2. In the navigation pane, choose Users, Create user,
  3. Enter eksctl as the user name and select Programmatic access for the AWS access type,
  4. Click on Next: Permissions,
  5. On the Set permissions page, select Attach existing policies directly and select AdministratorAccess in the policy list.
  6. Click on Next: Tags, Next: Review and Create user,
  7. Show and copy the Access key ID and Secret access key.
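
If you prefer the command line to the console, the same user can also be created with the AWS CLI. This is only a sketch of an alternative path, assuming your current credentials are allowed to manage IAM:

> aws iam create-user --user-name eksctl
> aws iam attach-user-policy --user-name eksctl --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
> aws iam create-access-key --user-name eksctl

The last command returns the Access key ID and Secret access key used below.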

Now configure your AWS client to use this user's credentials:

> aws configure
AWS Access Key ID [None]: Previously copied Access key ID 
AWS Secret Access Key [None]: Previously copied Secret access key
Default region name [None]: us-west-2
Default output format [None]: json
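
To make sure the client picks up these credentials, you can optionally query STS for the caller identity (not part of the original console walkthrough):

> aws sts get-caller-identity

The response should contain your account ID and the ARN of the eksctl user.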

Then install eksctl, the command line client used to create and manage Amazon EKS clusters:

> curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
> sudo mv /tmp/eksctl /usr/local/bin
> eksctl version
[]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.13.0"}
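
eksctl writes a kubeconfig file but does not install kubectl itself. If kubectl is not yet present on your workstation, one possible way to install it on Ubuntu is again via snap:

> sudo snap install kubectl --classic
> kubectl version --client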

Using eksctl, create a cluster named kraken-test in the region us-west-2:

> eksctl create cluster \
> --name kraken-test \
> --version 1.14 \
> --region us-west-2 \
> --nodegroup-name standard-workers \
> --node-type t3.medium \
> --nodes 3 \
> --nodes-min 3 \
> --nodes-max 3 \
> --managed
[]  eksctl version 0.13.0
[]  using region us-west-2
[]  setting availability zones to [us-west-2c us-west-2a us-west-2d]
[]  subnets for us-west-2c - public:192.168.0.0/19 private:192.168.96.0/19
[]  subnets for us-west-2a - public:192.168.32.0/19 private:192.168.128.0/19
[]  subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
[]  using Kubernetes version 1.14
[]  creating EKS cluster "kraken-test" in "us-west-2" region with managed nodes
[]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=kraken-test'
[]  CloudWatch logging will not be enabled for cluster "kraken-test" in "us-west-2"
[]  you can enable it with 'eksctl utils update-cluster-logging --region=us-west-2 --cluster=kraken-test'
[]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "kraken-test" in "us-west-2"
[]  2 sequential tasks: { create cluster control plane "kraken-test", create managed nodegroup "standard-workers" }
[]  building cluster stack "eksctl-kraken-test-cluster"
[]  deploying stack "eksctl-kraken-test-cluster"
[]  building managed nodegroup stack "eksctl-kraken-test-nodegroup-standard-workers"
[]  deploying stack "eksctl-kraken-test-nodegroup-standard-workers"
[]  all EKS cluster resources for "kraken-test" have been created
[]  saved kubeconfig as "/home/ubuntu/.kube/config"
[]  nodegroup "standard-workers" has 3 node(s)
[]  node "ip-192-168-57-19.us-west-2.compute.internal" is ready
[]  node "ip-192-168-75-34.us-west-2.compute.internal" is ready
[]  node "ip-192-168-8-214.us-west-2.compute.internal" is ready
[]  waiting for at least 3 node(s) to become ready in "standard-workers"
[]  nodegroup "standard-workers" has 3 node(s)
[]  node "ip-192-168-57-19.us-west-2.compute.internal" is ready
[]  node "ip-192-168-75-34.us-west-2.compute.internal" is ready
[]  node "ip-192-168-8-214.us-west-2.compute.internal" is ready
[]  kubectl command should work with "/home/ubuntu/.kube/config", try 'kubectl get nodes'
[]  EKS cluster "kraken-test" in "us-west-2" region is ready

Warning

The process is quite long. It takes about 15 minutes to start the cluster.
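
As an alternative to the long command line, eksctl also accepts a declarative configuration file. The following is a rough equivalent of the flags used above, assuming that this eksctl release accepts managedNodeGroups in its ClusterConfig schema:

> cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: kraken-test
  region: us-west-2
  version: "1.14"
managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 3
    maxSize: 3
EOF
> eksctl create cluster -f cluster.yaml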

Check that kubectl is configured to communicate with the new cluster by listing the available nodes:

> kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-192-168-57-19.us-west-2.compute.internal   Ready    <none>   5m43s   v1.14.7-eks-1861c5
ip-192-168-75-34.us-west-2.compute.internal   Ready    <none>   5m45s   v1.14.7-eks-1861c5
ip-192-168-8-214.us-west-2.compute.internal   Ready    <none>   5m29s   v1.14.7-eks-1861c5
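
eksctl saved the kubeconfig to /home/ubuntu/.kube/config. If kubectl answers for another cluster, an optional sanity check is to display the active context:

> kubectl config current-context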

Ingress Controller

Now that your EKS cluster is up and reachable from your workstation, you can install an NGINX Ingress controller using Helm:

> kubectl create namespace nginx-ingress
namespace/nginx-ingress created
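
The nginx-ingress chart comes from the Helm stable repository. If that repository is not configured on your workstation yet, you may need to add it first; the URL below assumes the current archive location of the stable charts:

> helm repo add stable https://charts.helm.sh/stable
> helm repo update
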
> helm install --namespace nginx-ingress nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true
NAME: nginx-ingress
LAST DEPLOYED: Fri Jan 31 11:46:33 2020
NAMESPACE: nginx-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
[...]

Verify that the Ingress controller service has been created and assigned an external address:

> kubectl get service nginx-ingress-controller -n nginx-ingress
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)                      AGE
nginx-ingress-controller   LoadBalancer   10.100.201.31   af4f1d0d0441611eabb6802297c6b675-585786825.us-west-2.elb.amazonaws.com   80:32717/TCP,443:31415/TCP   99s
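
Once the load balancer host name is provisioned (this can take a couple of minutes), you can fetch it with a JSONPath query and probe it. This is a sketch of an optional check; a 404 from the NGINX default backend is the expected answer until an Ingress resource routes the traffic:

> LB_HOST=$(kubectl get service nginx-ingress-controller -n nginx-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
> curl -I http://$LB_HOST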

Proceed to the Kraken Helm chart installation and your Kraken application will be available at the host name shown in the EXTERNAL-IP column.

For example, open the administration UI at http://af4f1d0d0441611eabb6802297c6b675-585786825.us-west-2.elb.amazonaws.com/administration.

Note

You can delete your cluster with the command eksctl delete cluster kraken-test --region us-west-2.