Ron Rivera


Back in January this year, I wrote a post about deploying Kubernetes on AWS using kubespray, and I have had my fair share of challenges managing that cluster since. Anyone who has bootstrapped a vanilla Kubernetes cluster would agree that installing it is one thing, but looking after it is a full-time job.

Amazon recently announced the general availability of its managed Kubernetes service, EKS. With EKS, the Kubernetes control plane is managed by Amazon while the worker nodes are managed by the customer. This is a promising proposition, so I wanted to get my hands dirty with it. A new CLI tool called eksctl is also now available for creating and managing EKS clusters.

In this post, I will go through the provisioning of an AWS EKS cluster using eksctl.

Prerequisites

In order to follow along, you need to:

  1. Sign-up for an AWS account.
  2. Install and configure the AWS CLI.
  3. Install eksctl CLI.
  4. Install kubectl CLI.

Once the prereqs are sorted out, let’s get rolling.
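
Before creating anything, it’s worth a quick sanity check that the AWS CLI is configured and pointing at the right account. The sts get-caller-identity call below simply echoes back the account and IAM identity your credentials resolve to:

$ aws configure            # set access key, secret key and default region
$ aws sts get-caller-identity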

Create the EKS cluster

Let’s create our own EKS cluster using this command:

eksctl create cluster --name=roncrivera-k8s

This command will create:

  • a cluster named roncrivera-k8s
  • 2x m5.large EC2 instances
  • 1 elastic IP
  • 1 dedicated VPC for the cluster
  • a kubeconfig entry so kubectl can talk to the new cluster

Here’s what the output looks like:

$ eksctl create cluster --name=roncrivera-k8s
[ℹ]  using region ap-southeast-1
[ℹ]  setting availability zones to [ap-southeast-1b ap-southeast-1c ap-southeast-1a]
[ℹ]  subnets for ap-southeast-1b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for ap-southeast-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for ap-southeast-1a - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-f141e4bf" will use "ami-019966ed970c18502" [AmazonLinux2/1.11]
[ℹ]  creating EKS cluster "roncrivera-k8s" in "ap-southeast-1" region
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-southeast-1 --name=roncrivera-k8s'
[ℹ]  creating cluster stack "eksctl-roncrivera-k8s-cluster"
[ℹ]  creating nodegroup stack "eksctl-roncrivera-k8s-nodegroup-ng-f141e4bf"
[✔]  all EKS cluster resource for "roncrivera-k8s" had been created
[✔]  saved kubeconfig as "/Users/ron/.kube/config"
[ℹ]  nodegroup "ng-f141e4bf" has 0 node(s)
[ℹ]  waiting for at least 2 node(s) to become ready in "ng-f141e4bf"
[ℹ]  nodegroup "ng-f141e4bf" has 2 node(s)
[ℹ]  node "ip-192-168-16-204.ap-southeast-1.compute.internal" is ready
[ℹ]  node "ip-192-168-94-211.ap-southeast-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/ron/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "roncrivera-k8s" in "ap-southeast-1" region is ready

The provisioning process will take around 15-20 minutes, so your patience will be tested here. ;-)
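
By default, eksctl picks sensible defaults for you (two m5.large workers in a dedicated VPC), but the cluster can also be customised either through command-line flags or a config file passed with -f. Below is a minimal sketch of the config-file approach, assuming eksctl’s ClusterConfig schema; the nodegroup name and the min/max sizes are illustrative, and field names can vary slightly between eksctl versions:

# cluster.yaml -- illustrative ClusterConfig (field names may differ across eksctl versions)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: roncrivera-k8s
  region: ap-southeast-1

nodeGroups:
  - name: ng-workers          # hypothetical nodegroup name
    instanceType: m5.large
    desiredCapacity: 2
    minSize: 2
    maxSize: 4

$ eksctl create cluster -f cluster.yaml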

Once you’ve got the command prompt back, you can list the cluster by running:

$ eksctl get cluster --region=ap-southeast-1
NAME		REGION
roncrivera-k8s	ap-southeast-1

Communicating with the cluster

Now that the cluster has been provisioned, we can use kubectl to communicate with it.

$ kubectl cluster-info
Kubernetes master is running at https://405EFA85912B6D1F2900B9591CF70B26.sk1.ap-southeast-1.eks.amazonaws.com
CoreDNS is running at https://405EFA85912B6D1F2900B9591CF70B26.sk1.ap-southeast-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes
NAME                                                STATUS    ROLES     AGE       VERSION
ip-192-168-16-204.ap-southeast-1.compute.internal   Ready     <none>    21m       v1.11.5
ip-192-168-94-211.ap-southeast-1.compute.internal   Ready     <none>    21m       v1.11.5

$ kubectl get all --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   pod/aws-node-l4fqr            1/1       Running   1          24m
kube-system   pod/aws-node-s6bjb            1/1       Running   1          24m
kube-system   pod/coredns-85846f7c4-scpsf   1/1       Running   0          30m
kube-system   pod/coredns-85846f7c4-z2tc7   1/1       Running   0          30m
kube-system   pod/kube-proxy-76z4j          1/1       Running   0          24m
kube-system   pod/kube-proxy-kbktc          1/1       Running   0          24m

NAMESPACE     NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         30m
kube-system   service/kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   30m

NAMESPACE     NAME                        DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/aws-node     2         2         2         2            2           <none>          30m
kube-system   daemonset.apps/kube-proxy   2         2         2         2            2           <none>          30m

NAMESPACE     NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2         2         2            2           30m

NAMESPACE     NAME                                DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/coredns-85846f7c4   2         2         2         30m
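
One more thing worth noting: eksctl saved the kubeconfig to ~/.kube/config during provisioning. If you ever need to rebuild it (say, from another machine), the AWS CLI can regenerate it; this assumes a reasonably recent AWS CLI with the eks subcommand:

$ aws eks update-kubeconfig --name roncrivera-k8s --region ap-southeast-1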

Deploying Kubernetes Dashboard

It’s always a good idea to be able to visualise the health of the infrastructure, so let’s deploy the Kubernetes Dashboard along with some useful applications we can use to monitor the cluster.

Deploy kubernetes dashboard

$ kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

Deploy heapster

$ kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
serviceaccount/heapster created
deployment.extensions/heapster created
service/heapster created

Deploy influxdb backend for heapster

$ kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created

Create heapster cluster role binding

$ kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io/heapster created
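
Before creating the admin service account, it doesn’t hurt to confirm that the components we just deployed are up. A quick check using the resource names from the outputs above:

$ kubectl -n kube-system get deployments kubernetes-dashboard heapster monitoring-influxdb
$ kubectl -n kube-system get pods | grep -E 'kubernetes-dashboard|heapster|monitoring-influxdb'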

Create eks-admin service account

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
EOF
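
The ClusterRoleBinding above ties the eks-admin service account to the built-in cluster-admin role, which is what will let us see everything in the dashboard with its token. To confirm both objects were created:

$ kubectl -n kube-system get serviceaccount eks-admin
$ kubectl get clusterrolebinding eks-admin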

Connect to the Dashboard

The dashboard should now be ready, but before we can sign in we need the service account token, so let’s retrieve that first.

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Name:         eks-admin-token-fvhts
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=eks-admin
              kubernetes.io/service-account.uid=b23f85a3-215c-11e9-b13a-0acbc0eb6bae

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      <authentication_token>
ca.crt:     1025 bytes
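
If you just want the raw token (for example to pipe it straight into your clipboard), a jsonpath query against the same secret works too; the token is stored base64-encoded, hence the decode step:

$ kubectl -n kube-system get secret \
    $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}') \
    -o jsonpath='{.data.token}' | base64 --decode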

Now run kubectl proxy, which will forward our requests to the EKS cluster.

$ kubectl proxy
Starting to serve on 127.0.0.1:8001
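
By default kubectl proxy listens on 127.0.0.1:8001. If that port is already taken, you can pick another one and adjust the URL below accordingly:

$ kubectl proxy --port=8002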

Open the following link with your web browser:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Choose Token, paste the token from the output of the previous command into the Token field, and choose SIGN IN.

This will bring you to the Kubernetes Dashboard overview page for the cluster.

Scaling the Cluster

Let’s say our platform has become so popular that users are now demanding increased workloads. This means we need to scale our platform horizontally to meet the demand.

No problem. eksctl makes it easy to scale the cluster.

Here’s how we can scale from 2 worker nodes to 3.

Let’s get the nodegroup:

$ eksctl get nodegroup --cluster=roncrivera-k8s
CLUSTER		NODEGROUP	CREATED			MIN SIZE	MAX SIZE	DESIRED CAPACITY	INSTANCE TYPE	IMAGE ID
roncrivera-k8s	ng-f141e4bf	2018-09-15T09:59:07Z	2		2		2			m5.large	ami-019966ed970c18502

Increase the number of worker nodes to 3:

$ eksctl scale nodegroup --cluster=roncrivera-k8s --nodes=3 ng-f141e4bf --region=ap-southeast-1
[ℹ]  scaling nodegroup stack "eksctl-roncrivera-k8s-nodegroup-ng-f141e4bf" in cluster eksctl-roncrivera-k8s-cluster
[ℹ]  scaling nodegroup, desired capacity from 2 to 3, max size from 2 to 3

Verify the worker count:

$ eksctl get nodegroup --cluster=roncrivera-k8s --region=ap-southeast-1
CLUSTER		NODEGROUP	CREATED			MIN SIZE	MAX SIZE	DESIRED CAPACITY	INSTANCE TYPE	IMAGE ID
roncrivera-k8s	ng-f141e4bf	2018-09-15T09:59:07Z	2		3		3			m5.large	ami-019966ed970c18502

$ kubectl get nodes
NAME                                                STATUS    ROLES     AGE       VERSION
ip-192-168-16-204.ap-southeast-1.compute.internal   Ready     <none>    3h        v1.11.5
ip-192-168-32-47.ap-southeast-1.compute.internal    Ready     <none>    3m        v1.11.5
ip-192-168-94-211.ap-southeast-1.compute.internal   Ready     <none>    3h        v1.11.5
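
Scaling back down works exactly the same way; just pass the smaller desired count:

$ eksctl scale nodegroup --cluster=roncrivera-k8s --nodes=2 ng-f141e4bf --region=ap-southeast-1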

Wrapping up

After this exercise, I am blown away by how simple it is to bootstrap an AWS EKS cluster. The fact that the master is managed by AWS means there is one less thing to worry about when looking after this orchestration platform. eksctl is also key to this simplicity, as it abstracts away the provisioning of all the resources required to set up a cluster.
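
One last thing: an EKS cluster costs money even when idle (the managed control plane plus the EC2 worker nodes), so once you’re done experimenting, everything eksctl created can be torn down with a single command:

$ eksctl delete cluster --name=roncrivera-k8s --region=ap-southeast-1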
