
Deploying Kubernetes on AWS with Kubespray

Written on January 18, 2018 by Ron Rivera.

4 min read

There are various options for creating a Kubernetes cluster on AWS, e.g. kops, kubeadm and kubespray. kops was originally designed for AWS and provides end-to-end management of provisioning, cluster creation and upgrades. kubeadm provides a simple way to bootstrap a cluster on a set of machines that are already provisioned. As a DevOps Engineer, I am a strong proponent of treating infrastructure as code, so I prefer a tool that codifies the provisioning process. This is where kubespray comes into play.

In this post, I will illustrate how I deployed a Kubernetes cluster on AWS using kubespray.

Prerequisites

  1. python, pip, git, ansible, terraform
  2. AWS IAM user and access key
  3. AWS EC2 key pair
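
Before going further, it's worth confirming the tools from step 1 are actually installed. A quick convenience check (this loop is my own sketch, not part of kubespray):

```shell
# verify the prerequisite CLI tools are on the PATH
for tool in python pip git ansible terraform; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```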

Clone the kubespray repo

git clone https://github.com/kubernetes-incubator/kubespray.git

Install the required Python modules

cd to the cloned repo and run the following:

sudo pip install -r requirements.txt

Set up the cluster

The following AWS resources will be provisioned as part of the cluster creation process.

  • 3 EC2 instances for the master nodes
  • 3 EC2 instances for etcd
  • 4 EC2 instances for worker nodes
  • 2 EC2 instances for bastion hosts
  • 1 AWS Elastic Load Balancer
  • 1 VPC with public and private subnets

Let's get started.

Create credentials.tfvars.

Create contrib/terraform/aws/credentials.tfvars with the following contents:

AWS_ACCESS_KEY_ID = ""
AWS_SECRET_ACCESS_KEY = ""
AWS_SSH_KEY_NAME = ""
AWS_DEFAULT_REGION = "ap-southeast-1"

Check contrib/terraform/aws/credentials.tfvars.example for inspiration.

Create terraform.tfvars.

Create contrib/terraform/aws/terraform.tfvars with the following contents:

#Global Vars
aws_cluster_name = "roncrivera"

#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]

#Bastion Host
aws_bastion_size = "t2.medium"

#Kubernetes Cluster
aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"

aws_etcd_num = 3
aws_etcd_size = "t2.medium"

aws_kube_worker_num = 4
aws_kube_worker_size = "t2.medium"

#Settings AWS ELB
aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"

Check contrib/terraform/aws/terraform.tfvars.example for inspiration.

Initialize Terraform and create the plan.

terraform init

Then create the plan.

terraform plan -var-file=credentials.tfvars -out=kubernetes_plan

Apply the generated plan to proceed with provisioning.

terraform apply kubernetes_plan

This command initially failed with the following error:

Error: output.default_tags: invalid variable syntax: "default_tags". Did you mean 'var.default_tags'?
 
If this is part of inline `template` parameter then you must escape the interpolation with two dollar signs. For example: ${a} becomes $${a}.

I figured this was due to incorrect references to the default_tags variable in the Terraform files: inside an interpolation it has to be written as ${var.default_tags}, not ${default_tags}.

So I submitted a Pull Request with the recommended fix to kubespray's project repo, and it got approved and merged. There goes my contribution to the Open Source community. :-)

Sorry, I digress.

If everything goes well, all the AWS resources defined in terraform will be created, e.g.

[screenshot: terraform apply output]

Terraform will also automatically create an Ansible inventory file called hosts that lists the created EC2 instances, along with an SSH configuration file, ssh-bastion.conf, which can be used to connect to the hosts through the bastion.
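
For example, to open a shell on one of the cluster nodes through the bastion (the node IP and key path below are placeholders to fill in from your own inventory):

```shell
ssh -F ./ssh-bastion.conf -i /path/to/key_pair.pem centos@<node-private-ip>
```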

Run the Ansible playbook to create the Kubernetes cluster.

ansible-playbook -i ./inventory/hosts ./cluster.yml \
-e ansible_user=centos -e bootstrap_os=centos \
-e ansible_ssh_private_key_file=/path/to/key_pair.pem \
-e cloud_provider=aws -b --become-user=root --flush-cache

The cluster installation process takes around 20 minutes, so be patient. :-)

Verify the cluster

Once the installation is complete, log in to the Kubernetes master and run the following commands:

$ kubectl cluster-info
$ kubectl get nodes
$ kubectl get all --all-namespaces

To destroy the cluster, just run:

terraform destroy -var-file=credentials.tfvars

Conclusion

Kubespray provides a consistent way of creating a Kubernetes cluster via infrastructure-as-code. This process can be further integrated into a Jenkins Pipeline to automate the provisioning process.
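
As a rough sketch of what such automation could look like, the manual steps above can be wrapped in a single shell function. The paths, user and flags mirror the ones used in this post; the function itself is hypothetical:

```shell
# hypothetical wrapper around the provisioning steps from this post;
# expects to be run from the root of the kubespray checkout
provision_cluster() {
  key_pair="$1"   # path to the EC2 key pair .pem file

  # provision the AWS infrastructure with Terraform
  ( cd contrib/terraform/aws &&
    terraform init &&
    terraform plan -var-file=credentials.tfvars -out=kubernetes_plan &&
    terraform apply kubernetes_plan ) || return 1

  # install Kubernetes on the provisioned instances with Ansible
  ansible-playbook -i ./inventory/hosts ./cluster.yml \
    -e ansible_user=centos -e bootstrap_os=centos \
    -e ansible_ssh_private_key_file="$key_pair" \
    -e cloud_provider=aws -b --become-user=root --flush-cache
}

# usage: provision_cluster /path/to/key_pair.pem
```

A Jenkins Pipeline stage could then call this function (or its two halves as separate stages) to run the whole provisioning end to end.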
