Five Ways to Run Kubernetes on AWS

If you have decided that Amazon Web Services (AWS) is the place you want to host your Kubernetes deployments, you have two primary AWS-native options: push the easy button and let AWS create and manage your clusters with Elastic Kubernetes Service (EKS), or roll up your sleeves and sweat the details with self-hosted Kubernetes on EC2. Between these two levels of complexity sit a number of install tools that abstract away some of the work of getting a Kubernetes cluster running on AWS. The most popular of these AWS-compatible tools are Kubernetes Operations (kOps), kubeadm, and Kubespray.

Below we’ll cover each option for running Kubernetes on AWS in greater detail, outline the prerequisites, and point to resources to help you get up and running:

  • [easy] Creating a Kubernetes cluster with Elastic Kubernetes Service (EKS)
  • [less easy] Creating a Kubernetes cluster on AWS with kOps
  • [less easy, more control] Creating a Kubernetes cluster on AWS with kubeadm
  • [less easy, more control, Ansible-centric] Creating a Kubernetes cluster on AWS with Kubespray
  • [hard, all the control] Manually creating a Kubernetes cluster on AWS with EC2 instances

Creating a Kubernetes Cluster on AWS with Elastic Kubernetes Service (EKS)

This is really the easy button when it comes to the options for running Kubernetes on AWS. With this option, AWS simplifies cluster setup, creation, patches, and upgrades. With EKS you get a highly available (HA) control plane, with three master nodes for each cluster spread across three AWS Availability Zones.

Although this is the simplest way to get a Kubernetes cluster up and running on AWS, there are still some prerequisites:

  • An AWS account
  • An IAM role with appropriate permissions to allow Kubernetes to create new AWS resources
  • A VPC and security group for your cluster (one for each cluster is recommended)
  • kubectl installed (you may want the Amazon EKS-vended version)
  • AWS CLI installed

If you have your prerequisites in place, the following resources will guide you through getting your first EKS cluster up and running:
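As a sketch of the workflow, here is roughly what creating a basic cluster with the `eksctl` CLI looks like; the cluster name, region, and node count below are placeholder values you would replace with your own:

```shell
# Sketch: create a basic EKS cluster with eksctl (names/values are placeholders).
# eksctl provisions the control plane and a managed node group for you.
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodes 3

# Point kubectl at the new cluster and verify the nodes have joined.
aws eks update-kubeconfig --region us-east-1 --name demo-cluster
kubectl get nodes
```

`eksctl` is a third-party (Weaveworks-originated) CLI that AWS documents as an official way to create EKS clusters; you can also create clusters directly through the AWS CLI or console.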

Creating a Kubernetes Cluster on AWS with kOps

Using Kubernetes Operations (kOps) abstracts away some of the complexity of managing Kubernetes clusters on AWS. kOps was designed specifically to work with AWS, though integrations with other public cloud providers are available. In addition to fully automating the installation of your k8s cluster, kOps runs everything in Auto Scaling groups and can support HA deployments. It can also generate a Terraform manifest, which can be kept in version control or used to have Terraform create the cluster itself.

If you wish to use kOps, there are a number of prerequisites before creating and managing your first cluster:

  • kubectl installed
  • kOps installed on a 64-bit (AMD64 or Intel 64) device architecture
  • your AWS prerequisites set up
  • DNS set up for the cluster, e.g. on Route 53 (or, for a quickstart trial, a simpler alternative is to create a gossip-based cluster)

Once you’ve checked off the prerequisites above, you are ready to follow the instructions in one of the resources below:
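To make this concrete, here is a hedged sketch of the kOps workflow, including the gossip-based shortcut (the `.k8s.local` suffix) that avoids the Route 53 DNS setup; the bucket name, cluster name, and zone are placeholders:

```shell
# Sketch: create a gossip-based kOps cluster on AWS (names are placeholders).
# kOps stores cluster state in S3; a .k8s.local suffix avoids needing Route 53.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

kops create cluster \
  --name demo.k8s.local \
  --cloud aws \
  --zones us-east-1a

# Apply the configuration to actually build the cluster.
kops update cluster --name demo.k8s.local --yes

# Alternatively, emit a Terraform manifest instead of creating resources directly.
kops update cluster --name demo.k8s.local --target=terraform --out=.
```

The last command illustrates the Terraform integration mentioned above: kOps writes the cluster definition as Terraform configuration that you can review, version, and apply yourself.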

Creating a Kubernetes Cluster on AWS with kubeadm

Kubeadm is a tool that is part of the official Kubernetes project. While kubeadm is powerful enough to use with a production system, it is also an easy way to simply try getting a K8s cluster up and running. It is specifically designed to install Kubernetes on existing machines. Even though it will get your cluster up and running, you will likely still want to integrate provisioning tools like Terraform or Ansible to finish building out your infrastructure.

To use kubeadm on AWS, you will need:
  • kubeadm installed
  • one or more EC2 instances running a deb/rpm-compatible Linux OS (e.g. Ubuntu or CentOS), with 2 GB+ of RAM per machine and at least 2 CPUs on the master node machine
  • full network connectivity (public or private) among all machines in the cluster. 

The following resources will help you get started with building a K8s cluster with kubeadm:
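As a rough sketch of what the bootstrap looks like on your pre-provisioned EC2 instances (the pod network CIDR below is a placeholder and should match whichever network add-on you choose):

```shell
# Sketch: bootstrap a cluster with kubeadm on existing EC2 instances.
# Run on the master node; the CIDR is a placeholder that should match
# your chosen pod network add-on.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for the current (non-root) user.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# On each worker node, run the join command printed by `kubeadm init`, e.g.:
# sudo kubeadm join <master-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```

Note that kubeadm only bootstraps Kubernetes itself; installing a pod network add-on and any cloud integration is a separate step.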

Creating a Kubernetes Cluster on AWS with Kubespray

Kubespray is another installer tool; it leverages Ansible playbooks to configure and manage the Kubernetes environment. One benefit of Kubespray is its support for multi-cloud deployments, so if you are looking to run your cluster across multiple providers or on bare metal, it may be of interest. Kubespray actually builds on some kubeadm functionality and may be worth adding to your toolkit if you are already using kubeadm.

To run Kubespray on AWS, you will need to:
  • uncomment the cloud_provider option in group_vars/all.yml and set it to ‘aws’
  • create IAM roles and policies for both “kubernetes-master” and “kubernetes-node”
  • tag the resources in your VPC appropriately for the aws provider
  • enable both DNS Hostnames support and Private DNS on your VPC
  • ensure the hostnames in your inventory file are identical to the internal hostnames in AWS

The following resources will help you get your Kubernetes cluster up and running on AWS with Kubespray:
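Once those prerequisites are in place, the deployment itself is an Ansible playbook run. This sketch follows the standard Kubespray quickstart; the inventory path and cluster name are placeholders:

```shell
# Sketch: run Kubespray's Ansible playbooks against your AWS hosts
# (inventory path is a placeholder; see the Kubespray repo for details).
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# Copy the sample inventory and edit it to list your EC2 hosts.
cp -rfp inventory/sample inventory/mycluster

# Deploy the cluster, becoming root on the target hosts.
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root cluster.yml
```

Because it is all Ansible, the same playbook run is how you later scale or upgrade the cluster, just with different playbooks from the repository.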

Manually Creating a Kubernetes Cluster on EC2 (aka, Kubernetes the Hard Way)

If EKS is the “easy button,” installing on EC2 instances is the opposite. If you need full flexibility and control over your Kubernetes deployment, this may be for you. If you’ve spent any time with Kubernetes, you’ve almost certainly heard of “Kubernetes the Hard Way.” While KTHW originally targeted Google Cloud Platform, AWS instructions are included in the AWS and Kubernetes section. Running through the instructions provides a detailed, step-by-step process for manually setting up a cluster on EC2 servers that you have provisioned. The title, by the way, is not a misnomer: if you do run through this manual process, you will be rewarded with a deep understanding of how Kubernetes works internally.

If you are actually planning to use your Kubernetes on EC2 system in production, you will likely still want some level of automation, and a functional approach would be to use Terraform with Ansible. While Terraform is much more than a K8s install tool, it allows you to manage your infrastructure as code by scripting tasks and managing them in version control. There is a Kubernetes-specific Terraform module that helps to facilitate this. Ansible complements Terraform’s infrastructure management prowess with software management functionality for scripting Kubernetes resource management tasks via the Kubernetes API server.
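As a sketch, the Terraform half of that workflow follows the usual init/plan/apply cycle against a configuration describing your cluster infrastructure; the directory name below is a placeholder:

```shell
# Sketch: the standard Terraform workflow for provisioning cluster infrastructure.
# Assumes a directory of .tf files defining your VPC, security groups,
# and EC2 instances for the cluster (directory name is a placeholder).
cd infra/

terraform init    # download providers and initialize state
terraform plan    # preview the resources that will be created
terraform apply   # create the infrastructure (prompts for confirmation)

# Ansible then takes over to install and configure Kubernetes on the new hosts.
```

The division of labor is deliberate: Terraform owns the lifecycle of the AWS resources, while Ansible handles everything that happens on the machines once they exist.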

The following resources will help you get started with creating a self-managed Kubernetes cluster on EC2 instances:


In this article, we considered five common ways to get a Kubernetes cluster running on Amazon Web Services. Which one you choose will depend on how much control you need over the infrastructure you are running the cluster on and what your use case is. If you are just trying out Kubernetes or setting up a dev environment, a quick and repeatable solution is likely preferable. In a production system, you’ll want tools that simplify administrative tasks like rolling upgrades without needing to tear down the entire system.

The tools we covered are the most popular solutions for deploying on AWS. You may have noticed that there is a degree of overlap and integration among several of the approaches; for example, using kOps to generate a Terraform manifest and then installing on self-hosted EC2 instances is a possibility. Kubernetes is known for being a challenge to manage manually, and the tools we covered are under active development to simplify that process. More tools are constantly being created to address specific use cases. For example, Kubicorn is an unofficial, golang-centric K8s infrastructure management solution. While not all of the tools listed are AWS-specific, you can explore the CNCF installer list from the CNCF K8s Conformance Working Group to get a better sense of the diversity of options available.