Docker containers produced a seismic shift in how applications are built and managed. They vastly simplified container implementation and thus accelerated the adoption of containerized microservices architectures. Containers are now widely trusted to run mission-critical applications, increasing speed, reliability, and availability. However, containers on their own are, in simplest terms, a means of process isolation. Managing the many containers that make up a single application, especially when microservices are scaled in response to application load, requires a management or orchestration solution.

At present, the leading technologies in the container orchestration space are Kubernetes and Amazon’s ECS (Elastic Container Service). Kubernetes is an open-source container orchestration platform that was originally developed by Google. Amazon ECS is Amazon Web Services’ (AWS) proprietary managed container orchestration service.

Amazon developed ECS as a specific solution for running and managing Docker containers in its public cloud. In 2017, AWS announced support for Kubernetes with its EKS offering (Elastic Kubernetes Service). Amazon EKS provides an AWS-managed Kubernetes service that simplifies the non-trivial task of deploying a highly available, scalable, fully managed Kubernetes control plane.

As Amazon ECS and Kubernetes (EKS) both offer similar container orchestration services for running and managing containers on AWS, an engineering team may be puzzled over which to choose. In this blog post, I’ll share the differences between the two platforms and highlight the advantages of each, so you will be better able to choose the best option for your application management use case.

Who needs container orchestration?

While the idea of isolating processes has a long history (see chroot jails), more modern container solutions like Ubuntu’s LXC/LXD and Google’s lmctfy were not particularly simple to implement. The benefit of isolating processes, rather than whole virtual machines, is twofold. Much as one can pack multiple virtual machines onto a physical server, one can pack even more containers onto a physical (or virtual) server. This granularity enables better resource packing: more apps on a given resource with less waste, in theory. Containers, being smaller resources, are also much faster to provision, which makes scaling an application up or down in response to load far quicker: on the order of hundreds of milliseconds for a container versus minutes for a VM-based equivalent.

When we are dealing with response times of milliseconds, we can no longer expect an alerting system with human intervention to keep a system in its ideal state. This is where a container orchestration system comes into play. Its role is to automate container management: the deployment, scaling, networking, and availability of container clusters. Container orchestrators handle provisioning and deployment of containers onto instances, scaling in response to variations in load, monitoring and responding to container and node health, application upgrades, and sensible resource allocation.
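In Kubernetes terms, for example, that automation is driven by declarative desired state: you declare what should be running, and the orchestrator continuously reconciles reality against it. A minimal sketch of such a declaration (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical application name
spec:
  replicas: 3          # desired state: keep three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative container image
          ports:
            - containerPort: 80
```

If a node fails or a container crashes, the orchestrator notices that the actual replica count has drifted from three and schedules a replacement, with no human alert loop required.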

Amazon ECS

Amazon Elastic Container Service (Amazon ECS) was developed to easily run and scale Docker container-based applications on AWS. ECS is highly scalable, offers high availability and security, and is deeply integrated with a variety of AWS services, including Amazon ELB, Amazon VPC, AWS IAM, and many more. While the default model is to deploy containers to EC2 instances, Amazon ECS also leverages AWS Fargate, so you can deploy containers without the need to provision servers. Among the pros of using ECS are its ease of use, its integration with AWS services, and its cost savings compared to EKS. Because of its out-of-the-box integration with AWS IAM (Identity and Access Management), it is also more secure than an EKS implementation that lacks appropriate security measures.
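As a sketch of what this looks like in practice, a minimal Fargate-compatible ECS task definition might resemble the following (the family name, image, and role ARN are placeholders); it could be registered with `aws ecs register-task-definition` and then launched without provisioning any EC2 instances:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.25",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```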

Amazon EKS

Kubernetes has, without any doubt, become the de facto open-source solution for container orchestration. Along with it, a vibrant ecosystem of (mostly open-source) tools has grown to extend Kubernetes functionality. While Kubernetes is a remarkable and popular container orchestrator, it still requires infrastructure to run on.

Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to set up Kubernetes on AWS. Because EKS runs standard Kubernetes, all EKS-managed apps are fully compatible with applications managed by any standard Kubernetes environment. This includes the possibility of using an EKS control plane to integrate with or manage a Kubernetes system in another cloud environment (that is, multi-cloud deployments). Because the application-level management structure is defined by Kubernetes, this option also helps avoid vendor (AWS) lock-in: if your app runs on EKS, it should run on Azure’s AKS, Google’s GKE, or a privately hosted Kubernetes system.

Which is cheaper, Amazon ECS or EKS?

Given that cost is often paramount in decision making, let’s look at the cost differences, as ECS and EKS have different pricing models. With ECS, there is no additional charge for the orchestration service itself: you pay only for the AWS resources (such as EC2 instances or Fargate tasks) you create to store and run your application.

With EKS, the same resource costs apply. However, the EKS control plane runs master nodes across multiple availability zones. This is a minimal cost, $75 a month for each Amazon EKS cluster with a standard 3-node configuration, but if you are running multiple clusters, it may still add up.
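Using the control-plane figure above, a quick back-of-the-envelope calculation shows how the per-cluster charge accumulates (worker-node costs are excluded, since they apply equally to ECS and EKS):

```python
def monthly_control_plane_cost(num_clusters, cost_per_cluster=75.0):
    """Estimate the monthly EKS control-plane charge.

    cost_per_cluster is the flat per-cluster fee cited above
    ($75/month); actual AWS pricing may differ over time.
    """
    return num_clusters * cost_per_cluster

# A team running separate dev, staging, and prod clusters:
print(monthly_control_plane_cost(3))  # 225.0
```

Three clusters means $225 a month before a single application container runs, which is the kind of fixed overhead ECS avoids.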

How easy is it to deploy?

There is no question that ECS is the simpler container orchestration solution. With EKS, Kubernetes is a separate control plane that is layered on top of AWS services. With ECS, the control plane is not a user’s concern. Also, because ECS was specifically designed for the AWS environment, an AWS user will be able to set it up and then interact with their application via the familiar AWS management console.

In contrast, with EKS, a user will need to interact with the Kubernetes API itself, typically using kubectl to configure and deploy Pods (and a tool such as eksctl to provision the cluster).

To summarize, orchestrating containers with Kubernetes demands more expertise from DevOps engineers, whereas ECS is generally considered easier to operate.

A note on networking and EC2 Container Capacity

There is a difference between how Elastic Network Interfaces (ENIs; think virtual NICs) are allocated in ECS and EKS that can limit the number of tasks/Pods per EC2 instance. In ECS it is possible to assign an ENI directly to a task by using awsvpc network mode. This limits the number of containers that can be run per instance; the actual number depends on the specific instance type and ranges from 24 to 120 tasks.

In EKS, a dedicated network interface is mapped to each Pod. The result is that the same internal network and public IP are effectively shared by all the containers running in that Pod. An ENI can also be shared among several Pods. This allows a user to load up to 750 Pods per instance, compared to a maximum of 120 tasks per instance for ECS.
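For EKS nodes using the AWS VPC CNI plugin, the per-instance Pod ceiling follows from each instance type's ENI and IP limits; a sketch of the calculation (the m5.large figures below are taken from AWS's published per-instance ENI limits):

```python
def max_pods(eni_count, ipv4_per_eni):
    """Max Pods per node under the AWS VPC CNI plugin.

    Each ENI contributes (ipv4_per_eni - 1) Pod addresses, since one
    address is reserved for the ENI itself; the +2 accounts for
    host-networked system Pods that do not consume a VPC address.
    """
    return eni_count * (ipv4_per_eni - 1) + 2

# An m5.large supports 3 ENIs with 10 IPv4 addresses each:
print(max_pods(3, 10))  # 29
```

Larger instance types with more ENIs and more addresses per ENI are what make the high Pod densities quoted above possible.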

Comparative Security

This is largely a matter of implementation specifics, as properly configured security is, to a large extent, the responsibility of the system’s operators. The out-of-the-box IAM integration arguably tips ease of configuration toward ECS, but let’s look at some of the options.

Networking plays into security, as assigning Security Groups is a fundamental network security function. The ability to assign an ENI per task/Pod enables users to attach a Security Group to those tasks/Pods, rather than keeping all ports open on the host EC2 instance. While assigning specific Security Group policies to a given ENI gives granular security control, the use of Security Groups in association with a load balancer is an alternative approach that may level the playing field for both orchestration options.

Amazon ECS’s deep IAM integration is more of a differentiator. By default, the service allows users to assign granular access permissions per container and service, and it can limit a container’s access to resources such as S3, DynamoDB, Redshift, and SQS. Thanks to the rapidly expanding universe of tools that support Kubernetes, there are options (e.g., KIAM) that provide similar functions in an EKS environment, but this comes at the cost of additional system complexity.
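For instance, a task role carrying only the IAM policy below (the bucket name is hypothetical) would let one container read a single S3 bucket while every other AWS resource stays off-limits; in ECS, such a role is attached per task via the task definition’s `taskRoleArn` field:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-reports-bucket/*"
    }
  ]
}
```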

Does multi-cloud need to be an option?

Multi-cloud has become a bit of a hot topic as it can provide several benefits that even a vendor as large as AWS does not. One example is “cloud-bursting” to quickly provision additional external resources to augment the current cloud system. Multi-cloud systems also provide disaster recovery or high availability assurance that goes beyond replication across availability zones or regions. It also allows systems to leverage services that are unique to specific cloud vendors. 

If you go the ECS route, the ease of implementation needs to be balanced against the reality of vendor lock-in. If you are an AWS-only shop, this may not be an issue. If you are looking to spread risk, take advantage of better pricing opportunities, or provide better service to areas that are limited by AWS offerings, EKS may provide a greater range of options.

So which to choose, Amazon ECS or EKS?

As with all architecture choices, the critical step is comparing your architectural needs to the options provided by available solutions. If you are happy with being tied to AWS and don’t long for portability and flexibility across multiple infrastructure vendors or a private cloud, then ECS is a sensible choice. If you want the flexibility to integrate with the open-source Kubernetes community, spending the additional effort on setting up EKS may be the better option. And if you are already running workloads on Kubernetes, EKS will be a familiar and simple route to moving to an AWS environment.

Regardless of which choice you make, orchestration tools like ECS and EKS are only as smart as their most recent configuration. The impressive abilities that both orchestrators provide in maintaining a desired system state are limited by the need to monitor and update resource limits, performance targets, and cost targets.

Opsani’s AI-driven optimization engine lets you focus on delivering the core values of your business by automating away the toil – the repetitive and manual tasks – associated with tuning systems that are constantly changing. Opsani leverages artificial intelligence and machine learning, particularly deep reinforcement learning, to accurately predict traffic spikes and resource requirements, determine the best moment to scale up or down, and seamlessly integrate with AWS tooling to iteratively automate systems tuning. No toil necessary. To find out how Opsani can reduce your AWS spend, check out AWS Does Not Equal Cloud Optimization.