Containers

If, instead of deploying your workloads directly on virtual machines, you prefer to deploy them using containers, you have multiple options. First, do you need to manage and have control over the virtual machines running the containers? If you don't, you can opt for AWS Fargate, which provides a serverless environment to run containers. If you do, you must pick and manage the EC2 instances that will best support your workload. How do you decide which instance, then? Well, for that, refer to the previous section discussing EC2 instance selection, and also check that the instance types you select are compatible with your container orchestration service.

You also have to decide on a container orchestration service: either Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS). So, how do you choose?

Briefly, ECS is AWS's native container orchestrator. It brings the benefits of ease of use and integration with other AWS services, such as IAM and Elastic Load Balancing (ELB), letting you leverage either an Application Load Balancer (ALB) or a Network Load Balancer (NLB). Native integration means that no additional layer of abstraction is required to integrate with the service at hand.

EKS, on the other hand, brings the flexibility of Kubernetes, with its vibrant user community and broad ecosystem. It also integrates well with AWS services such as IAM and ELB, but it typically relies on a layer of abstraction to integrate with cloud services. If you are familiar with and already using Kubernetes, then EKS will be the natural choice. If that is not the case, especially if you are just starting with containers on AWS, then ECS will be easier to grasp and offers a gentler learning curve. And unless you really plan to exploit the flexibility provided by Kubernetes, EKS may well be overkill.

Last but not least, don't forget that AWS offers a serverless container service, Fargate, which supports both ECS and EKS. Unless you need to keep full control over the underlying servers running your containers, Fargate will make your life easier by managing and controlling that part of the infrastructure. That way, you can focus on building, deploying, and managing containers without worrying about the underlying servers.

Whichever container service you end up choosing, you will have to configure it to meet your performance requirements. The configuration options obviously vary per service but, exactly as with EC2 instances, you will need to run some experiments to find your optimal configuration.

For instance, ECS lets you run your containers inside tasks (a task runs one or more containers), and tasks are allocated CPU and memory to do their job. When you deploy ECS tasks on EC2 instances (the EC2 launch type), you have great flexibility in how CPU and memory are allocated to the tasks, and you can even overcommit resources on the same underlying EC2 instance. When you deploy ECS tasks on Fargate (the Fargate launch type), you also specify how much CPU and memory are allocated to a task, but you have less flexibility and less granular control, in particular because Fargate manages the infrastructure on your behalf (and, for instance, does not overcommit resources).

With EKS, containers are deployed in Pods (rather than ECS tasks), and you control how much CPU and memory are allocated to Pods in a similar fashion. Going any deeper into Kubernetes is far beyond the scope of this book, so it will suffice to mention that Kubernetes provides additional flexibility to control and tune the underlying infrastructure configuration (for instance, setting limits not only at the Pod level but also at the namespace level). And again, if you are not interested in managing the infrastructure, just let AWS do it for you by leveraging Fargate with EKS.
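To make the task-level settings more concrete, here is a minimal sketch using boto3 that registers a Fargate-compatible ECS task definition with CPU and memory set at both the task level and the container level. The task family, image, account ID, and execution role name are hypothetical placeholders, and the values shown are one of the CPU/memory combinations that Fargate accepts (0.25 vCPU with 512 MB).

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition that can run on Fargate.
# Task-level cpu/memory must be one of the combinations Fargate supports
# (e.g., 256 CPU units = 0.25 vCPU with 512 MiB of memory).
response = ecs.register_task_definition(
    family="web-app",                      # hypothetical task family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",                  # required for the Fargate launch type
    cpu="256",                             # task-level CPU (in CPU units)
    memory="512",                          # task-level memory (in MiB)
    # Hypothetical execution role, typically needed to pull images and ship logs:
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",  # example image
            "essential": True,
            # Container-level reservations must fit within the task-level limits.
            "cpu": 128,
            "memory": 256,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```

With the EC2 launch type, the same call works, but the task-level cpu and memory values become optional; relying on container-level reservations instead is what makes it possible to overcommit resources on the underlying instance.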

For monitoring the performance of your containers, AWS provides a solution called Amazon CloudWatch Container Insights, which gathers metrics for containerized workloads running on either ECS or EKS, including workloads running on Fargate with either platform. As you have already seen for EC2 instances, CloudWatch collects metrics about various resources, including CPU, memory, disk, and network utilization. These metrics typically provide visibility at the ECS or EKS cluster level. For more granular visibility, Container Insights adds metrics at the level of container instances (ECS), nodes (EKS), tasks (ECS), and Pods (EKS), covering CPU, memory, storage, and network utilization. It also provides diagnostic information, such as container restart failures (EKS), to help you spot issues. Additionally, you can integrate CloudWatch with an open-source solution such as Prometheus, an immensely popular monitoring toolkit from the Cloud Native Computing Foundation (CNCF). Integrating Prometheus with CloudWatch lets you reduce the number of tools used for monitoring while still being able to collect either pre-defined sets of metrics or custom metrics from your container workloads.
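As an illustration, once Container Insights is enabled on a cluster, its metrics can be queried from CloudWatch like any other metric. The following sketch pulls average and maximum CPU usage for an ECS service from the ECS/ContainerInsights namespace using boto3; the cluster and service names are hypothetical, and the metric assumes Container Insights has been enabled on that cluster.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Retrieve CPU usage reported by Container Insights for an ECS service
# over the last hour, in 5-minute intervals.
response = cloudwatch.get_metric_statistics(
    Namespace="ECS/ContainerInsights",
    MetricName="CpuUtilized",
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-cluster"},   # hypothetical cluster name
        {"Name": "ServiceName", "Value": "my-service"},   # hypothetical service name
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```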