Spot

Perhaps the most cost-effective pricing model out there, Spot Instances can provide savings of up to 90% compared to On-Demand pricing. However, this drastic price reduction does come with some caveats. Spot pricing is adjusted based on the supply of, and demand for, a particular instance type within a given Availability Zone (AZ) and Region. If demand for a particular instance family is low, the Spot price drops significantly and the spare capacity is made available to anyone who wants it. However, if demand for that instance type is high across the AZ or Region, the Spot price can climb close to the On-Demand price.
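
For example, you can compare recent Spot prices for an instance type across the AZs of a Region by querying the Spot price history. The following is a minimal sketch using boto3; the Region and instance type are illustrative choices, not recommendations:

  import boto3
  from datetime import datetime, timedelta, timezone

  ec2 = boto3.client("ec2", region_name="us-east-1")  # illustrative Region

  # Pull the last hour of Spot price history for one instance type.
  response = ec2.describe_spot_price_history(
      InstanceTypes=["m5.large"],
      ProductDescriptions=["Linux/UNIX"],
      StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
  )

  # Keep only the most recent price observed in each AZ.
  latest_per_az = {}
  for entry in sorted(response["SpotPriceHistory"], key=lambda e: e["Timestamp"]):
      latest_per_az[entry["AvailabilityZone"]] = entry["SpotPrice"]

  for az, price in sorted(latest_per_az.items()):
      print(f"{az}: ${price}/hour")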

So, when is the right time to use a Spot Instance, and for which workloads? Spot Instances are best suited for workloads that are stateless and flexible about when they start and stop. This is because if demand for a particular instance type rises within an AZ, AWS can reclaim the instance, giving you only a two-minute interruption notice. As a result, Spot Instances are a good fit for batch processing workloads, or for stateless container workloads that are designed to tolerate flexible start and stop times.
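
To handle these interruptions gracefully, a process on the instance can poll the instance metadata service for the interruption notice and checkpoint its work when one appears. The following is a minimal sketch assuming IMDSv2 and only Python's standard library; the checkpoint/drain step is application-specific and left as a comment:

  import json
  import time
  import urllib.error
  import urllib.request

  IMDS = "http://169.254.169.254/latest"

  def imds_token():
      # IMDSv2 requires a short-lived session token.
      req = urllib.request.Request(
          f"{IMDS}/api/token",
          method="PUT",
          headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
      )
      with urllib.request.urlopen(req, timeout=2) as resp:
          return resp.read().decode()

  def check_interruption():
      # The endpoint returns 404 until an interruption is scheduled.
      req = urllib.request.Request(
          f"{IMDS}/meta-data/spot/instance-action",
          headers={"X-aws-ec2-metadata-token": imds_token()},
      )
      try:
          with urllib.request.urlopen(req, timeout=2) as resp:
              return json.loads(resp.read().decode())
      except urllib.error.HTTPError as err:
          if err.code == 404:
              return None
          raise

  while True:
      notice = check_interruption()
      if notice:
          # Example payload: {"action": "terminate", "time": "..."}
          print(f"Interruption scheduled: {notice}")
          # Checkpoint work and drain gracefully here (application-specific).
          break
      time.sleep(5)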

Imagine that you are running a data analysis pipeline that processes large datasets periodically. Instead of using more expensive On-Demand Instances, you can launch Spot Instances just for the duration of each processing run. These instances handle compute-intensive tasks, such as running machine learning models, statistical analyses, or Extract, Transform, Load (ETL) jobs. This can mean significant cost savings, especially for non-urgent tasks where interruptions are acceptable.
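
As a sketch of how such a worker could be launched programmatically, the following boto3 call requests a one-time Spot Instance that is simply terminated when AWS reclaims the capacity (the Region, AMI ID, and instance type are placeholders, not recommendations):

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")  # illustrative Region

  # Launch a one-time Spot Instance for a batch/ETL worker. The AMI ID and
  # instance type below are placeholders for this sketch.
  response = ec2.run_instances(
      ImageId="ami-0123456789abcdef0",   # placeholder AMI
      InstanceType="c5.2xlarge",         # placeholder compute-optimized type
      MinCount=1,
      MaxCount=1,
      InstanceMarketOptions={
          "MarketType": "spot",
          "SpotOptions": {
              "SpotInstanceType": "one-time",
              "InstanceInterruptionBehavior": "terminate",
          },
      },
      TagSpecifications=[
          {"ResourceType": "instance",
           "Tags": [{"Key": "purpose", "Value": "etl-batch"}]},
      ],
  )

  print(response["Instances"][0]["InstanceId"])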

Although the three pricing models discussed apply to Amazon EC2, AWS Fargate, and AWS Lambda, costs are still calculated differently across the three services. For an Amazon EC2 instance, pricing is fairly straightforward: it is based on the instance type that you select, with each instance type providing a fixed combination of resources such as vCPUs, memory, and network performance, and the hourly rate also varies with the operating system you run. This pricing mechanism changes with both AWS Fargate and AWS Lambda. For example, AWS Lambda charges customers based on the following three parameters (a rough cost calculation follows the list):

  • The time taken to execute the Lambda function, with the billed duration rounded up to the nearest millisecond
  • The number of requests made to the function
  • The amount of memory allocated to the function
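
To make this concrete, here is a back-of-the-envelope calculation for a hypothetical function, using illustrative per-request and per-GB-second rates (actual Lambda rates vary by Region and CPU architecture, and the free tier is ignored here):

  # Rough monthly cost estimate for a hypothetical Lambda function.
  requests_per_month = 5_000_000
  avg_duration_ms = 120              # average billed duration per invocation
  memory_gb = 512 / 1024             # 512 MB of allocated memory

  price_per_million_requests = 0.20  # USD, illustrative
  price_per_gb_second = 0.0000166667 # USD, illustrative

  gb_seconds = requests_per_month * (avg_duration_ms / 1000) * memory_gb
  request_cost = (requests_per_month / 1_000_000) * price_per_million_requests
  compute_cost = gb_seconds * price_per_gb_second

  print(f"GB-seconds: {gb_seconds:,.0f}")
  print(f"Estimated monthly cost: ${request_cost + compute_cost:,.2f}")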

Similarly, for containers running on AWS Fargate, customers pay for the vCPU, memory, and storage allocated to a task for as long as the task runs; a rough cost sketch follows below.
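
As an illustration, the following back-of-the-envelope sketch estimates the cost of a single Fargate task, using illustrative per-vCPU-hour and per-GB-hour rates (actual rates vary by Region, operating system, and CPU architecture, so treat the figures as placeholders):

  # Rough cost estimate for one Fargate task. The rates below are
  # illustrative placeholders; check the AWS pricing page for current figures.
  vcpu = 1.0
  memory_gb = 2.0
  runtime_hours = 8              # how long the task runs

  price_per_vcpu_hour = 0.04048  # USD, illustrative
  price_per_gb_hour = 0.004445   # USD, illustrative

  cost = runtime_hours * (vcpu * price_per_vcpu_hour + memory_gb * price_per_gb_hour)
  print(f"Estimated task cost: ${cost:.2f}")

Now that we have discussed compute pricing models, it is time to look at another important part of your enterprise infrastructure: storage.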