Functions

Functions go a step further with the level of abstraction they provide. AWS Lambda functions let you focus on developing the application code while the Lambda service manages and scales the underlying infrastructure for you. You don’t need to package your code in a container, although you can do so if you really want to. Lambda supports several programming languages through runtimes. You can, for instance, develop code in Java, Python, JavaScript (Node.js), .NET, and more, and leverage one of Lambda’s built-in runtimes. If your preferred programming language is not supported by any of the built-in Lambda runtimes, you can check whether one of the community-supported Lambda runtimes can help. And, as a last resort, you can also bring your own custom runtime.
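
To illustrate the programming model, here is a minimal sketch of a handler that one of the built-in Python runtimes could execute; the "name" event field is purely illustrative, not something Lambda defines.

```python
# Minimal Lambda handler for a built-in Python runtime.
# Lambda invokes this function with the event payload and a context object.
def lambda_handler(event, context):
    # "name" is an illustrative field expected in the incoming event.
    name = event.get("name", "world")
    return {"message": f"Hello, {name}!"}
```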

Lambda works hand in hand with Amazon API Gateway to expose your functions as services through a set of APIs. API Gateway is also entirely serverless and likewise manages and scales the underlying infrastructure for you.
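
As a sketch of what that integration looks like from the function’s side, the handler below returns the response shape an API Gateway proxy integration expects (status code, headers, and a string body); the payload itself is made up for the example.

```python
import json

# With a Lambda proxy integration, API Gateway passes the HTTP request
# details in "event" and expects a response object of this shape.
def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from Lambda"}),
    }
```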

When deploying a Lambda function, you specify how much memory you want Lambda to allocate at runtime. Lambda automatically allocates an amount of CPU that is proportional to the amount of memory you specify. The more memory (and thus CPU) you allocate, the faster your Lambda function executes, until it can no longer make use of the additional memory and CPU power. The optimal memory setting for performance therefore sits just before the point where the gains start plateauing. Also, since a Lambda function’s cost depends on both its allocated memory and its execution duration, there comes a point where adding more memory (and thus CPU) yields diminishing returns: the extra cost outweighs the performance gained.
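
The arithmetic below sketches why. Lambda billing is based on GB-seconds (allocated memory multiplied by duration); the price and durations used here are illustrative assumptions, not measured values.

```python
# Illustrative cost model: Lambda bills duration x allocated memory (GB-seconds).
# The price below is an example figure; check current Lambda pricing.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# While the function is CPU-bound, doubling memory roughly doubles CPU and can
# halve the duration, so the cost per invocation stays about the same:
print(invocation_cost(512, 800))    # ~0.0000067 USD
print(invocation_cost(1024, 400))   # ~0.0000067 USD
# Once extra CPU no longer helps, duration stops shrinking and cost climbs:
print(invocation_cost(2048, 380))   # ~0.0000127 USD
```

Even a back-of-the-envelope calculation like this one helps locate where the cost-to-performance ratio starts to degrade.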

An additional consideration for Lambda functions, which long-time Lambda users are familiar with, is cold starts. AWS Lambda functions run on infrastructure managed by AWS. So-called cold starts are experienced when an incoming request requires a new execution environment to be provisioned, resulting in an extra delay before the function at hand is executed. For a long time, there was no proper solution to that problem. Some tried to keep the infrastructure supporting their Lambda functions warm by generating sufficient synthetic traffic, but that approach was empirical and far from optimal. Nowadays, although you cannot completely eliminate the issue (in the case of very spiky traffic, for instance), you can greatly reduce its effects by provisioning sufficient capacity ahead of time, using provisioned concurrency, for the functions where you know cold starts could have a significant impact.
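
As a sketch under assumptions (the function name and alias are hypothetical), provisioned concurrency can be configured with a single API call against a published version or alias:

```python
import boto3

# Keep a number of execution environments initialized ahead of traffic.
# "my-function" and the "prod" alias are hypothetical; provisioned
# concurrency must target a published version or an alias, not $LATEST.
lambda_client = boto3.client("lambda")

lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="prod",
    ProvisionedConcurrentExecutions=50,  # environments kept warm
)
```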

As always on AWS, improving performance starts with collecting metrics about your workload. In the case of AWS Lambda, Amazon CloudWatch Lambda Insights conveniently collects metrics from your Lambda functions on CPU, memory, disk, and network usage. It also provides additional diagnostic information, for instance, about cold starts or worker shutdowns, to help you spot anomalies or issues and fix them more easily.
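
Lambda Insights is enabled per function by attaching the AWS-published LambdaInsightsExtension layer and granting the execution role the corresponding CloudWatch permissions. The sketch below uses a hypothetical function name, and the layer ARN (account, region, and version) should be taken from the current AWS documentation:

```python
import boto3

# Attach the Lambda Insights extension layer to an existing function.
# The layer ARN varies by region and version; the one below is illustrative.
# Note: "Layers" replaces the function's existing layer list, so include
# any layers that are already attached.
lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-function",  # hypothetical function name
    Layers=[
        "arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"
    ],
)
```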