Understand the architecture of Amazon CloudFront. Know that cached data is stored at edge locations based on demand. Regional edge caches are intermediate caches that sit between the edge locations and the origin, allowing edge locations to refresh content without having to go all the way back to the originating source.
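For orientation, the following boto3 sketch creates a minimal distribution in front of a hypothetical S3 origin; the bucket name, origin ID, and the choice of the AWS managed CachingOptimized cache policy are illustrative assumptions, not a production configuration. Edge locations (via the regional edge caches) pull from this origin only on a cache miss.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Example distribution",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "example-origin",
                    # Hypothetical S3 bucket acting as the originating source
                    "DomainName": "example-bucket.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "example-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print(response["Distribution"]["DomainName"])  # e.g., dxxxxxxxxxxxx.cloudfront.net
```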
Know the details of CloudFront invalidations and the protocols supported. Know that CloudFront supports SSL/TLS session termination at the edge and that the AWS Web Application Firewall (WAF) can be placed in front of the edge locations to filter malicious web traffic and help protect against denial-of-service attacks. Know that CloudFront can support additional services such as processing at the edge using Lambda@Edge, the serverless compute service from AWS.
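An invalidation removes objects from every edge location so that the next request is refreshed from the origin. A minimal boto3 sketch, using a placeholder distribution ID and example paths:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate cached objects at all edge locations; the distribution ID
# and paths below are placeholders.
cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/images/*"]},
        "CallerReference": str(time.time()),  # unique token to avoid duplicate requests
    },
)
```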
Understand how Global Accelerator works. Know that it is used to bring traffic from the Internet onto the AWS global network as close to the source as possible, offering better performance and lower latency than traversing the public Internet all the way to the intended AWS Region. Know that anycast IP addresses are used; the same addresses are advertised to the Internet over BGP from multiple edge locations, which allows users to connect to the nearest Global Accelerator edge location. Understand how custom routing accelerators differ from standard accelerators: instead of routing to the closest healthy endpoint, they deterministically map listener ports to specific destination EC2 instances and ports.
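The difference is visible in the API itself: a standard accelerator returns the static anycast IP addresses that AWS advertises over BGP, while a custom routing accelerator is created with a separate call and later mapped to specific destinations. A minimal boto3 sketch, with the accelerator names as illustrative assumptions:

```python
import boto3

# Global Accelerator is a global service; its API endpoint lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Standard accelerator: AWS assigns static anycast IP addresses that are
# advertised from multiple edge locations.
accel = ga.create_accelerator(
    Name="example-accelerator",  # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)
print(accel["Accelerator"]["IpSets"])  # the static anycast IP addresses

# Custom routing accelerator: traffic arriving on listener ports is mapped
# deterministically to specific EC2 instance IPs and ports.
custom = ga.create_custom_routing_accelerator(
    Name="example-custom-routing",  # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)
```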
Remember the different types of load balancers. Application Load Balancers operate at layer 7 of the OSI model and can switch content based on information in the HTTP headers and the URL. The listener supports unencrypted HTTP or encrypted HTTPS (SSL/TLS) traffic. On the backend, targets can include Lambda functions, containers managed by Kubernetes or Docker, EC2 virtual servers, and IP addresses, covering services both inside and external to the AWS cloud.
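A minimal boto3 sketch of an Application Load Balancer with an HTTP listener and a path-based routing rule; the subnet, security group, and VPC IDs and the target group layout are illustrative assumptions, not a production configuration:

```python
import boto3

elbv2 = boto3.client("elbv2")

# All IDs below (subnets, security group, VPC) are placeholders.
alb = elbv2.create_load_balancer(
    Name="example-alb",
    Type="application",  # layer 7 load balancer
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
)["LoadBalancers"][0]

# Default target group: web servers registered as EC2 instances.
web_tg = elbv2.create_target_group(
    Name="example-web-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",  # "ip" and "lambda" are also valid ALB target types
)["TargetGroups"][0]

# Second target group for an API microservice addressed by IP.
api_tg = elbv2.create_target_group(
    Name="example-api-tg",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)["TargetGroups"][0]

# HTTP listener; an HTTPS listener would add Certificates=[{"CertificateArn": ...}].
listener = elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": web_tg["TargetGroupArn"]}],
)["Listeners"][0]

# Layer 7 content switching: requests whose URL path matches /api/* go to
# the API target group instead of the default web servers.
elbv2.create_rule(
    ListenerArn=listener["ListenerArn"],
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": api_tg["TargetGroupArn"]}],
)
```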
Understand the details of ELB listeners. Listeners, target groups, and health checks are the basic load balancer configuration settings. Listeners are the entry point from the Internet, target groups define the backend services such as web servers, and health checks make sure the targets are healthy and able to receive connections. Sticky sessions, also referred to as session affinity, ensure that all connections from the same client are sent to the same target for the duration of the session.
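A boto3 sketch of the same building blocks with explicit health-check settings and sticky sessions enabled; the VPC ID, instance IDs, and /healthz path are illustrative assumptions:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group with explicit health-check settings; IDs and names are placeholders.
tg = elbv2.create_target_group(
    Name="example-web-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/healthz",  # hypothetical health endpoint on the web servers
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)["TargetGroups"][0]

# Register backend web servers (instance IDs are placeholders).
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0abc1234def567890"}, {"Id": "i-0fed9876cba543210"}],
)

# Enable sticky sessions (session affinity) using a load-balancer-generated cookie.
elbv2.modify_target_group_attributes(
    TargetGroupArn=tg["TargetGroupArn"],
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```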
Understand that Network Load Balancers are used for very high-performance use cases. Network Load Balancers operate at the transport layer, layer 4, of the OSI model. An NLB can handle millions of connections per second and is used in demanding and large implementations that benefit from high-throughput, low-latency connections. Targets can include EC2 instances, IP addresses, and containers such as ECS and EKS services; unlike the ALB, an NLB cannot use Lambda functions as targets. Both TCP and UDP connections are supported.
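A minimal boto3 sketch of a Network Load Balancer with a TCP listener; the subnet and VPC IDs are illustrative assumptions:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Network Load Balancer: layer 4; subnet IDs below are placeholders.
nlb = elbv2.create_load_balancer(
    Name="example-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)["LoadBalancers"][0]

# A TCP target group; UDP and TLS protocols are also supported at layer 4.
tg = elbv2.create_target_group(
    Name="example-tcp-tg",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",  # "ip" is also a valid NLB target type
)["TargetGroups"][0]

# Layer 4 listener: connections are forwarded to the targets without
# inspecting HTTP headers or URLs.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```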
Know API Gateway in detail and understand how all the features fit together. Know the differences between the REST and HTTP API types, what the WebSocket protocol is, and where it can be used.
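A minimal boto3 sketch creating one API of each type; the API names are illustrative, and the $request.body.action route selection expression is simply a common convention for WebSocket APIs:

```python
import boto3

apigw = boto3.client("apigatewayv2")

# HTTP API: the lighter-weight, lower-cost option for Lambda or HTTP backends.
http_api = apigw.create_api(
    Name="example-http-api",  # hypothetical name
    ProtocolType="HTTP",
)

# WebSocket API: persistent two-way connections; routes are selected from a
# field in the message body.
ws_api = apigw.create_api(
    Name="example-websocket-api",  # hypothetical name
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",
)

# REST APIs are created through the original API Gateway service and client.
rest_api = boto3.client("apigateway").create_rest_api(name="example-rest-api")
```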