For completeness, and to avoid any confusion if you come across the classic load balancer (CLB), it is briefly mentioned here. The classic load balancer was the first offering in the ELB family of load balancers from AWS. The CLB was deployed to balance traffic to EC2 instances before VPCs came into existence. The VPC architecture was introduced in 2009 and quickly became the dominant architecture in AWS. The original EC2-Classic instances predate VPCs, and the classic load balancer was used before the current ELB offerings were released.
In 2022 AWS retired the EC2-Classic platform, and classic load balancer deployments on it were retired along with it. Just remember that if you see anything related to the classic load balancer, it is a legacy offering and should not be used in new designs.
ELB nodes are placed in availability zones, and they scale within each AZ. If a node fails, AWS replaces it as part of the managed service. AWS also scales the nodes as required to meet your workloads and handles all ongoing maintenance of the underlying compute and the load balancer software.
When an ELB is created, a node is placed in each subnet you select, and a single DNS name is created for the load balancer. That DNS name resolves to the addresses of all the individual nodes, so incoming requests are evenly distributed across all of the load balancer nodes.
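As a quick illustration, the short Python sketch below resolves a load balancer DNS name and prints the node addresses it returns; the DNS name shown is a hypothetical placeholder, not a real endpoint.

```python
# A minimal sketch, assuming a hypothetical ALB DNS name; resolving it
# typically returns one node address per enabled Availability Zone.
import socket

elb_dns_name = "my-alb-1234567890.us-east-1.elb.amazonaws.com"  # hypothetical name

# getaddrinfo returns every address record behind the load balancer's DNS name
addresses = {
    info[4][0]
    for info in socket.getaddrinfo(elb_dns_name, 443, proto=socket.IPPROTO_TCP)
}
for ip in sorted(addresses):
    print(ip)  # one entry per load balancer node
```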
ELBs support both IPv4 and IPv6 addressing for flexibility in your designs. As we discuss in this chapter, the ELB family is integrated with many other AWS services to add features, flexibility, and capabilities. These include DNS through Route 53, monitoring with CloudTrail and CloudWatch, scalability with Auto Scaling groups, and security with the Web Application Firewall, IAM, and security groups.
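To show how a couple of these options come together, here is a minimal boto3 sketch that creates an internet-facing application load balancer with dualstack (IPv4 and IPv6) addressing and an attached security group; the subnet and security group IDs are placeholder assumptions.

```python
# A minimal sketch, assuming boto3 credentials and hypothetical subnet and
# security group IDs; it creates an internet-facing ALB with dualstack
# (IPv4 + IPv6) addressing.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

response = elbv2.create_load_balancer(
    Name="web-alb",                                   # hypothetical name
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # one subnet per AZ (hypothetical IDs)
    SecurityGroups=["sg-0123456789abcdef0"],          # hypothetical security group
    Scheme="internet-facing",
    Type="application",
    IpAddressType="dualstack",                        # IPv4 and IPv6
)
print(response["LoadBalancers"][0]["DNSName"])        # the load balancer's DNS name
```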
For workloads with high connection counts and throughput, where a massive number of connections is normal, the network load balancer is usually the best solution. The application load balancer operates at layer 7 and inspects the request payload to perform its functions. This offers a great amount of flexibility and capability, with the trade-off being less throughput than the network load balancer.
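As a rough illustration of that layer 7 processing, the following boto3 sketch adds a listener rule that routes requests based on the URL path; the listener and target group ARNs are hypothetical placeholders.

```python
# A minimal sketch, assuming an existing ALB listener and target group
# (hypothetical ARNs); it adds a layer 7 rule that routes requests by URL path.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "listener/app/web-alb/abc123/def456"          # hypothetical listener ARN
    ),
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "targetgroup/api-tg/1234567890abcdef"     # hypothetical target group ARN
        ),
    }],
)
```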
The gateway load balancer provides transparent, inline traffic flows that allow third-party virtual appliances to be inserted into the data path for functions such as data analytics, security, and intrusion detection/prevention services.
Elastic load balancers are an integral part of an AWS high availability design. Amazon offers the various ELB architectures as a fully managed service, which means that AWS is responsible for the underlying hardware and software. AWS deploys a great deal of redundancy and many recovery mechanisms to maximize uptime. This saves your organization from managing, upgrading, and configuring the load balancers themselves and enables you to focus on the operational side of deploying highly available load balancing designs.
Without a load balancer, traditional network designs used DNS to resolve domain names directly to the web servers sitting in a public subnet of the VPC. An alternative approach was to configure Route 53 with multivalue responses, as you learned in Chapter 3, “Hybrid and Multi-Account DNS.” In this case, a DNS query response would return the IP addresses of multiple servers and could optionally use health checks to ensure the servers were operational.
The client could then initiate a connection to any of the servers in the IP address pool that you configured using multivalue routing in Route 53.
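The sketch below shows roughly what such a record could look like when created with boto3; the hosted zone ID, health check ID, and server IP are hypothetical placeholders, and one multivalue record set would be created per server.

```python
# A minimal sketch, assuming a hypothetical hosted zone, health check, and
# server IP; each server gets its own multivalue answer record so Route 53
# can return several healthy IPs for a single query.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",          # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 60,
                "SetIdentifier": "web-server-1",        # one record set per server
                "MultiValueAnswer": True,
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # hypothetical
                "ResourceRecords": [{"Value": "203.0.113.10"}],           # server IP
            },
        }]
    },
)
```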
By combining ELB features with the other AWS services they integrate with, an AWS ELB architecture is highly resilient, fully featured, offers many design options, and supports automated recovery and failover scenarios.