Redis – Implementing Scalability and Elasticity – SOA-C02 Study Guide

Redis

The other engine supported by ElastiCache is Redis, a full-fledged in-memory database. Redis supports much richer data types, such as lists, sets, hashes, and geospatial data. It also has built-in publish/subscribe (pub/sub) messaging that can be used for high-performance messaging between services, such as chat applications. Finally, Redis has three operational modes that give you advanced resilience, scalability, and elasticity:
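To make these data types and the pub/sub feature concrete, here is a minimal sketch using the open-source redis-py client (version 4.x or later assumed); the endpoint hostname, key names, and values are placeholders, not real ElastiCache resources:

import redis

# Connect to the Redis endpoint (hostname is a placeholder for an
# ElastiCache primary endpoint).
r = redis.Redis(host="demo.abc123.use1.cache.amazonaws.com", port=6379)

# Hash: store a user profile as field/value pairs.
r.hset("user:1001", mapping={"name": "Alice", "plan": "premium"})

# List: push events onto a queue-like structure.
r.rpush("events", "login", "purchase")

# Geospatial: index a location as (longitude, latitude, member).
r.geoadd("stores", (-122.42, 37.77, "store:sf"))

# Pub/sub: publish a chat message to any subscribed clients.
r.publish("chat:lobby", "Hello from Redis pub/sub")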

Single node: A single Redis node that is not replicated and offers no scalability or resilience. It can be deployed only in a single availability zone; however, it does support backup and restore.

Multinode, cluster mode disabled: A primary read-write instance with up to five read replicas. The read replicas are updated asynchronously but typically lag the primary only slightly, and they offer resilience and read offloading for scalability. Multinode clusters with cluster mode disabled can be deployed in one or more availability zones.

Multinode, cluster mode enabled: One or more shards, each a multinode deployment with one primary and up to five read replicas. By enabling cluster mode, you retain the resilience and scalability but also add elasticity because you can always reshard the cluster and horizontally add more write capacity. Cluster mode always requires the database nodes to be deployed across multiple availability zones. Sharding means that multiple primary nodes are each responsible for a distinct subset of the data. Just like with database sharding, this increases write performance in ideal circumstances, but the gain is not guaranteed, and sharding adds complexity to an already complex solution. Figure 4.11 illustrates Redis multinode deployments with cluster mode disabled and enabled.

FIGURE 4.11 Redis multinode, cluster mode disabled vs. enabled
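To illustrate the elasticity that cluster mode adds, the following boto3 sketch creates a cluster-mode-enabled replication group with two shards and then reshards it online; the IDs, node type, and counts are placeholder values, and a real deployment would also specify a cluster-enabled parameter group and a subnet group:

import boto3

elasticache = boto3.client("elasticache")

# Cluster mode enabled: NumNodeGroups sets the shard count, and
# ReplicasPerNodeGroup adds read replicas to each shard.
elasticache.create_replication_group(
    ReplicationGroupId="demo-redis-cluster",      # placeholder ID
    ReplicationGroupDescription="Cluster mode enabled demo",
    Engine="redis",
    CacheNodeType="cache.r6g.large",              # placeholder node type
    NumNodeGroups=2,           # two shards, each with its own primary
    ReplicasPerNodeGroup=2,    # two read replicas per shard
    AutomaticFailoverEnabled=True,
)

# Resharding later (elasticity): scale out from two shards to three.
elasticache.modify_replication_group_shard_configuration(
    ReplicationGroupId="demo-redis-cluster",
    NodeGroupCount=3,
    ApplyImmediately=True,
)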

The features of Redis give you much more than just a caching service. Many businesses use ElastiCache for Redis as a fully functional in-memory database because the solution is highly resilient, scalable, and elastic, and it can even be backed up and restored, just like a traditional database.
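As a rough sketch of the backup and restore workflow, the following boto3 calls take a manual snapshot of a replication group and then seed a new replication group from it; all identifiers and names are placeholders:

import boto3

elasticache = boto3.client("elasticache")

# Take a manual backup of an existing replication group.
elasticache.create_snapshot(
    ReplicationGroupId="demo-redis",
    SnapshotName="demo-redis-backup",
)

# Restore later: seed a new replication group from the snapshot.
elasticache.create_replication_group(
    ReplicationGroupId="demo-redis-restored",
    ReplicationGroupDescription="Restored from backup",
    Engine="redis",
    CacheNodeType="cache.r6g.large",    # placeholder node type
    SnapshotName="demo-redis-backup",
)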

ExamAlert

If an exam question indicates that the database is overloaded with reads, caching should be the main strategy for scaling reads. The benefit of caching is that the data in the cache, when configured correctly, is highly likely to be current and consistent with the database. However, always be careful when scaling the caching cluster because too many caching instances can add unnecessary cost to the application.
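A common way to offload reads while keeping cached data close to the database state is the cache-aside pattern with a short TTL. The following is a minimal sketch assuming a redis-py client and a hypothetical fetch_user_from_db() helper standing in for the database query:

import json
import redis

r = redis.Redis(host="demo.example.cache.amazonaws.com", port=6379)

def get_user(user_id):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit, database untouched
    user = fetch_user_from_db(user_id)     # hypothetical DB lookup
    # A short TTL keeps the cached copy close to the database state.
    r.set(key, json.dumps(user), ex=60)
    return user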

Amazon CloudFront

Now that the database and server-side caching are sorted, you need to focus on edge caching. CloudFront is a global content delivery network (CDN) that can offload static content from the origin and deliver any cached content from a location that is geographically much closer to the user. This reduces response latency and makes the application feel as if it were deployed locally, no matter in which region or physical location on the globe the origin resides.

CloudFront can also terminate all types of HTTP and HTTPS connections at the edge, offloading that work from your servers or services. CloudFront also works in combination with the AWS Certificate Manager (ACM) service, which can issue free HTTPS certificates for your domain that are automatically renewed and replaced before they expire.
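As a sketch of what requesting such a certificate looks like with boto3, note that CloudFront accepts ACM certificates only from the us-east-1 region; the domain name below is a placeholder:

import boto3

# CloudFront requires ACM certificates to be issued in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="www.example.com",   # placeholder domain
    ValidationMethod="DNS",         # DNS validation enables auto-renewal
)
print(response["CertificateArn"])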

CloudFront enables you to configure it to accept and terminate the following groups of HTTP methods (a configuration sketch follows the list):

GET and HEAD: Enables standard caching for documents and headers. Useful for static websites.

GET, HEAD, and OPTIONS: Enables you to cache OPTIONS responses from an origin server.

GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE: Terminates all HTTP(S) sessions at the CloudFront edge location, which can improve the performance of both read and write requests.
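In a distribution's cache behavior (for example, in a boto3 create_distribution call), these method groups map to the AllowedMethods structure. The fragment below sketches just that portion of the configuration, not a complete DistributionConfig; note that only GET, HEAD, and optionally OPTIONS responses are ever cached, while the write methods are simply forwarded to the origin:

# Fragment of a DefaultCacheBehavior for boto3's
# cloudfront.create_distribution(); the remaining required fields of
# the DistributionConfig are omitted for brevity.
default_cache_behavior = {
    "AllowedMethods": {
        "Quantity": 7,
        "Items": ["GET", "HEAD", "OPTIONS", "PUT", "PATCH", "POST", "DELETE"],
        # Only read methods are cached; writes pass through to the origin.
        "CachedMethods": {
            "Quantity": 3,
            "Items": ["GET", "HEAD", "OPTIONS"],
        },
    },
    # ... other cache behavior settings omitted ...
}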

One of the more powerful features is the ability to override the caching settings, both for the cache held within the CloudFront CDN and for the client-side caching headers defined by the origin servers. This means that you can customize the time-to-live (TTL) of both the edge cache and the client-side cache. CloudFront distributions can be configured with the following TTL settings (a configuration sketch follows the list):

Min TTL: Required setting when all HTTP headers are forwarded from the origin server. It defines the minimum cache TTL and therefore the shortest interval at which CloudFront refreshes the data from the origin.

Max TTL: Optional setting. It defines the longest possible cache TTL and caps any cache-control headers defined at the origin server.

Default TTL: Optional setting. It defines the cache behavior for any content for which the origin server does not set a TTL.
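These three settings sit directly in a cache behavior when you use CloudFront's legacy cache settings (as opposed to cache policies). The following fragment sketches where they appear in a boto3-style configuration; the values are in seconds and are illustrative only:

# Cache behavior fragment showing the three TTL settings.
cache_behavior_ttls = {
    "MinTTL": 0,          # shortest time an object stays cached
    "DefaultTTL": 86400,  # used when the origin sends no caching headers
    "MaxTTL": 31536000,   # caps any Cache-Control/Expires from the origin
}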