On Amazon ElastiCache. Q&A with Jim Gallagher

“When you use a cache, you unlock the elasticity of these compute resources by allowing them to retrieve commonly accessed data as fast as possible, without having to maintain any local copies of this information.”

Q1. What is a distributed cache and what is it useful for? 

You can think of a distributed cache as a pool of shared memory accessed by compute resources (e.g. servers, containers, functions) over a network. When you use a cache, you unlock the elasticity of these compute resources by allowing them to retrieve commonly accessed data as fast as possible, without having to maintain any local copies of this information.

Ideally in distributed computing, your compute instances don’t maintain local state, as they are usually considered ephemeral resources. For example, Amazon EC2 instances may come and go as part of an Auto Scaling group, so intuitively these instances shouldn’t store any data at the instance level. The same holds for containers and functions, which come and go depending on load. It is recommended to store your data in a distributed data store outside of your compute instances.
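
As a minimal sketch of this idea (in Python with the redis-py client, against a hypothetical endpoint cache.example.com), an ephemeral worker can keep its state in the shared cache instead of in process memory:

```python
import redis

# Hypothetical endpoint; with ElastiCache this would be your cluster's
# configuration endpoint.
cache = redis.Redis(host="cache.example.com", port=6379, decode_responses=True)

def record_visit(user_id: str) -> int:
    # The counter lives in the shared cache, not in this process, so any
    # instance in the fleet (or its replacement) sees the same value.
    return cache.incr(f"visits:{user_id}")
```

Because no state lives on the instance, it can be terminated and replaced at any time without losing data.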

Q2. How does caching work? 

You typically use a cache in conjunction with another query-able resource like a database, API, or object store. These other resources either aren’t as fast as your cache or can’t sustain the same aggregate throughput. A cache optimizes for speed rather than capacity, so it’s typical to store only a subset of your overall data in the cache: your “hot data”, meaning your most frequently accessed data. By caching your hot data, you alleviate read pressure on your backend data store and improve your overall application performance, often by orders of magnitude.
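
As an illustration, here is a minimal cache-aside sketch in Python with redis-py; fetch_product_from_db is a hypothetical helper standing in for your backend query:

```python
import json
import redis

cache = redis.Redis(host="cache.example.com", port=6379, decode_responses=True)

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                # cache hit: served from memory
    product = fetch_product_from_db(product_id)  # cache miss: query the backend
    # Store the hot item with a TTL so stale entries age out naturally.
    cache.set(key, json.dumps(product), ex=300)
    return product
```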

Q3. What are the key differences between in-memory data stores and disk-based databases? 

The primary differences between in-memory data stores and disk-based databases are performance and scale. By definition, computer memory is faster than disk, so querying an in-memory data store will offer lower latency and higher throughput than a disk-based database. However, disk capacity is much less expensive than memory, so developers who prioritize speed and scale over data capacity and cost might opt for an in-memory data store.

Q4. With the explosive growth of business-critical, real-time applications, performance is one of the top considerations for companies across industries. What do you recommend to use to improve performance? 

There are many approaches to optimizing performance across all levels of an application stack, so it is important to take a holistic approach and know the drivers of end-to-end latency.

It is commonly recognized that databases can be a bottleneck and key contributor to overall latency across your applications. Since in-memory data stores offer the fastest performance, you should consider using an in-memory cache for use cases that require a real-time experience. At AWS, we offer two fully managed, in-memory data storage services. Amazon ElastiCache is a fully managed caching service that unlocks microsecond latency. ElastiCache is compatible with both Redis and Memcached, two popular open source in-memory engines. Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service. 

Q5. Can you explain how to improve application responsiveness by reducing latency and increasing throughput using Amazon ElastiCache? 

There are two common approaches here: either insert ElastiCache into your architecture as a caching layer or leverage ElastiCache as the primary data store. 

You can also use the two approaches in conjunction with each other, depending on your use cases.

For example, imagine an e-commerce platform with a fully rendered landing page that is refreshed daily. Without a cache, your website would make many calls to a backend database just to load today’s trending products, news stories, and so on. Using a cache for this frequently requested data reduces pressure on your database and lets your website load faster for visitors. You could also use ElastiCache to store session information for all users, which typically includes login credentials, browsing history, and shopping cart contents, without storing this data in your backend database. In this example we are using both approaches: caching and primary data store.
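
A sketch of the session-store half of that example, using a Redis hash with a TTL so idle sessions expire on their own (key names and fields are illustrative):

```python
import redis

cache = redis.Redis(host="cache.example.com", port=6379, decode_responses=True)

SESSION_TTL = 1800  # expire idle sessions after 30 minutes

def save_session(session_id: str, user_id: str, cart_json: str) -> None:
    key = f"session:{session_id}"
    # The hash in the cache is the only copy of this session; nothing is
    # written to the backend database.
    cache.hset(key, mapping={"user_id": user_id, "cart": cart_json})
    cache.expire(key, SESSION_TTL)

def load_session(session_id: str) -> dict:
    return cache.hgetall(f"session:{session_id}")
```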

Q6. The most common ElastiCache use case is as a caching layer on top of a primary database, like Amazon RDS, Amazon Aurora, or Amazon DynamoDB. When is it appropriate to use a caching layer on top of a primary database, and when is it better to use the primary database directly? 

We recommend choosing the right tool for the job. For the lowest-latency, highest-throughput workloads, ElastiCache is the best tool, so the question becomes a business decision: is ElastiCache the best-fit tool for my specific use case? The benefits of caching with ElastiCache include alleviating pressure on an otherwise maximally scaled database, or reducing the number of read replicas needed.

We also hear from customers that implementing ElastiCache has a tremendous impact on application performance, and in turn on the business. A number of studies, such as Akamai’s 2017 research, show that users respond negatively to latency, which has real effects on bottom-line business metrics like conversion rates. Caching data for performance becomes imperative to keep customers engaged. 

You also have to consider the costs of implementing a caching layer. While ElastiCache as a managed service greatly alleviates the burden of running a caching layer, there is still the commercial cost of the cache nodes and the complexity cost of managing another component in your application stack. Additionally, code changes are usually required at the application level to implement caching patterns. These costs need to be weighed against the benefits of adding caching. 

Another aspect to consider is the overall data access pattern. Caching typically has the biggest impact on read-heavy workloads, where the underlying data doesn’t change very often. For write-heavy use cases, with data that changes frequently, caching may not have a significant impact on overall application performance. 
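
One reason writes dilute the benefit is that every write has to touch both stores. A minimal write-path sketch continuing the cache-aside example above (update_price_in_db is a hypothetical database helper):

```python
import redis

cache = redis.Redis(host="cache.example.com", port=6379, decode_responses=True)

def update_product_price(product_id: str, new_price: float) -> None:
    update_price_in_db(product_id, new_price)  # hypothetical backend write
    # Invalidate the cached copy so the next read repopulates it. In a
    # write-heavy workload, entries are evicted faster than reads can
    # benefit from them.
    cache.delete(f"product:{product_id}")
```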

By weighing your performance requirements, bottom-line goals, costs, and data access patterns, you can determine if ElastiCache is the best tool for your application. 

Q7. Does ElastiCache work for relational as well as NoSQL databases and data warehouses? 

Yes! You can use ElastiCache with anything that is query-able, as long as the results can be represented as a binary-safe string and stored in ElastiCache. It is important to note that access to ElastiCache is mediated by the application; there is no direct connection between the cache and any other data store. This unlocks the flexibility to cache many disparate data stores in one common cache. 
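
For example, a query result from a relational database can be serialized to a string and cached, with the application sitting between the two stores. A sketch using sqlite3 as a stand-in for any query-able backend:

```python
import json
import sqlite3
import redis

cache = redis.Redis(host="cache.example.com", port=6379, decode_responses=True)
db = sqlite3.connect("app.db")  # stands in for any query-able data store

def top_products(limit: int = 10) -> list:
    key = f"query:top_products:{limit}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = db.execute(
        "SELECT id, name FROM products ORDER BY sales DESC LIMIT ?", (limit,)
    ).fetchall()
    # The result set becomes a binary-safe string; the cache never talks
    # to the database directly, the application mediates both sides.
    cache.set(key, json.dumps(rows), ex=600)
    return rows
```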

Q8. Talking about relational databases: when it comes to scalability and low latency, there’s only so much you can do to improve performance. What is the most effective strategy for coping with that limit? 

There are a few dimensions to consider when scaling relational databases. Examples include scaling up compute resources or allocating more memory. Depending on your queries, you can optimize further with query-tuning techniques like hints and indexes. You can also improve storage access, for example with faster disks, or scale reads across read replicas. In-memory databases let you retrieve data with lower latency because they avoid storage access limitations altogether.

As we’ve discussed, you can’t overcome the laws of physics: disk is slower than RAM, and in-memory databases let you control what’s stored in memory at all times. We recently presented a webinar on this topic, covering how caching works, the differences between in-memory data stores and disk-based databases, and how caching can turbocharge your workloads. Check it out here.

Q9. Let’s consider in-memory data stores. Why use ElastiCache instead of self-managed Redis? 

Our customers have shared some key reasons why they adopt ElastiCache over self-managing their Redis workloads. The first is ease of management: running your own open source Redis environment is undifferentiated heavy lifting, and customers tell us it is difficult to deploy, manage, and scale Redis themselves. With ElastiCache, you get ease of deployment, four-way scaling (in/out/up/down), automatic backups, integrated Amazon CloudWatch monitoring, and more. Additionally, ElastiCache for Redis has enhancements that are not available in open source Redis, including Enhanced I/O and support for Graviton2-powered instances.

Q10. What are microservices and what are they useful for? How do you use Amazon ElastiCache for Redis in a scalable microservices architecture? 

Microservices are a modern software architecture pattern in which application components are broken out into loosely coupled services, whereas in a traditional “monolithic” approach all components are integrated into a single unit. Microservices are useful for a number of reasons, chiefly that they accelerate delivery of application improvements and free developers to use the right tool for the job for each use case.

Think of our previous e-commerce example. With a monolithic approach, this application might be delivered with a typical three-tier architecture: 

  1. Web servers
  2. Application servers
  3. A single, common relational database that has multiple tables depending on the functions of the application (e.g. orders, products, customers, etc.).

In a microservices approach, each function of the app may have its own “microstack” of components. For example, you may choose a document database for the product catalog and a graph database for orders to help build a recommendation engine. ElastiCache for Redis, with its ten built-in data structures, unlocks a variety of use cases where the microservice in question needs low latency and/or high throughput.

Take this example: recording “clickstreams” is a very common pattern across web applications. In this scenario, you log all user activity across the application. In a monolithic approach, where all functions of the application share a common relational database, we might have a dedicated table to store this information. However, in a high-volume scenario (say, 1 million concurrent users), a relational database may struggle to handle the write throughput, and things get even worse when a traffic spike arrives that the database is not provisioned for. In a microservices approach with ElastiCache, you could instead leverage Redis Streams for recording clickstreams. ElastiCache can dynamically scale to handle millions of requests per second, leaving your relational database free to process other requests such as payment processing. 
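
A minimal sketch of that clickstream pattern with Redis Streams via redis-py (stream name and fields are illustrative):

```python
import redis

cache = redis.Redis(host="cache.example.com", port=6379, decode_responses=True)

def record_click(user_id: str, page: str) -> None:
    # XADD appends an entry to the stream; Redis assigns an increasing ID.
    cache.xadd("clickstream", {"user": user_id, "page": page})

def consume_clicks(last_id: str = "0-0"):
    # A downstream consumer reads new entries in batches, leaving the
    # relational database free for transactional work like payments.
    return cache.xread({"clickstream": last_id}, count=100, block=1000)
```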

It all comes back to the maxim of using the right tool for the job, which microservices as an architecture pattern enable by allowing development teams to pick the most appropriate technologies for their workloads. 

……………………………………………….

Jim Gallagher is an Amazon ElastiCache Specialist Solutions Architect based in Austin, TX. He helps AWS customers across the world best leverage the power, simplicity, and beauty of Redis. Outside of work he enjoys exploring the Texas Hill Country with his wife and son.

Sponsored by AWS
