In-Memory is Everywhere
By Gagan Mehra, Software AG.
The last few years have seen drastic changes in the in-memory market, with almost every software vendor incorporating in-memory technology, in some capacity, into its offerings. The two key market drivers behind this change are the growing importance of processing high-velocity, high-volume data at low latency and the dropping price of RAM.
But just adding in-memory capability does not make everything run faster. There is a set of features that need to be implemented correctly to get the full value from in-memory.
1. Ability to Scale
In-memory is like a harmless addictive drug: once a business starts seeing the benefits, it wants more. That means the data volumes maintained in-memory are bound to go up, so the ability to scale is critical. This means not just scaling up to utilize all the RAM on a single server but also scaling out across servers, to support petabyte-scale in-memory stores if the business requires them.
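Scaling out means each key must deterministically land on one server in the cluster. A common way to do this is consistent hashing; the sketch below is illustrative (the server names and `HashRing` API are made up, not from any specific product):

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Maps keys to servers via consistent hashing (illustrative sketch)."""

    def __init__(self, nodes, vnodes=64):
        # Place several virtual points per node on the ring so keys
        # spread evenly across physical servers.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual point at or after the key.
        idx = bisect_right(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["server-a", "server-b", "server-c"])
owner = ring.node_for("customer:42")  # always the same server for this key
```

Because only the keys near a node's virtual points move when a node is added or removed, the cluster can grow without reshuffling the entire data set.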
2. Support for High Availability
Just because data is maintained in-memory, the requirement to be highly available does not go away. In fact, the leading in-memory offerings are used in mission-critical environments that need to be up 24/7. An in-memory system needs failover built in to avoid any impact on the end user. It also needs to persist data to disk in case the data in memory is lost to a server crash. Additionally, if the business runs multiple data centers, the system should automatically synchronize data between its instances across data centers, to support disaster recovery.
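The two mechanisms described above can be sketched together: a synchronous replica that takes over on failover, plus a write-through disk snapshot for recovery when all in-memory copies are lost. This is a minimal illustration, not a real product's design; the class and method names are hypothetical:

```python
import json
import os
import tempfile

class ReplicatedStore:
    """Sketch of HA: synchronous replication plus write-through persistence."""

    def __init__(self, snapshot_path):
        self.snapshot_path = snapshot_path
        self.primary = {}  # in-memory data on the active node
        self.replica = {}  # hot-standby copy on a second node

    def put(self, key, value):
        self.primary[key] = value
        self.replica[key] = value  # synchronous replication
        self._persist()            # write-through to disk

    def _persist(self):
        with open(self.snapshot_path, "w") as f:
            json.dump(self.primary, f)

    def fail_over(self):
        # Primary crashed: the standby copy takes over with no data loss.
        self.primary = self.replica

    def recover_from_disk(self):
        # Both in-memory copies lost: reload the last snapshot from disk.
        with open(self.snapshot_path) as f:
            self.primary = json.load(f)

path = os.path.join(tempfile.gettempdir(), "store-snapshot.json")
store = ReplicatedStore(path)
store.put("order:1", "shipped")
store.fail_over()          # replica serves with no interruption
store.recover_from_disk()  # disk snapshot survives a full crash
```

A production system would replicate over the network and write snapshots asynchronously or via a transaction log, but the division of labor is the same: replicas for availability, disk for durability.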
3. Predictable Latency
If an in-memory solution can scale but cannot maintain low latency as the data volume grows, it will not succeed. The solution needs to be designed to keep latency at the same level regardless of scale. This is done by distributing the data load across the servers in the cluster and rebalancing that load on a continuous basis, so the user experience is never affected.
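One way to picture continuous rebalancing: the data is split into fixed partitions, and when the cluster changes, partitions migrate until no node holds more than its fair share, keeping per-node load (and thus latency) flat. The partition counts and node names below are made up for illustration:

```python
from collections import defaultdict

def rebalance(assignment, nodes):
    """Move partitions until no node holds more than ceil(total/nodes)."""
    parts_per_node = defaultdict(list)
    for part, node in assignment.items():
        parts_per_node[node].append(part)
    for node in nodes:
        parts_per_node.setdefault(node, [])
    target = -(-len(assignment) // len(nodes))  # ceiling division
    donors = [n for n in nodes if len(parts_per_node[n]) > target]
    for donor in donors:
        while len(parts_per_node[donor]) > target:
            # Hand a partition to the currently least-loaded node.
            receiver = min(nodes, key=lambda n: len(parts_per_node[n]))
            part = parts_per_node[donor].pop()
            parts_per_node[receiver].append(part)
            assignment[part] = receiver
    return assignment

# Eight partitions on two nodes; a third node joins and takes its share.
assignment = {p: ("node-1" if p % 2 else "node-2") for p in range(8)}
rebalance(assignment, ["node-1", "node-2", "node-3"])
```

After rebalancing, the three nodes hold three, three, and two partitions, so no single server becomes a hotspot as the cluster grows.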
4. Expose Interfaces for Other Applications
Once an in-memory solution is deployed and working well, the natural instinct is to make it available to other applications so they can reduce their latency too. This makes sense when different applications use the same data sets maintained in-memory, or when another application wants to store additional data in the same instance. An API that allows easy connectivity, or that ships with connectors for the applications most commonly seen in customer environments, is highly useful.
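A thin client API is enough to let several applications share one deployment. The sketch below uses a namespaced facade; the class, the namespaces, and the shared-dict stand-in for the networked cluster are all hypothetical:

```python
class InMemoryClient:
    """Facade other applications use to read/write one shared store."""

    _shared_store = {}  # stands in for the networked in-memory cluster

    def __init__(self, namespace):
        # Each application gets its own namespace over the same store,
        # so a single deployment can serve many consumers safely.
        self.namespace = namespace

    def put(self, key, value):
        InMemoryClient._shared_store[f"{self.namespace}:{key}"] = value

    def get(self, key):
        return InMemoryClient._shared_store.get(f"{self.namespace}:{key}")

# Two applications reusing the same in-memory deployment.
billing = InMemoryClient("billing")
reporting = InMemoryClient("reporting")
billing.put("invoice:7", "paid")
reporting.put("invoice:7", "2024-Q1")  # no collision across namespaces
```

The prefixing keeps each application's keys isolated while both still benefit from the single low-latency store.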
5. Low Total Cost of Ownership
RAM has become significantly cheaper but is still more expensive than disk. An in-memory solution needs to make optimal use of RAM, i.e. it should use the least amount of RAM for its internal functions and leave the rest for customer applications. For applications that do not require the lowest possible latency, it should allow data to be stored across RAM and SSD, and move it seamlessly between the two tiers based on application requirements. All of this should be possible on commodity hardware. Lastly, it should have low operational costs: once deployed in a production environment, it should just run, without requiring a large operations team or frequent software maintenance. Only with this low total cost of ownership will an in-memory solution be considered successful in an enterprise environment.
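The RAM/SSD tiering described above can be sketched as an LRU policy: the hottest entries stay in RAM and the least-recently-used ones are demoted to the cheaper tier. In this illustrative sketch the SSD tier is simulated with a plain dict, and the capacity is made tiny so the demotion is visible:

```python
from collections import OrderedDict

class TieredStore:
    """Sketch of RAM/SSD tiering: hot data in RAM, cold data demoted."""

    def __init__(self, ram_capacity):
        self.ram = OrderedDict()  # fast tier, kept in LRU order
        self.ssd = {}             # slower, cheaper tier (simulated)
        self.ram_capacity = ram_capacity

    def put(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_capacity:
            # Demote the least-recently-used entry to the SSD tier.
            cold_key, cold_value = self.ram.popitem(last=False)
            self.ssd[cold_key] = cold_value

    def get(self, key):
        if key in self.ram:
            self.ram.move_to_end(key)  # keep recently used data hot
            return self.ram[key]
        if key in self.ssd:
            value = self.ssd.pop(key)  # promote back to RAM on access
            self.put(key, value)
            return value
        return None

store = TieredStore(ram_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
# "a" was least recently used, so it now lives on the SSD tier.
```

The application still sees one `get`/`put` interface; the store decides per entry which tier the data lives on, trading a little latency on cold reads for a much lower hardware bill.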
What are your thoughts on in-memory capability?