Major corporate initiatives, including digital business transformation, omnichannel marketing and real-time regulatory compliance, take a variety of forms, including web-scale applications, IoT projects and mobile apps. Each of these activities requires organizations to support real-time speed and massive scalability for mission-critical applications. Many of these organizations have turned to in-memory computing (IMC) to solve their speed and scale challenges.

As a result, IMC is seeing surging adoption. We are also seeing an important evolution in IMC technology — the memory-centric architecture — which offers greater flexibility and improved ROI for a range of data-intensive use cases and has the potential to further accelerate the adoption of IMC.

A Brief History of Database Technology


The limitations of disk-based platforms became evident decades ago. Running analytical queries against a transactional database could severely degrade its performance. As a result, separate online analytical processing (OLAP) databases were developed, and data from online transaction processing (OLTP) databases had to be moved into them through a periodic extract, transform and load (ETL) process.


Over the last five years, companies have launched digital transformation initiatives, deployed omnichannel marketing solutions and responded to real-time regulatory requirements. They need to analyze and respond in real time to the opportunities and challenges facing them. However, real-time decision making is not achievable with the delays inherent in ETL processes. As a result, a need emerged for IMC solutions that support hybrid transactional/analytical processing (HTAP): running real-time analytics directly against the operational data set.
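To make the HTAP idea concrete, here is a minimal sketch using Apache Ignite, the open source project the GridGain platform is based on. The cache name, the Trade class and its fields are hypothetical, and exact APIs vary by Ignite version; the point is simply that transactional writes and an analytical SQL query operate on the same live data set, with no ETL step in between.

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class HtapSketch {
    /** Hypothetical operational record; annotated fields become SQL columns. */
    public static class Trade {
        @QuerySqlField(index = true)
        private String symbol;

        @QuerySqlField
        private double amount;

        public Trade(String symbol, double amount) {
            this.symbol = symbol;
            this.amount = amount;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // One cache serves both the transactional writes and the analytics.
            CacheConfiguration<Long, Trade> cfg = new CacheConfiguration<>("trades");
            cfg.setIndexedTypes(Long.class, Trade.class);
            IgniteCache<Long, Trade> trades = ignite.getOrCreateCache(cfg);

            // Operational (OLTP-style) writes land directly in the cache.
            trades.put(1L, new Trade("ACME", 125.0));
            trades.put(2L, new Trade("ACME", 75.5));

            // Analytical (OLAP-style) query runs against the same operational data set.
            List<List<?>> totals = trades.query(
                new SqlFieldsQuery("SELECT symbol, SUM(amount) FROM Trade GROUP BY symbol")).getAll();

            totals.forEach(row -> System.out.println(row.get(0) + " -> " + row.get(1)));
        }
    }
}
```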

When servers were slower and RAM was expensive, the ability to eliminate disk-access latency by caching and processing data in RAM was severely limited. Distributed processing solutions, such as in-memory data grids deployed across clusters of commodity servers, were used to scale out the available RAM and CPU power, but the cost of RAM remained a barrier. More recently, the cost of RAM has continued to fall, and 64-bit processors and modern APIs allow in-memory data grids to be integrated easily with existing application and data layers, delivering in-memory speed along with massive scalability and high availability. Separately, new in-memory databases were developed that can completely replace existing disk-based databases. While these developments were positive steps forward, they created a complex and fragmented IMC market.

Over the last few years, in-memory computing platforms have emerged, bringing together in-memory data grids, in-memory databases, streaming analytics, machine learning, and ACID transaction and ANSI-99 SQL support in a single, integrated platform. In-memory computing platforms are easier to deploy and use than point solutions offering only one product capability. These platforms have driven down implementation and operating costs and made it dramatically simpler to speed up and scale out existing applications and build new memory-driven applications for use cases in financial services, fintech, IoT, software, SaaS, retail, healthcare and more.
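As an illustration of the transactional side of such a platform, the sketch below uses Apache Ignite's transaction API to update two hypothetical account balances atomically. The cache, account keys and amounts are invented for the example, and API details may differ between versions.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class AcidSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // The cache must be transactional to get ACID semantics.
            CacheConfiguration<String, Double> cfg = new CacheConfiguration<>("accounts");
            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<String, Double> accounts = ignite.getOrCreateCache(cfg);

            accounts.put("alice", 500.0);
            accounts.put("bob", 100.0);

            // Both updates commit together; closing without commit rolls back.
            try (Transaction tx = ignite.transactions().txStart()) {
                double amount = 50.0;
                accounts.put("alice", accounts.get("alice") - amount);
                accounts.put("bob", accounts.get("bob") + amount);
                tx.commit();
            }
        }
    }
}
```

Because the cache is configured as transactional, either both balance updates become visible or neither does, which is the property banking use cases such as the ones described below depend on.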

Further, open source solutions have made IMC platforms more accessible and affordable, which also makes HTAP more practical for a wider variety of use cases. Today, the maturity and reliability of IMC platforms enable them to serve as the system of record in production environments for a wide range of use cases, including banking and investment management.

IMC in Action

Workday, the leading enterprise cloud solution for financials and HR, has approximately 1,800 customers, including a third of the Fortune 500 and a third of the Fortune 50, with about 26 million workers under management. A SaaS provider, Workday uses its in-memory computing platform to process about 189 million transactions per day, with peaks of about 289 million per day. For comparison, Twitter handles about 500 million tweets per day.


Sberbank, Russia’s largest bank, faced a significant digital transformation challenge. The bank needed to support 24/7 online and mobile banking, store and process 1.5 petabytes of data in real time, and handle thousands of transactions per second from 135 million customers. It also required high availability and ACID transaction support to ensure every monetary transaction would be accurately tracked. IMC enabled Sberbank to develop a new web-scale architecture. Its 2,000-node IMC platform, which can handle up to 1.5 petabytes of data, will be on par with the largest supercomputers in the world, with more data storage capacity and comparable compute power.

Wellington Management has more than $1 trillion in client assets under management and offers a broad range of investment approaches. The firm’s investment book of record (IBOR) is the single source of truth for investor positions, exposure, valuations and performance. This means all real-time trading transactions, all related account activity, and all related back-office activity flow through the IBOR in near real time. In addition to handling these transactions, the IBOR is a key source of analytics that support performance analysis, risk assessments, regulatory compliance and more.

To design a system that could handle transactions and analytics while controlling costs, Wellington deployed an IMC platform that supports a memory-centric architecture. The platform offers unlimited horizontal scalability, allows full SQL queries, and supports HTAP. In various tests, the new platform performed at least 10 times faster than the legacy Oracle database. To control costs, Wellington is planning to use the “persistent store” feature of its IMC platform to store older data on SSDs. This strategy will save money but still enable the firm to satisfy its aggressive SLAs for newer data.

Memory-Centric Architectures

One important limitation of many IMC solutions is that all data must fit in memory. Because memory is still more expensive than disk, many companies choose not to put the entire data set in memory and instead persist the full data set to disk. Memory-centric architectures address this by supporting other memory and storage types, such as solid-state drives (SSDs), Flash memory, 3D XPoint and even spinning disks. These architectures are designed to be "memory first": the most important or most recent data resides both on disk and in memory to provide in-memory speeds, but the data set can exceed the amount of available RAM. The entire data set resides on disk, and the system can process data either in memory or on the underlying disk store while still delivering strong performance.
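A rough sketch of what a memory-centric configuration can look like, again using Apache Ignite with hypothetical sizes and paths: the in-memory data region is capped well below the expected data set, persistence is enabled, and data that does not fit in RAM remains on disk yet stays fully queryable.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class MemoryCentricSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Keep, say, 16 GB of the data set in RAM; the full data set lives on disk.
        storageCfg.getDefaultDataRegionConfiguration()
            .setMaxSize(16L * 1024 * 1024 * 1024)
            .setPersistenceEnabled(true);

        // Hypothetical SSD mount point for the persistent store.
        storageCfg.setStoragePath("/mnt/ssd/ignite/storage");

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence enabled, the cluster must be activated before use
            // (older versions use ignite.cluster().active(true)).
            ignite.cluster().state(ClusterState.ACTIVE);
        }
    }
}
```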


It is important not to confuse this with simply caching disk-based data in memory. A memory-centric architecture gives companies far more flexibility and control to balance performance and cost. Because the data set can exceed the amount of memory, data placement can be optimized: all data resides on disk, higher-demand, higher-value data also resides in memory, and low-demand, low-value data resides only on disk. This strategy, available only with memory-centric architectures, can deliver optimal performance while minimizing infrastructure costs.
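One way to express that kind of tiering, sketched here with Apache Ignite and hypothetical region names, sizes and cache names, is to define separate data regions and assign each cache to the region that matches its access pattern.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TieredRegionsSketch {
    public static void main(String[] args) {
        // "Hot" region: generous RAM for high-demand data, persisted for durability.
        DataRegionConfiguration hotRegion = new DataRegionConfiguration()
            .setName("hotRegion")
            .setMaxSize(32L * 1024 * 1024 * 1024)
            .setPersistenceEnabled(true);

        // "Cold" region: minimal RAM, so low-demand data is served mostly from disk.
        DataRegionConfiguration coldRegion = new DataRegionConfiguration()
            .setName("coldRegion")
            .setMaxSize(1L * 1024 * 1024 * 1024)
            .setPersistenceEnabled(true);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDataRegionConfigurations(hotRegion, coldRegion);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().state(ClusterState.ACTIVE);

            // Recent, frequently queried data is assigned to the hot region ...
            ignite.getOrCreateCache(
                new CacheConfiguration<Long, String>("recentPositions").setDataRegionName("hotRegion"));

            // ... while historical records are assigned to the cold region.
            ignite.getOrCreateCache(
                new CacheConfiguration<Long, String>("historicalPositions").setDataRegionName("coldRegion"));
        }
    }
}
```

With this layout, the hot region keeps high-demand data in RAM, while the cold region is sized small enough that low-demand data is served mostly from the persistent store.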

Another key advantage of a memory-centric architecture is that it eliminates the need to wait for all the data to be reloaded into RAM after a reboot. That delay, which can take hours depending on the size of the data set and the speed of the network, can easily violate SLAs. The ability to process data from disk while the system warms up and memory is reloaded enables fast recovery: initial performance will be similar to that of a disk-based system, but it quickly improves as data is reloaded into memory and the system can once again perform all or most operations at in-memory speeds.

The In-Memory Future

In-memory computing is solving the real-time speed and massive scalability requirements resulting from digital transformation, omnichannel marketing, and demands for real-time regulatory compliance. These complex corporate initiatives may take the form of web-scale applications, IoT projects, or mobile apps. IMC platforms bring together in-memory data grids, in-memory databases, and streaming analytics into a single, unified platform that helps reduce development and operational costs through standardization on a common computing platform.

A memory-centric architecture can deliver the speed and scalability benefits of IMC but with improved economics. It can be used to balance cost and performance, ensure availability in the face of rapid growth, and accelerate recovery in the event of a system crash. As the demand for greater speed and scale becomes imperative at your organization, a memory-centric architecture may be the key for cost-effectively moving toward an in-memory future.

Feature image via Pixabay.