Benefits of in-memory database platforms


The old database stack, known as traditional computing, is built on disk storage and consists mainly of relational databases, along with first-generation in-memory databases.

Developers and system integrators spend substantial development time on optimising databases, constructing indexes and aggregates, designing cubes and star schemas, data modeling, and query analysis.

With the traditional system, accessing data simply takes too much time. Let's say you run an online store: you want to suggest products the customer might like while they are still shopping, not days after they have left the page.

With the traditional system, given the time it takes to access and process the data, doing so would be very complex and hardware-intensive.

Why the in-memory platform is disruptive

Hardware has evolved at a steady pace, shrinking in size and shortening the distances signals have to travel, and in doing so it has consistently delivered higher performance.

When you look at software, the basic principles have remained the same for decades. As with everything else, however, the software industry is making improvements.


In-memory platforms with in-memory databases are emerging and becoming the new standard.

In-memory computing, unlike traditional computing, is a technology that keeps both the application and the database in memory.

Accessing data stored in memory is much faster, by some measures up to 10,000 times faster than with the traditional system. This also minimises the need for performance tuning and maintenance by developers and system integrators, and provides a much faster experience for the end user.
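
To make the gap tangible, here is a rough, illustrative Python sketch (not from the article, and single-shot timings are noisy): it times one lookup in an in-memory dictionary against a scan of the same records on disk. The data and file name are hypothetical, and the exact ratio will vary with hardware and operating-system caching.

```python
import os
import tempfile
import time

# In-memory "table": 100,000 records held in a Python dict (hypothetical data).
records = {i: f"value-{i}" for i in range(100_000)}

# Write the same records to a file to stand in for disk-based storage.
path = os.path.join(tempfile.gettempdir(), "records.txt")
with open(path, "w") as f:
    for key, value in records.items():
        f.write(f"{key}\t{value}\n")

# Time a single lookup in RAM.
start = time.perf_counter()
_ = records[99_999]
ram_us = (time.perf_counter() - start) * 1e6

# Time finding the same record by scanning the file on disk.
start = time.perf_counter()
with open(path) as f:
    for line in f:
        if line.startswith("99999\t"):
            break
disk_us = (time.perf_counter() - start) * 1e6

print(f"RAM lookup:  {ram_us:8.1f} microseconds")
print(f"Disk lookup: {disk_us:8.1f} microseconds")
```

Even when the operating system caches the file, the in-memory lookup typically wins by several orders of magnitude.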

In-memory computing allows data to be analysed in real time, enabling real-time reporting and decision-making for businesses.

According to Gartner, deploying business intelligence tools on the traditional system can take as much as 17 months, and many vendors therefore choose in-memory technology to speed up implementation.

Since in-memory databases utilise the server's main memory as their primary storage location, the gains go beyond speed: size and cost are also significantly reduced.

Traditional systems keep a lot of redundant data, as a copy of the data has to be created for each component added to the system, such as an additional database, server, integrator or piece of middleware brought in to increase volume or performance.


Every component you add makes the system more complex. By continuously adding hardware you get a never-ending hardware cost, a growing need for physical space to house that hardware, and continuous integration and maintenance work.

The more hardware you add, the more copies of the data are created and the further the data has to travel, which over time results in decreased performance.

This creates a slippery slope of falling performance, added hardware and rising cost. The in-memory system, since data is stored in memory, involves a single data transfer and does not share the traditional system's signalling problem and the performance loss that comes with it.

Because of this, the system would be able to handle everything with one server, where the traditional system would have required 100 servers and databases.

This in turn reduces hardware costs and the need for integration and maintenance. In-memory databases are designed from the start to be more streamlined, with reduced memory consumption and fewer CPU cycles as explicit optimisation goals.

In-memory databases also offer persistence and the ability to survive a disruption to their hardware or software environment. This is possible thanks to techniques known as:

Transaction logging

Here, periodic snapshots of the in-memory database (called "save points") are written to non-volatile media. If the system fails and must be restarted, the database either "rolls back" to the last completed transaction, or "rolls forward" to complete any transaction that was in progress when the system went down.
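
As a minimal sketch of the mechanism, assuming nothing about any particular product, the Python below appends each committed change to a durable log, writes periodic save points, and on restart reloads the last save point and rolls forward through the log. File names and the record format are hypothetical.

```python
import json
import os

LOG = "txn.log"              # hypothetical append-only transaction log
SNAPSHOT = "savepoint.json"  # hypothetical save-point file

db = {}  # the in-memory database

def commit(key, value):
    """Write the change to the durable log first, then apply it in memory."""
    with open(LOG, "a") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())  # make sure the entry reaches non-volatile media
    db[key] = value

def save_point():
    """Snapshot the whole database and truncate the log it supersedes."""
    with open(SNAPSHOT, "w") as f:
        json.dump(db, f)
    open(LOG, "w").close()

def recover():
    """Reload the last save point, then roll forward through the log."""
    db.clear()
    if os.path.exists(SNAPSHOT):
        with open(SNAPSHOT) as f:
            db.update(json.load(f))
    if os.path.exists(LOG):
        with open(LOG) as f:
            for line in f:
                entry = json.loads(line)
                db[entry["key"]] = entry["value"]

commit("order:1", "paid")
save_point()
commit("order:2", "shipped")
db.clear()  # simulate a crash wiping main memory
recover()
print(db)   # {'order:1': 'paid', 'order:2': 'shipped'}
```

A real product would batch log writes and compress its snapshots, but the save-point-plus-roll-forward logic is the core of the technique.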


Database replication

Here, one or more copies of the database are maintained, which allows the system to fail over to a standby database if the primary is disrupted. The "master" and replica databases can be maintained by separate processes or threads within the same hardware instance.
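
A minimal sketch of that same-instance case, with hypothetical names: a master copy ships every write over a queue to a replica maintained by a second thread, so the standby can keep serving data if the master copy is lost.

```python
import queue
import threading

# Hypothetical same-instance replication: the master copy ships every
# write over a queue to a replica maintained by a second thread.
master = {}
replica = {}
ship = queue.Queue()

def replica_worker():
    """Apply shipped writes to the standby copy."""
    while True:
        item = ship.get()
        if item is None:  # shutdown signal
            break
        key, value = item
        replica[key] = value
        ship.task_done()

threading.Thread(target=replica_worker, daemon=True).start()

def write(key, value):
    """Write to the master and replicate asynchronously."""
    master[key] = value
    ship.put((key, value))

write("session:42", "active")
ship.join()                   # wait until the replica has caught up
master.clear()                # simulate losing the master copy
print(replica["session:42"])  # the standby still serves the data: 'active'
```

Production systems typically ship a transaction log rather than individual writes, and place replicas on separate hardware, but the failover idea is the same.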

Who benefits from in-memory databases?

In-memory databases are commonly used in applications that demand very fast data access, storage and manipulation, and in systems that must manage vast quantities of data.

Notable use cases include:

  • Real-time banking
  • Insurance advisory systems
  • Real-time retail systems
  • Real-time ad platforms
  • Real-time analysis
  • Online interactive gaming
  • Hyper-local advertising
  • Geospatial/GIS processing
  • Real-time medical analytics
  • Natural language processing & cognitive computing
  • Real-time machine learning
  • Complex event processing of streaming sensor data

Looking at these use cases, it is clear that adoption is not defined by industry but by an underlying technical need: getting the best performance and scalability for a given task.

Software is taking the same path as hardware and gaining performance by reducing complexity. We are also seeing the end of costly complexity in businesses: hardware, licences, maintenance and personnel.


Sourced by Kristoffer Lundegren, CEO of Starcounter
