Living in memory

The database is the workhorse of corporate IT systems, and for over 35 years the hard disk has been its stable. But some technologists believe that as a platform for enterprise databases, disk-based storage has had its day.

For one thing, they argue, modern business applications demand response times measured in milliseconds, but the process of reading and writing data to and from a disk puts a brake on database performance.

What is more, to ensure adequate throughput and to relieve stress on individual disk heads, database administrators must scatter data across large numbers of individual disks. According to IDC analyst Carl Olofson, disks used in this way typically have utilisation rates of just 40% to 50%. Running more disks at lower utilisation increases the energy needs of a storage array, compromising the overall energy efficiency of the data centre.

Add to this the physical bulk of disk media, and it is easy to see why businesses are looking for an alternative database platform.

As it happens, an alternative has long been available.

In-memory databases, or IMDBs, reside entirely in the main memory of a server. This eliminates the mechanical process of reading data from and writing data to disk, thereby accelerating database operations.

IMDBs are not uncommon. They are often used for database caching (storing frequently requested data in memory for faster access), in specialised high-speed applications for the telecommunications, financial and defence markets and, increasingly, in certain business intelligence (BI) systems.
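
To make the caching pattern concrete, here is a minimal sketch in Python, not drawn from any particular IMDB product, in which a disk-based SQLite table stands in for the source database and frequently requested rows are kept in main memory:

    import functools
    import sqlite3

    # Disk-based store standing in for a production database;
    # the table and data are invented for illustration.
    conn = sqlite3.connect("orders.db")
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(id INTEGER, customer_id INTEGER, total REAL)")
    conn.execute("INSERT INTO orders VALUES (1, 42, 99.50)")

    @functools.lru_cache(maxsize=10_000)
    def customer_orders(customer_id):
        # Only the first call per customer reads from disk; repeat
        # requests are answered from the in-memory cache.
        rows = conn.execute(
            "SELECT id, total FROM orders WHERE customer_id = ?",
            (customer_id,)).fetchall()
        return tuple(rows)

    print(customer_orders(42))  # hits the disk-based database
    print(customer_orders(42))  # served from the in-memory cache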

But three recent developments in server technology have boosted the viability of IMDBs in mainstream IT. These are 64-bit computing, which makes more of a server’s main memory addressable; multi-core servers; and, above all, the falling cost of RAM [random access memory] chips. Today, 1GB of RAM can cost less than $35, compared to $150 just a few years ago. “All this means that the economics of computing have swung in favour of in-memory databases for many workloads,” says Olofson.  
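
Olofson's point is easy to test with back-of-envelope arithmetic. The 500GB working set in the following sketch is an assumed figure, chosen purely for illustration:

    # RAM prices quoted above: $35 per GB today, $150 a few years ago.
    db_size_gb = 500                     # hypothetical database size
    price_now, price_then = 35, 150      # dollars per GB

    print(db_size_gb * price_now)        # 17500: cost to hold it in RAM today
    print(db_size_gb * price_then)      # 75000: the same exercise a few years ago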

In-memory intelligence

Beyond specialised applications, the most common application for IMDBs has so far been BI and analytics systems.

Different in-memory BI tools apply the technology in different ways. Some extract data from a source system or data warehouse in advance before loading it into memory for manipulation by the user.

Others, such as QlikView from QlikTech, eliminate disk access altogether by building entire data marts in memory. This dramatically accelerates the speed of reports and analyses, which, according to Anthony Deighton, senior vice president of products at QlikTech, translates into accelerated decision-making in business.
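
The general shape of the approach can be sketched in a few lines of Python, using the standard library's SQLite in its in-memory mode as a stand-in for a commercial engine (QlikTech's actual implementation is proprietary, and the sales figures below are invented):

    import sqlite3

    def extract_from_source():
        # In practice this would query a data warehouse or ERP system.
        return [("UK", 120.0), ("Norway", 85.5), ("Pakistan", 60.2),
                ("UK", 75.0), ("Norway", 40.0)]

    mart = sqlite3.connect(":memory:")   # the data mart lives in RAM
    mart.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    mart.executemany("INSERT INTO sales VALUES (?, ?)",
                     extract_from_source())

    # Ad hoc queries now run against memory, never the source's disks.
    for region, total in mart.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(region, total)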


“For business users, it’s a real shift from the days when in order to ask a new question about the business, they were forced to take a trip to the IT department and tell them what they needed – and then wait days or weeks for the data to be produced,” he says. “We see in-memory as just an enabler, but its value for businesses that want to make better decisions faster is pretty powerful.”

Another in-memory analytics product is TM1, originally developed in 1984 by Applix but now owned by IBM, following Applix's acquisition by Cognos in 2007 and IBM's purchase of Cognos just a few months later. Oil and gas company Premier Oil uses TM1 for financial analysis, loading financial data from servers in the UK, Norway, Pakistan and South-East Asia into a central system in London. From there, finance staff in any part of the world can run ad hoc queries, such as comparing sales in one region with another, at high speed.

“The speed of analysis and transparency of information has been invaluable for a company that had to budget carefully during the global economic crisis and constantly take into account fluctuating oil prices,” says Martin Richmond-Coggan, EMEA lead for financial performance management and analytics at IBM Cognos.

Transactional applications

These examples show that for online analytical processing (OLAP), the class of database system that supports BI tools, IMDBs are nothing new, although shifting economics are making them more accessible. In the context of online transaction processing (OLTP), used to support transactional applications such as ERP or CRM, they are still an emerging technology.

One proponent of this use of the technology is enterprise applications vendor SAP. Having already incorporated in-memory analytics into its Business Warehouse Accelerator BI product, it is now talking up the potential of using in-memory databases to underpin its core business applications.  

SAP’s in-memory ambitions are fuelled in part by research undertaken at company founder Hasso Plattner’s eponymous research institute at the University of Potsdam and in part by its May 2010 acquisition of database company Sybase, itself a major proponent of in-memory database technology.
   
According to Henrik Rasmussen, head of industry business solutions at SAP EMEA, Business ByDesign, the company's hosted application suite for medium-sized companies, due for release in selected markets later this year, will incorporate in-memory technology for both transactions and analytics. “I foresee a full SAP Business Suite for on-premise implementations using in-memory databases being available by the end of 2011,” he adds.
 
But using an in-memory database as a transactional data store raises a number of significant challenges. Part of the problem is that, unlike disk-based databases, IMDBs are volatile: they lose their data if the server loses power. In recent years, many IMDB vendors have sought to counteract this through a variety of approaches that typically involve writing at least some data, at intervals, to disk or to another form of non-volatile memory.
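
One such countermeasure, the periodic checkpoint, can be sketched as follows. Production systems pair snapshots with transaction logs so that the most recent writes survive too; this toy version omits the log:

    import json
    import threading
    import time

    store = {}                 # the in-memory 'database'
    lock = threading.Lock()

    def snapshot(path="snapshot.json", interval=5.0):
        # At intervals, persist the whole store to disk so that a
        # power loss costs at most the last few seconds of writes.
        while True:
            time.sleep(interval)
            with lock:
                state = json.dumps(store)
            with open(path, "w") as f:
                f.write(state)

    threading.Thread(target=snapshot, daemon=True).start()

    with lock:
        store["balance:42"] = 99.50   # updates happen at memory speed

    time.sleep(6)              # let one checkpoint complete before exit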

More importantly, end-user companies that have already made massive investments in disk-based relational database management systems (RDBMSs) are unlikely to be persuaded to migrate to an IMDB in the short term, regardless of the potential performance improvements.

Oracle CEO Larry Ellison, whose company's success is built on the database market, is not convinced that IMDBs are ready to support core business applications. “Get me the name of their pharmacist,” he responded to an audience question about SAP’s in-memory plans. “There’s no in-memory database technology anywhere near ready to take the place of the relational database. It’s a complete fantasy. It’s just whacko.”

However, these comments might have something to do with the fact that Oracle’s relational database underpins more SAP implementations than any other.


A more balanced view comes from Axel Goris, a BI specialist at management consultancy PA Consulting. “For companies, it’s really a trade-off between the investment they’d be required to make and the business benefit they’d gain from it,” he says.

His colleague, Paul Craig, a database architecture specialist, takes up the theme: “Most business applications do not require the kind of super-performance promised by IMDBs. For those applications, it would be like buying a car that can travel at 150 miles per hour, when you only ever drive at 50 miles per hour.”  

However, there are some areas where PA Consulting is seeing some traction for IMDBs, Craig adds, such as e-commerce applications that have to contend with huge volumes of ‘look-ups’. 

“Take, for example, a rail enquiries site, which is expected to deliver data about train times, stations, routes and so forth in split-second response times,” he says. “Increased traffic to the website can be a nightmare for the company that runs it and any delay can easily result in lost revenue, so a massive increase in performance, provided by an in-memory database, could really help here.”
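
The pattern Craig describes might be sketched like this, assuming a timetable small enough to precompute into main memory at startup (the routes and times are invented):

    # Each web request becomes a single in-memory dictionary probe,
    # with no disk I/O on the response path.
    timetable = {
        ("London", "Leeds"): ["07:03", "07:33", "08:03"],
        ("London", "York"):  ["07:30", "08:30"],
    }

    def departures(origin, destination):
        return timetable.get((origin, destination), [])

    print(departures("London", "Leeds"))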

The same rules apply, he says, to social media applications such as Twitter or Facebook, which dynamically compile content based on user profiles. According to a recent paper from the Department of Computer Science at California’s Stanford University, web companies like these are leading the in-memory database charge.

“In recent years, there has been a surge in the use of DRAM [dynamic random access memory], driven by the performance requirements of large-scale web applications,” the paper reads. “For example, both Google and Yahoo! store their search indices entirely in DRAM.”  

Database pioneer Michael Stonebraker also has his eye firmly on fast-growing Web 2.0 companies, as well as massively multiplayer online (MMO) gaming environments, as targets for his recently commercialised in-memory OLTP database, VoltDB (see box-out, Dancing with Elephants).

Stonebraker also sees great potential for the database in underpinning high-volume systems in the financial services industry: VoltDB’s leading investor and inaugural customer is Chicago-based Global Electronic Trading Company. “Anyone that really cares about performance in high-volume transaction environments is starting to weigh up IMDBs,” he contends.  

But are disk-based databases really destined for extinction? One man helping them on their way is Winfried Wilcke, a database research scientist based at IBM’s Almaden Research Centre in Silicon Valley.

Wilcke and his team have recently been focusing their efforts on what they call ‘storage-class memory’ (SCM), in which spinning disks are entirely replaced with solid-state, non-volatile RAM that offers very low latencies (“tens to hundreds of nanoseconds”), low cost per bit and high physical durability.

“Using SCM as a disk drive replacement, storage system products will have input/output (I/O) performance that is orders of magnitude better than that of comparable disk-based systems and will require much less space and power in the data centre,” he predicts.   

In time, he adds, SCM will provide an effective storage system that offers between 50 and 1,000 times the I/O performance of disk drive-based systems. Availability of SCM could be as close as 2012, he says, but the modification of software in order to make explicit use of the technology could be further away, perhaps around 2015.

However, Wilcke does not believe that disks will completely disappear any time soon. “I used to think that SCM would replace disks entirely beyond 2020, but now I’m not so sure,” he says. “In 2030, we’ll still see some disks, but they’ll be used almost entirely for archival purposes.”

But, he adds: “In the face of growing data volumes and the need for fast performance, memory has to be the way forward.”
