How in-memory is revolutionising the economics of computing

When database technology was conceived it was used for well-defined tasks like payroll and customer orders. It consisted of three components: disk storage for holding the data, a database engine for processing queries, and a process for getting data from disk into memory so that the engine could process it.

Every query would require the database engine to spin up the disk, read data, process it and send it back to the client. Processors were slow, disk access was even slower, but it worked well enough for well-defined processing on what today would be considered small data sets.

As time moved on, data sets became bigger, the processes became more complex, and disk access became the bottleneck. Technologies such as indexing and caching, which are still considered mainstream 30 years on, were invented to mitigate this bottleneck.

Notice that I say they “mitigated” the bottleneck rather than totally removing it. As data volumes grew over time to be measured in terabytes, these workarounds stopped delivering the desired performance.
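To make that concrete, here is a rough Python sketch, an illustration rather than any vendor's actual engine, of a read-through cache that keeps recently fetched rows in memory so repeated lookups skip the slow disk path. The DISK dictionary and the sleep call are stand-ins for real storage and its latency.

import time

DISK = {row_id: f"row-{row_id}" for row_id in range(100_000)}  # stand-in for rows held on disk
cache = {}                                                     # rows currently held in memory

def read_row(row_id):
    if row_id in cache:            # cache hit: answered from memory
        return cache[row_id]
    time.sleep(0.005)              # cache miss: simulate the slow trip to disk
    value = DISK[row_id]
    cache[row_id] = value          # keep the row in memory for next time
    return value

read_row(42)   # first read pays the disk penalty
read_row(42)   # repeat read is served straight from memory

The first call pays the simulated disk cost; the second does not. But every row that is not already cached still pays it, which is why caching eases the bottleneck without removing it.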

Rather than attempt to fix the disk-based approach, a more revolutionary question was posed by a new generation of software engineers: “Instead of fetching data from disk to memory and then putting it back onto disk – why not forget the disk and keep the data in memory?”

That was the birth of intelligent in-memory computing – and for at least the last 14 years it has delivered performance that the vendors of disk-based solutions can only dream of.


Big desks, small desks, data everywhere

A database can be thought of as a big office that stretches off into the distance. Shelves line the walls, right up to the clouds and beyond. These shelves are the storage – the disk drives – and they contain all the documents in your organisation.

Someone with a small desk that is only big enough for one or two documents at a time is forced to structure tasks to make use of the cramped space. They get a couple of documents down from the shelves, work with them at the desk and then have to put the documents away again before repeating the process.

Tackling a complex task this way means they are mostly up a ladder fetching documents from the shelves, rather than down at the desk getting the job done.

This is the way disk-based database systems work. Even when operating at full capacity, the processors on a server running a disk-based database system can sit idle for an average of 85% of the time. Why? Because they are often waiting for data to arrive from disk.

The small-desk owner could, of course, buy a bigger desk (add more memory), but if they carry on working in the same way, and don’t use the extra space efficiently, it won’t make much of a difference. In addition, data volumes are always growing and tasks are getting more complex, so how can you be sure that your desk will be big enough tomorrow?

This is where an intelligent in-memory database comes into its own. It does sound a bit Harry Potter, but with an intelligent in-memory database system, books magically appear on the desk when they are needed. If the desk becomes full, the books that haven’t been used for a while put themselves back on the shelves to make room.

Of course, in reality such systems don’t rely on magic – this is achieved through advanced software engineering that only looks like magic to the end-user.
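One common way to get that behaviour, though by no means the only technique in-memory vendors use, is a least-recently-used (LRU) policy. The short Python sketch below, a hypothetical DeskCache class rather than any product's code, keeps a fixed number of items on the desk and sends the least recently touched one back to the shelf when space runs out.

from collections import OrderedDict

class DeskCache:
    """Keep the most recently used items in memory, evicting the coldest."""

    def __init__(self, capacity):
        self.capacity = capacity        # how many "books" fit on the desk
        self.desk = OrderedDict()       # the working set, ordered by recency of use

    def get(self, key, fetch_from_shelf):
        if key in self.desk:
            self.desk.move_to_end(key)  # touched again: now the most recently used
            return self.desk[key]
        value = fetch_from_shelf(key)   # slow path: climb the ladder to the shelves (disk)
        self.desk[key] = value
        if len(self.desk) > self.capacity:
            self.desk.popitem(last=False)  # the longest-untouched book goes back on the shelf
        return value

desk = DeskCache(capacity=3)
desk.get("ledger-2016", fetch_from_shelf=lambda key: f"contents of {key}")

From the user's point of view the document is simply there when it is needed, which is the "magic" described above.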

Businesses embracing in-memory database technology are seeing unprecedented growth and insight. Online retailers increase sales by suggesting tempting additions, banks can check for fraudulent activity in real time, online and mobile gamers get incentives to keep them coming back, customers of all kinds of companies feel more valued through tailored promotions, and decision makers know who their customers are, what they are doing and how to keep them coming back for more.


An organisation doesn’t have to be Google or a major government to benefit from big data analytics any more. The price of memory has dropped dramatically, from $25 per kilobyte when the first DBMS was created to $0.000007 per kilobyte today.
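Scaling those per-kilobyte figures up to a single terabyte (roughly a billion kilobytes in decimal terms) shows how far the economics have shifted. The calculation below is only back-of-the-envelope arithmetic based on the prices quoted above.

KB_PER_TB = 1_000_000_000               # roughly one billion kilobytes per terabyte

cost_then = 25.0 * KB_PER_TB            # at $25 per KB: about $25 billion for 1 TB
cost_now = 0.000007 * KB_PER_TB         # at $0.000007 per KB: about $7,000 for 1 TB

print(f"1 TB of memory then: ${cost_then:,.0f}, today: ${cost_now:,.0f}")

The same terabyte of memory that would once have cost billions of dollars now costs thousands.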

As a result, users can run these databases on unexceptional hardware or, indeed, in the cloud. In addition, because such systems need little manual tweaking, businesses don’t need legions of software engineers and database administrators on the payroll.

This means that all businesses can, and should, be big-data businesses. There is no reason why big data cannot be woven into the DNA of every company, especially when the price now means analytics should be considered as mainstream a part of running a business as spreadsheets and word processing.

Sourced from Aaron Auld, CEO, EXASOL
