Flash flood: the second wave of flash storage has arrived

The technology behind flash storage has been around in one form or another for at least 30 years, and is commonly used in everything from USB flash drives to MP3 players. But only over the past year or so has it entered the enterprise consciousness as a robust and speedy alternative to conventional magnetic hard disk storage.

In 2013, we saw flash take the enterprise by storm as it began to catch up with the exponentially increasing data demands of businesses. It has been estimated that 90% of the world's data was created in just the past two years, as businesses have demanded more and more powerful intelligence to fuel their growth. Added to this, the consumer-driven intolerance for latency and slow processing has permeated almost every aspect of business IT, and storage has been no exception.

Analyst firm IDC expects the second wave of the flash storage market to continue to make a huge impact, with more features, functionalities and capabilities helping to nurture a $4.5 billion worldwide market by 2015.

Flash storage is being sold as the ultimate weapon for blasting through today's limitations of performance, budget and data centre size, which are preventing businesses from getting the most out of the big data phenomenon. And not without good reason – flash has given many organisations the edge by delivering powerful, efficient acceleration within existing data centres without having to scale out, whether through shared or in-server architectures.

'I expect flash to increase in dominance over the course of this year, by continuing to disrupt the tier one enterprise class disk category,' predicts Laurence James, product alliances and solutions marketing manager for storage specialist NetApp.


Users wait with bated breath

While a lot of the early concerns about the physical products relating to availability and durability have been addressed, people are still holding their breath to see whether the needed density, cost and performance improvements will appear as quickly as anticipated. And as the technology matures, so do user expectations.

'Users want flash to address their specific performance issues, which means that a successful product will need to be able to automatically deliver significant benefits to different workflow types with different I/O patterns,' explains Laura Shepherd, director of HPC at storage solution provider DDN.

Most remaining limitations concern implementation rather than the technology itself. For users where a single or just a few applications dominate, giving the option to customise how flash is applied on an application-specific level 'would be a great option for those who need to get every last bit of performance out of their kit', she adds.

Another common criticism of flash is that, depending on an organisation's metrics, it can still be expensive compared with alternative storage solutions: around $8 per gigabyte for eMLC flash, versus around $0.65 per gigabyte of hard disk space.

Artur Borycki, international solutions marketing director at analytics platform provider Teradata, believes that the cost hurdle is likely to be overcome in 2014 as the market for high-density enterprise solid state drives (SSDs) moves from single-level cell (SLC) to multi-level cell (MLC) flash memory, driven by the present limitations of an SLC environment in terms of flexibility, cost and product life.

'Moving from an SLC to an MLC model will help reduce the overall cost of flash storage,' Borycki assures us. 'The problem of regularly rewriting cells within the SSD has always had a cost implication, as this leads to burn-out – typically in around five years – as the cells wear out through a high level of use. Improvement in MLC technology, previously used for consumer electronics, will see it adopted more for enterprise usage, as technical developments improve its burn rate.'

The beginnings of the move to MLC-based storage, which Borycki predicts we should start to see this year, will eventually reduce the frequency with which a greater number of smaller cells must be rewritten, and thus extend the life of the SSD.

At the same time, SSDs will get ‘smarter’ as they use increasingly complex algorithms to make the most efficient use of different cells. A best-practice proactive-monitoring approach analyses and optimises the use of customer data in each cell, resulting in a lower level of rewriting and extending the life of the cell.
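The cell-management idea behind those 'smarter' drives can be illustrated with a toy model. This is a sketch, not any vendor's actual controller logic: a simple wear-leveller that always writes to the least-worn block, so erase cycles are spread evenly and no single cell burns out early. All class names and numbers here are illustrative assumptions.

```python
# Toy wear-aware block allocation (a simplified model, not a real SSD
# controller): each write goes to the least-worn free block, spreading
# erase cycles evenly so no single cell wears out prematurely.

class WearLeveler:
    def __init__(self, num_blocks, endurance):
        self.erase_counts = [0] * num_blocks  # erases performed per block
        self.endurance = endurance            # rated erase cycles per block

    def allocate(self):
        """Pick the block with the fewest erases for the next write."""
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

    def wear_ratio(self):
        """Fraction of rated endurance consumed by the most-worn block."""
        return max(self.erase_counts) / self.endurance

ssd = WearLeveler(num_blocks=8, endurance=3000)
for _ in range(800):
    ssd.allocate()
# Even spreading: 800 erases over 8 blocks means 100 per block
print(ssd.erase_counts)  # [100, 100, 100, 100, 100, 100, 100, 100]
```

Without the levelling step, repeated writes to the same logical address would hammer the same physical cells, which is exactly the burn-out problem Borycki describes.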

However, one of the common myths surrounding flash storage, says Borycki, is that advances and falling costs automatically mean that storing all data in memory is the best approach. The changes anticipated in the move from SLC to MLC will perhaps not transform the dynamics of the mix between hot and cold storage, but they will make for a more stable, longer-life solution, reducing the total cost of ownership, he predicts.

'SSD is still, and will continue to be, some cost multiple of traditional spinning disk – and so an all-memory approach is not cost effective for anything but small data volumes,' he says. 'In most cases, only 20% of the data used in analytics during any specified period is frequently accessed. To ensure fast response, this ‘hot’ data needs to be held in faster memory, the benefit in response times outweighing the higher-cost media. The remaining 80% of ‘cold’ data can be held on lower-cost disk-based storage.'
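Putting the article's own figures together makes the 80/20 economics concrete. The sketch below uses the per-gigabyte prices quoted earlier ($8 for eMLC flash, $0.65 for hard disk) and Borycki's 20% hot-data split; the 10TB capacity is an assumed example, and real pricing will vary.

```python
# Rough cost arithmetic using the per-GB figures quoted in the article
# ($8/GB for eMLC flash, $0.65/GB for hard disk) and the 80/20 hot/cold
# split Borycki cites. Illustrative only; real-world pricing varies.

FLASH_PER_GB = 8.00   # eMLC flash, $/GB (figure quoted above)
HDD_PER_GB = 0.65     # spinning disk, $/GB (figure quoted above)

def media_cost(total_gb, hot_fraction):
    """Cost of holding hot data on flash and cold data on disk."""
    hot_gb = total_gb * hot_fraction
    cold_gb = total_gb - hot_gb
    return hot_gb * FLASH_PER_GB + cold_gb * HDD_PER_GB

all_flash = media_cost(10_000, hot_fraction=1.0)  # everything on flash
hybrid = media_cost(10_000, hot_fraction=0.2)     # only the 20% hot data

print(all_flash)  # 80000.0
print(hybrid)     # 2000*8 + 8000*0.65, about 21200
```

On these numbers the hybrid layout costs roughly a quarter of the all-flash one, which is the arithmetic behind Borycki's claim that all-memory only pays off for small data volumes.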

While flash memory is a great tool to help businesses solve some of their key storage and data processing challenges, as Gavin McLaughlin, solutions development director at storage specialist X-io, is keen to emphasise, 'It's not the panacea and saviour of the universe – pardon the Flash Gordon pun – that many of the all flash array (AFA) vendors would have us believe.'

Right now we are following the classic 'hype cycle' that analyst firm Gartner has theorised all enterprise technologies experience, says McLaughlin, 'and it’s clear that 2014 will see the “trough of disillusionment” for all flash arrays’.

'The flash storage market is definitely there, but it’s not as big as many have made out,' he says. 'It's a great tool to help with certain workloads, but for some – such as those with high bandwidth or sequential data, for instance – it either provides no benefit or, worse still, causes a performance drop.'


Best of both worlds

A hybrid storage approach adopts a mix of media, balancing the use of different technologies based on data usage. Data that is accessed more frequently will automatically be stored on an SSD drive, while less frequently accessed transactional data will go onto hard disk drive (HDD) storage.

'Balancing cost versus usage in this way – optimising the velocity as well as the volume of data – is especially beneficial in a big data environment,' says Borycki, 'but it is only possible with advanced solutions that can automatically detect data temperature and move the data accordingly.

'If it’s reliant on a manual process, then data soon ends up in the wrong place and the analytics slow down. An all-memory solution is an expensive approach taken by vendors whose software is less advanced in this area.'
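The automatic 'data temperature' detection Borycki describes can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: block access counts are tracked over a window, the most-accessed fraction is promoted to flash, and the rest stays on disk. The 20% threshold is an assumption taken from the 80/20 split quoted above.

```python
# Toy "data temperature" tiering: frequently accessed blocks are placed
# on SSD, the rest on HDD. A real system would track access patterns
# continuously and migrate data in the background.

from collections import Counter

def tier_by_temperature(accesses, flash_fraction=0.2):
    """Split block IDs into a hot set (for SSD) and a cold set (for HDD),
    keeping only the most-accessed flash_fraction of blocks on flash."""
    counts = Counter(accesses)
    ranked = sorted(counts, key=counts.get, reverse=True)
    hot_n = max(1, int(len(ranked) * flash_fraction))
    hot = set(ranked[:hot_n])
    cold = set(ranked[hot_n:])
    return hot, cold

# Ten blocks in total; blocks 3 and 7 dominate the access stream
stream = [3, 7, 3, 1, 3, 7, 2, 3, 7, 3, 7, 3] + list(range(10))
hot, cold = tier_by_temperature(stream)
print(sorted(hot))  # [3, 7] - the two hottest blocks go to flash
```

The point of McLaughlin's criticism that follows is that many products stop at something even cruder than this, treating flash as a dumb read cache rather than migrating data on live access patterns.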

After the AFA hype of 2013, it is clear that in an evolving market a hybrid or mixed-storage technology continues to be the most effective and cost-efficient approach to flash storage, at least for the time being.

So far, however, successful hybrid adoption has hinged on the ability to move data between media in real time in a sophisticated manner.

'Far too many implementations have been inefficient and either based movement on history or by just utilising flash as an unintelligent cache,' McLaughlin explains. 'It has to be expected that many more of the big players will catch up on this, but 2014 may be a little too soon for them to play catch-up, especially as they got caught up in the AFA hype cycle and were distracted.'

Many believe that 2014 is going to be a crucible year for flash, with customers becoming wise to what flash can do for them – enough to start to put pressure on vendors and architects. It seems that simple, bolted-on flash-read caches are no longer going to stand up to this scrutiny.

As Shepherd explains, flash-only storage appliances are failing to make the grade as 'market expectations have passed the point where these devices can still be missing availability and connectivity features'.

'The flash winners in 2014 are going to be the products that can deliver the core functionality that customers expect from traditional storage,' she stresses.

Although an increasing number of AFAs are expected to enter the market, with many companies choosing to adopt all-flash because it is simpler from both a technical and operational perspective, it will take much more than economical all-flash to drive mainstream market adoption.


No flash in the pan

As we move through 2014, Chris Evans, director at IT consultancy Langton Blue, expects that we will see more acceptance of flash in use as primary storage – something that hybrid providers may be best placed to benefit from, as their products can be priced more competitively.

'However for the all-flash market, feature set will become the most important factor, as raw performance alone won’t be enough of a differentiator,' he says. 'We will see more adoption of all-flash solutions as a result.

'For in-server products like PCIe and NVDIMM, these products will be targeted at specific problems or solutions, such as accelerating virtual server environments and in-memory databases.'

And just as the software-defined trend has taken hold of much of enterprise IT, from Chris's perspective the most interesting developments are likely to come in software, as we see flash being exploited for new uses in the enterprise, including in-memory databases and hybrid compute solutions that are capable of delivering high-density virtual server farms.

Companies such as Pernix Data, Atlantis Computing and Infinio Systems may be well placed to deliver software-only solutions that exploit the benefits of flash, says Evans, and these are the kinds of companies that he expects to see more of in 2014 and beyond.

While the flash storage market is still young, it has become a fast-moving and exciting direction for storage.

'We will see new entrants to the market as well as new products and service offerings, and competing technologies will attempt to gain their share of the market,' says Evans.

'Customers more than ever need to consider certain questions before making an investment decision, particularly when it comes to newer market players. Will they still exist in the near future? Can they really deliver against their claims? What happens to customer support if they are acquired?

'It is important to remember that flash is not a one-size-fits-all technology.’

And the 'winners' in the flash storage race in 2014?

'They will be the vendors that help the customer make the right decisions for performance and reliability at the right cost for the workload that the customer wishes to accelerate.' 


Ben Rossi

