High on the agenda: the future of flash tiering in the enterprise

Flash technology has radically transformed the server space over the last couple of years, penetrating the data centre across all manner of applications. The key promises of flash storage – high performance, flexibility and the elimination of the most pressing bottlenecks in today’s environments – are helped along by de-duplication and compression, which can drastically reduce the data footprint and drive down storage costs.

As Steve Wharton, principal system engineer at SanDisk, explains, flash appeals to different users for various reasons.

‘For some, the need for performance in a highly transactional environment has led them away from legacy storage solutions to flash,’ says Wharton. ‘For others, it has been a means of addressing the “server sprawl” created by the inefficient utilisation of computing power in their data centre. There is also the issue of software and hardware costs. For many customers, right-sizing their software costs can actually offset the cost of the flash deployed.’

> See also: 15 for ’15: the top 15 storage predictions for 2015

However, as Sean Horne, director of enterprise and mid-range storage at EMC, explains, for companies looking to maximise the general benefits of flash there is no ‘one-size-fits-all’ answer.

‘What they’re looking to achieve, how big they are and what their workloads are made of will all affect deployment,’ he says.

Using the principles of information lifecycle management (ILM) to tier data across different workload scenarios has become a major element of making the most of flash. Tiered storage offers a cost-effective way to ensure that IT environments are optimised for the performance requirements of the business, with automatic promotion and demotion of ‘hot’ data to and from the most performant tier. Letting data live in the tier best suited to it can save money while still delivering the performance and availability that applications need.
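As a rough illustration of that promotion/demotion mechanism – a minimal sketch, not any particular vendor’s implementation – an automated tiering engine can be thought of as tracking per-block access counts over a window and keeping the hottest blocks on the flash tier:

```python
# Minimal sketch (not any vendor's implementation) of the promotion/demotion
# idea behind automated tiering: track per-block access counts over a window
# and keep the hottest blocks on the flash tier, demoting the coldest to disk.
from collections import Counter

FLASH_CAPACITY_BLOCKS = 4          # hypothetical size of the flash tier
access_counts = Counter()          # accesses per block in the current window
flash_tier = set()                 # blocks currently resident on flash

def record_access(block_id: str) -> None:
    access_counts[block_id] += 1

def rebalance() -> None:
    """Promote the hottest blocks to flash; everything else lives on disk."""
    hottest = {b for b, _ in access_counts.most_common(FLASH_CAPACITY_BLOCKS)}
    demoted = flash_tier - hottest      # cooled off: move back to disk
    promoted = hottest - flash_tier     # heated up: copy onto flash
    flash_tier.difference_update(demoted)
    flash_tier.update(promoted)

# Example: a skewed workload concentrates accesses on a few hot blocks.
for block in ["a", "a", "a", "b", "b", "c", "d", "e", "a", "b"]:
    record_access(block)
rebalance()
print(sorted(flash_tier))  # the most-accessed blocks end up on flash
```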

The challenge ahead

Of course, allocating data to different tiers isn’t a new concept from a service level agreement (SLA) perspective, but as Jeff Sisilli, senior director of product marketing at DDN, explains, ‘today’s mixed data patterns present challenges as to which types of workloads are most appropriate for the flash tier – and the physical location of the flash itself.’

There are several types of flash implementation on the market today, says Sisilli. ‘The biggest challenge for IT managers is selecting both a vendor and an approach that best aligns with their unique data patterns while accelerating their applications and delivering efficiencies.’

In order to make the right decision on a flash implementation, developing the right strategy is essential. Ultimately, says Laurence James, NEMEA product manager at NetApp, it’s important to do your homework – assuming that all workloads require the same treatment is not a winning strategy.

‘Businesses need to understand the cost per unit of capacity and per unit of performance in relation to the goals of the business,’ says James. ‘Good vendors have an array of tools that will help customers understand and visualise the outcome of introducing flash either in tiered or all-flash configurations.’

In all cases, a clear understanding of the workload and its data access and reuse profile will determine the business advantage that tiering is likely to achieve. Not all workloads can be described as ‘cache friendly’. Some are decidedly cache unfriendly and will fill the expensive flash tier with data that sees no further access.

‘Analysis of cache hits and misses can lead you to the right choice at the right price point,’ continues James. ‘Indeed, such cache-unfriendly, random workloads may benefit from residing on an all-flash array where a consistent, repeatable response is required. Again, the key activity is mapping workload performance to business value goals.’
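One way to run the hit-and-miss analysis James describes is to replay a workload trace against a cache sized like the proposed flash tier. The sketch below assumes a simple LRU policy and invented traces; real arrays use more sophisticated heuristics:

```python
# Rough sketch of the 'hits and misses' analysis: replay an access trace
# against a simple LRU cache sized like the flash tier and see what hit
# ratio the workload would actually achieve. All traces here are made up.
from collections import OrderedDict

def lru_hit_ratio(trace: list, cache_size: int) -> float:
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # refresh recency on a hit
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict least recently used
    return hits / len(trace)

# A cache-friendly trace re-reads a small working set...
friendly = ["a", "b", "a", "c", "a", "b", "a", "c"] * 100
# ...while a cache-unfriendly one streams through data it never revisits.
unfriendly = [f"block{i}" for i in range(800)]

print(f"friendly:   {lru_hit_ratio(friendly, cache_size=4):.1%}")
print(f"unfriendly: {lru_hit_ratio(unfriendly, cache_size=4):.1%}")
```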

Cost-sensitive workloads – those that require high performance and low latency, but only at certain times, such as end-of-month, quarterly or annual accounting reports – are excellent candidates for tiered storage incorporating flash.

‘Flash is the perfect storage solution for latency-sensitive workloads and those that are “bursty” by nature,’ says James. ‘For example, from a retail perspective this could be online transaction processing. Anything that requires near real-time analysis of data is a perfect candidate for flash storage as well.’

> See also: Flash flood: the second wave of flash storage has arrived

More generally, he argues, ‘incorporating flash into the storage tier can deliver huge improvements to response times, power usage, cooling and rack space. Adopting a hybrid storage tier that adds just 1% to 2% of flash-based capacity can improve average response times by as much as 90%.’
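The arithmetic behind that 90% figure is plausible if the workload is skewed enough that a small flash tier absorbs most of the I/O. A back-of-envelope check, with assumed (purely illustrative) service times:

```python
# Back-of-envelope check on the hybrid-tier claim: if a small flash tier
# absorbs most of the I/O (plausible for skewed, cache-friendly workloads),
# average response time falls sharply. Latencies here are illustrative.
FLASH_LATENCY_MS = 0.2   # assumed flash service time
DISK_LATENCY_MS = 8.0    # assumed spinning-disk service time

def avg_latency(hit_ratio: float) -> float:
    return hit_ratio * FLASH_LATENCY_MS + (1 - hit_ratio) * DISK_LATENCY_MS

baseline = avg_latency(0.0)   # all I/O served from disk
hybrid = avg_latency(0.95)    # flash tier serves 95% of I/O
print(f"{baseline:.2f} ms -> {hybrid:.2f} ms "
      f"({1 - hybrid / baseline:.0%} improvement)")
# 8.00 ms -> 0.59 ms (93% improvement): in the ballpark of James's figure
```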

No panacea

But tiering isn’t the universal answer to every storage problem. Most tiering architectures use flash as an intermediate staging buffer for writes, but when it’s time to destage the writes to disk, tiered storage systems hit bottlenecks.

‘Tiering requires active movement of data between the hot and cold tiers in real time for it to be effective,’ notes Radhika Krishnan, VP of product and alliances at hybrid storage firm Nimble Storage. ‘Since there is significant overhead associated with the movement of data, most storage arrays don’t trigger promotion of hot data until a certain threshold is reached, which could be hours or days after application demand spikes.’

This ends up significantly compromising the responsiveness of the storage array, particularly with applications such as VDI, where workload demands fluctuate frequently.

The granularity of the data movement between the two tiers also typically tends to be large.

‘This means that when a small 4K block of hot data needs to get promoted, there is a chunk of data that gets promoted alongside, consuming flash resources,’ says Krishnan. ‘This compromises the efficacy of the flash tier.’
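The cost of that coarse granularity is easy to quantify. A quick sketch, with hypothetical chunk sizes, showing how much cold data rides along when one hot 4K block is promoted:

```python
# Illustration of the granularity problem Krishnan describes: if the array
# moves data in large chunks, promoting one hot 4K block drags its cold
# neighbours onto flash with it. Chunk sizes here are hypothetical.
HOT_BLOCK_KB = 4

for chunk_kb in (64, 256, 1024):
    wasted = chunk_kb - HOT_BLOCK_KB
    print(f"{chunk_kb:>5} KB chunk: {wasted} KB of cold data "
          f"({wasted / chunk_kb:.1%} of the promoted flash) rides along")
```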

Meanwhile, flash capacity is expected to increase exponentially over the next few years, with three-terabyte flash drives appearing this year and six-terabyte drives predicted to be available by the end of 2015. Even 10-15 terabyte drives are on the horizon.

‘A customer looking to buy flash storage today needs to be mindful of the fact that technology is evolving all the time,’ says EMC’s Horne. ‘There is an increasing need to plan ahead and think about buying a storage system that can cope with larger drives.’

Larger, denser flash drives will bring new challenges, particularly around how those drives are absorbed architecturally into storage systems. One problem is rebuild times: if a drive fails and needs rebuilding, a larger drive means a longer rebuild window.
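The scaling is roughly linear: at a fixed rebuild rate, doubling a drive’s capacity doubles the rebuild window. A quick sketch with an assumed rebuild throughput:

```python
# Why bigger drives mean longer rebuilds: at a fixed rebuild rate, rebuild
# time scales linearly with capacity. The rate below is an assumption.
REBUILD_MB_PER_SEC = 100  # assumed sustained rebuild throughput

for capacity_tb in (3, 6, 15):
    hours = capacity_tb * 1_000_000 / REBUILD_MB_PER_SEC / 3600
    print(f"{capacity_tb:>2} TB drive: ~{hours:.0f} hours to rebuild")
```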

‘Over the next year companies will need to address how they manage larger drives and other technologies will need to evolve to improve rebuild times,’ advises Horne.

As 2015 progresses, experts expect that all-flash arrays will continue to grow in popularity.

The likes of SanDisk’s Wharton argue that tiering in all-flash arrays will certainly increase in 2015, with an evolution towards systems that can take advantage of the multiple types of NAND flash available.

‘The “software-defined” market will also increase its use of flash as operating systems, hypervisors, virtual SANs and even “big data” technologies find ways to extract higher performance and lower cost per GB from flash technology,’ says Wharton.

‘That said, the hybrid storage market is currently significantly larger than the all-flash market, so it will be interesting to see how this develops over the next 12 months,’ he says.

However, with the multiple benefits and huge improvements that adopting a hybrid solution offers to businesses, NetApp’s James expects that hybrid storage arrays will also continue to grow, though at a slower rate.

In the current market it’s all about the economics of flash – the benefits of tiering will only really apply to customers who have decided that, because of their particular workloads, the cost of capacity on mechanical drives is still cheaper than it is on flash.

‘While it brings significant advantages on the performance front, flash alone is not a panacea to all that ails storage in the data centre today,’ as Nimble Storage’s Krishnan points out.

‘Next year we will see widespread adoption of intelligent flash architectures that can adapt on the fly to varying application demands,’ she says, ‘while meeting capacity, cost, scalability, manageability and data protection requirements, thereby simplifying the data centre and allowing the consolidation of multiple workloads on a single storage infrastructure.’

Businesses will always look to storage providers for advice on what can best serve their business and IT needs.

‘As all flash and hybrid arrays become increasingly available and more cost effective as the price of flash decreases, we expect that end users will look to adopt these solutions in order to answer specific business requirements,’ says James. ‘We are living in a world where speed has become integral to remaining responsive and competitive. Technology that provides businesses with these competitive advantages will become increasingly popular.’
