In storage circles, they call it “Tier 0”. Outgunning high-end disk drives in almost every respect except price, the new breed of solid state drives (SSDs) based on NAND flash memory has been an intriguing enterprise storage option since early last year, when initial products first appeared on the market.
But it has only recently become clear – largely as a result of the experiences of early adopters – just how far-reaching SSD will be. Not only will the technology offer an “ultra-performance storage tier that transcends the limitations of magnetic disk drives” (in the words of disk storage market leader EMC) but it will have a huge influence right across the storage hierarchy.
At this point, the take-up of flash drives is still evident among only a small proportion of high-end storage buyers. Demand has come from areas such as electronic trading, quantitative analysis, government analysis of mass information sources, Internet transactions and foreign exchange, where ‘latency sensitive’ applications require the fastest possible storage and retrieval rates.
But that has just been the test bed for a much wider set of applications of flash memory that will emerge as the technology’s benefits beyond raw performance enhancement become more evident.
Certainly, vendors are not underestimating its impact. The shift to flash will “totally change the game in [storage] arrays,” according to Joe Tucci, CEO of storage market leader EMC.
The first facet of that is the removal of a fundamental roadblock in IT.
As the industry group the Solid State Storage Initiative (SSSI) explains, SSDs will finally eliminate the server/storage performance gap that has grown ever since people started using hard disk drives (HDDs). This gap exists because of the impasse between the digital and the mechanical. While server performance has grown exponentially, the primary source for the data servers consume – HDDs – relies on spinning disks, the mechanical movement of read/write heads over the disk surface, and scores of other physical parts.
That mix of analogue and digital has simply not been able to keep up in the performance race, says the SSSI. Indeed, server designers have had to work around the fact that disk-dependent input/output (I/O) operations take a relatively long time to complete; caching and other schemes have been implemented to cope with that slow I/O, but with only limited success – bottlenecks remain a common occurrence.
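The caching schemes referred to here are typically variations on keeping recently read blocks in fast memory in front of the slow disk. A minimal sketch of such a read cache – the block-level interface and LRU eviction policy are illustrative assumptions, not any vendor's design:

```python
from collections import OrderedDict

class LRUReadCache:
    """Toy least-recently-used read cache in front of a slow block device."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store      # dict-like: block number -> data
        self.cache = OrderedDict()        # block number -> data, in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)  # mark as most recently used
            self.hits += 1
            return self.cache[block]
        self.misses += 1                   # slow path: mechanical I/O
        data = self.backing[block]
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
        return data
```

Every miss still pays the full mechanical I/O cost, which is why such schemes only mitigate the bottleneck rather than remove it.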
Proponents of SSDs promise that the move to a wholly digital platform will provide a leap in application response times.
“Performance today is very much spindle-bound,” says Bob Maness, VP of worldwide marketing at Pillar Data Systems, the ambitious start-up, funded by Oracle founder Larry Ellison to the tune of over $350 million, that is making SSDs a central focus of its proposition. “Today companies are being forced to buy capacity they don’t need,” because of shortcomings in the performance of hard disk drives. “The focus should be on dollar per input/output, rather than per gigabyte or terabyte,” he says.
Flash is certainly fast, according to early exponents. Vendors such as EMC and NetApp say that today’s standard high-end fibre channel disk drives (the classic ‘Tier 1’), spinning at 15,000 rpm, have a response time of 6 milliseconds at best. Meanwhile, a flash drive’s end-to-end response is typically 1 millisecond or less for the same application.
Moreover, SSDs deliver up to 30 times more IOPS (input/output operations per second) than traditional high-end fibre channel (FC) drives. Even more extreme: compared with a large 1TB SATA drive, a 400GB flash drive will perform 100 times better, says Pillar’s Maness.
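Taken at face value, those ratios illustrate Maness's point about buying capacity for performance's sake. A back-of-envelope comparison – the ~180 IOPS baseline for a 15,000 rpm FC drive is a commonly cited typical figure and an assumption here, as is the workload size, neither being from the article:

```python
import math

# Per-drive IOPS figures: the FC baseline is an assumed typical value
# for a 15,000 rpm drive; the 30x flash multiple is from the article.
FC_IOPS = 180
FLASH_IOPS = 30 * FC_IOPS        # 5,400 IOPS

# Hypothetical latency-sensitive workload
required_iops = 20_000

# Drives bought purely to satisfy the I/O rate, regardless of capacity
fc_drives = math.ceil(required_iops / FC_IOPS)
flash_drives = math.ceil(required_iops / FLASH_IOPS)

print(fc_drives, flash_drives)  # 112 4
```

On these assumptions, a workload that forces the purchase of over a hundred FC spindles (and all the capacity that comes with them) needs only a handful of flash drives – the "dollar per input/output" framing in a nutshell.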
The capacity of flash has also now reached a stage when it is becoming attractive and affordable. And that is happening more rapidly than predicted, says EMC’s Tucci.
EMC was first to introduce the technology as part of a mainstream storage system, in January 2008, when it offered the option of adding 73GB flash drives alongside FC and low-cost SATA drives in its Symmetrix systems. Symmetrix now ships with 400GB flash drives, a five-fold increase in capacity in just over a year.
In terms of raw data capacity, flash still lags far behind hard drives, but the trajectory looks much more positive.
The growth in FC hard drive capacity is levelling out. In earlier years it at least doubled with each generation: 9, 18, 36, 73, 146 and 300GB. But the most recent step has been a relatively modest jump to 400GB, with performance rising even more slowly. SATA, on the other hand, is more about huge capacity and lower performance.
Manufacturers are already looking to produce 4TB and 6TB SATA drives in the near future, but online mass storage serves a very different purpose to flash.
“SATA is like a swimming pool that you fill up with a straw; flash drives are like a bottle that you fill up with a fire hose,” says Bob Wambach, senior director of high-end storage products at EMC. The challenge: to marry those different tiers for optimisation of business application performance.
Flash has another card to play that is drawing attention. Because there are no moving parts, the energy used per IOPS is 80-90% less than in mechanical drives. And in a data centre that means less heat, less cooling, and a smaller footprint for storage devices.
However, despite banishing a dependency on mechanical parts, questions about flash’s reliability come up in a different context.
Several vendors argue that hammering the same cells of a flash drive with reads and writes can lead to the failure of those cells. The solution is to ensure the load is spread across cells.
But others, such as EMC, suggest that argument is just an attention grabber. “Flash drives are designed to wear cells very evenly. Even under very heavy workloads with thousands of IOPS, a drive will wear out in about 20 years. And, of course, the mean time between failures is actually much higher than on traditional drives,” says Wambach.
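The even wear Wambach describes is the job of the drive's flash translation layer. A toy sketch of the principle – always write to the least-worn free block – is shown below; real controllers are far more sophisticated, so treat the class and its policy purely as an illustration:

```python
class ToyWearLeveller:
    """Toy flash translation layer: each logical write lands on the
    least-erased free physical block, spreading wear evenly."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks  # wear per physical block
        self.mapping = {}                     # logical block -> physical block
        self.free = set(range(num_blocks))

    def write(self, logical):
        old = self.mapping.get(logical)
        if old is not None:
            self.erase_counts[old] += 1       # retiring the old copy costs an erase
            self.free.add(old)
        # Pick the least-worn free block so no single cell is hammered
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.mapping[logical] = target
```

Repeatedly rewriting one logical address then cycles through physical blocks instead of burning out a single cell, which is why heavy workloads translate into decades of projected life rather than rapid failure.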
So, in the opinion of almost all observers, there is only one major aspect holding demand back: flash does not come cheap.
“To totally change the game [in storage] we need the prices to come down,” says Tucci. In early 2008 when they first appeared, enterprise flash drives may have been 30-plus times faster, but they were 40 times more expensive than the fastest FC drive.
Prices have since moved in the right direction, but not fast enough for widespread enterprise adoption.
During the past year flash drives have fallen by 76% in terms of price per MB. However, that still leaves them at around eight times the cost of a similarly sized fibre channel drive.
While it may not drop so fast in coming years, the price decline will continue, says Tucci. “And as it does, when you look at the inherent benefits of flash drives – the speed, the reliability, the fact that they use less energy and the fact that a drive can now hold 400GB – you can see how this will change the landscape.”
But it is not the characteristics of flash used in isolation that have the most potential. Rather, it is the technology’s blending with the whole storage stack that demonstrates its true potency.
Raising all boats
Today the evidence suggests that buyers are putting only around 5% of their capacity on flash. But what has become clear is that flash is not just about creating a higher performance plane but about taking the pressure off disk storage at multiple levels.
“Today people might be buying in small volumes, but a little flash goes a long way,” says Wambach. He explains: “The initial use cases were from customers that had performance problems. For them, flash drives make disk-based performance problems go away.”
But what soon became clear was the tight relationship between the use of flash and cache. Because flash can get to data faster, it makes much less use of cache. “So applications that had a 90% cache hit-rate run twice as fast if you just put a few flash drives in,” says Wambach.
Indeed, the epiphany came when EMC customers found that “flash makes everything run faster”, says Wambach.
“What we have found is that if you move your busiest LUNs (logical unit numbers) or volumes to flash, then the act of just using those small numbers of flash drives both returns very fast response times and also effectively removes I/O from the system.”
The analogy is a traffic jam: if 20% of cars are removed, the volume may still be high but the traffic flows.
“Because all the queues really get cleaned out and there are fewer I/Os in the system, all the other drives run faster too. So not only do all the flash drives run faster, but all the other drives now also run faster. It has this magnification effect on the speed of the whole system,” says Wambach.
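The traffic-jam effect falls out of basic queueing theory: near saturation, response time is extremely sensitive to load. A stylised single-server (M/M/1) model – nothing like a real array, and with purely illustrative utilisation figures – is enough to show the shape of the curve:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

SERVICE_RATE = 100.0   # I/Os per second one spindle can complete (assumed)

# Spindle running at 95% utilisation, before any flash is added
busy = mm1_response_time(95.0, SERVICE_RATE)

# Offload 20% of the I/O to flash: the spindle drops to 76% utilisation
relieved = mm1_response_time(0.8 * 95.0, SERVICE_RATE)

print(f"Response time drops {busy / relieved:.1f}x")  # 4.8x
```

Removing a modest slice of traffic from a heavily loaded drive cuts its queueing delay several-fold, which is the "magnification effect" Wambach describes: the remaining spindles run faster even though their hardware is unchanged.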
And that effect was an appropriate preparation for what is seen as the next step in the optimisation of those multi-tier storage structures.
“The low-hanging fruit for flash was high-performance applications. The second [opportunity] is the realisation that if a small proportion of an application is accelerated the whole thing runs faster,” says Wambach.
The next phase vendors such as EMC are planning involves automatically managing data across storage tiers. It effectively means applying information lifecycle management principles within a single machine: data is moved automatically between flash, fibre channel and low-cost SATA drives in the same cabinet, depending on the changing value of that data.
Automatic is the critical word here, because historically such management has required frequent manual intervention.
EMC calls its implementation Fully Automated Storage Tiering; the first FAST products are due toward the end of 2009 and will work across all of EMC’s platforms (initially at the file level on Celerra and at the device or LUN level on Symmetrix and CLARiiON machines).
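The mechanics of such automated tiering can be sketched as a periodic pass that promotes or demotes data based on observed access rates. How FAST actually decides is not described here, so the thresholds, tier names and extent granularity below are purely hypothetical:

```python
# Hypothetical tiering pass: thresholds and tier names are illustrative
# assumptions, not EMC's (or any vendor's) actual policy.
def retier(extents, hot_threshold=1000, cold_threshold=10):
    """Assign each extent a tier from its I/O count in the last window.

    extents: dict of extent id -> I/Os observed in the measurement window.
    Returns: dict of extent id -> tier name.
    """
    placement = {}
    for extent, ios in extents.items():
        if ios >= hot_threshold:
            placement[extent] = "flash"  # busiest data to the fastest tier
        elif ios <= cold_threshold:
            placement[extent] = "sata"   # static data to cheap capacity
        else:
            placement[extent] = "fc"     # everything else in the middle
    return placement
```

Run after each measurement window, a pass like this moves data as its use-profile changes – for example, `retier({"lun7": 5000, "lun3": 200, "lun9": 2})` would place lun7 on flash, lun3 on FC and lun9 on SATA – without a storage administrator touching anything.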
Others take a slightly different approach to the same problem of optimising for the characteristics of different tiers. Pillar, for example, offers ‘application-aware’ tiering. Such prioritisation should drive further demand for flash.
“One of the reasons customers say they do not have flash today is that there are not enough storage administrators available to move the data around. FAST will take those pain points away,” says Frank Hauck, executive VP at EMC’s storage division.
That will also mean a change in the storage mindset. Customers need to blend these tiers together rather than thinking of flash for ultra-high-performance work, FC for high-speed work and SATA for capacity, says Wambach. Instead, they will be focusing on the data that is static and locating it on SATA, then finding the data that is very dynamic and putting that on flash, he says.
That means the type of data will become abstracted from any type of device. Its changing use-profile means it will be moved to where it – and therefore the system as a whole – performs best.
“With FAST we are entering into a new realm where you have to be thinking about tiers not as physical devices but as service levels that need to be delivered to the customer,” says Wambach. So while there might be only three or four tiers of physical storage, there could be dozens of service levels that use different blends of tiers, he says.
That concept of matching service levels to the application is one that Pillar is already pushing hard. “SSD can be put into the pool of storage and set up so only the most performance-hungry applications use it,” says Maness. “But [the opportunity] is there to segment the workload and focus on critical performance applications.”
For the next three or four years, the target for that will be NAND flash. But there are a whole slew of solid state technologies coming down the pipeline, technologies with such esoteric names as phase-change memory (PCM), magnetoresistive RAM and Nano-RAM.
Those technologies promise to relegate mechanical storage – something of an analogue anomaly in what is for most organisations an almost exclusively digital stack – if not to history then at least to where magnetic tape sits today.