Since the 1960s, computing power has been on an exponential performance trajectory, keeping pace with business needs as 'Moore's law', which states that computing power doubles roughly every 18 months, has held true. Storage, which is as essential to enterprise IT as computing power, has, however, followed a much slower performance improvement trajectory for the best part of two decades.
Businesses have found, as their applications grow more demanding, that it is harder and harder for the storage piece to keep pace with the faster compute capacity they install every year.
Imagine the following contexts: a financial services institution needs to clear a large number of payments in short order, or face regulatory fines. A social network needs to run real-time analytics on advertising effectiveness to remain competitive.
A law firm needs to roll out virtual desktops to maintain confidentiality of data whilst enabling a mobile workforce and a bring-your-own-device strategy.
All of these enterprise IT applications face a critical performance bottleneck in traditional disk-based storage, which simply cannot handle the volume of interactions these applications demand within acceptable performance parameters.
Over the last year flash storage has moved from use primarily in mobile devices and laptops into the corporate data centre, where the need for faster storage systems has long been felt. While racks, or arrays, of traditional disk drives are still by far the dominant storage media in the data centre, the use of flash continues to grow.
However, preconceptions over historically high cost, apprehension about scalability and the lack of a reliability track record for first-generation all-flash appliances occasionally raise undue concerns with finance chiefs and technologists. Here we debunk three flash myths that every forward-looking CIO should be aware of:
Myth one – Flash costs more
The absolute cost of flash storage has come down significantly over the last few years, making the relative cost per TB more attractive than ever in a flash vs. traditional array comparison.
Flash arrays have higher performance throughput, which allows you to use fewer arrays to manage the same workloads, especially when you run sophisticated in-line data reduction technologies (with no performance impact) that identify and remove unnecessary multiple copies of the same information.
Traditional disk does not have the performance to run this sort of deduplication software in real time, so you need more capacity to support the same amount of data.
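The data reduction idea described above can be sketched in a few lines. This is a toy illustration, not EMC's implementation: it assumes fixed-size blocks and uses a content hash so that identical blocks are stored physically only once.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this toy example

class DedupStore:
    """Toy content-addressed store: each unique block is kept once."""

    def __init__(self):
        self.blocks = {}  # SHA-256 digest -> block bytes (stored once)
        self.index = []   # ordered digests representing the logical stream

    def write(self, data: bytes) -> None:
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # deduplication: only store the block if it is new
            self.blocks.setdefault(digest, block)
            self.index.append(digest)

    def physical_bytes(self) -> int:
        # space actually consumed after deduplication
        return sum(len(b) for b in self.blocks.values())

    def logical_bytes(self) -> int:
        # space the data would consume without deduplication
        return sum(len(self.blocks[d]) for d in self.index)
```

Writing ten identical 4 KB blocks through this store consumes 4 KB physically while representing 40 KB logically, which is the effect that lets a flash array support the same data with less installed capacity.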
Additionally, as workloads demand higher throughput rates (measured in input/output operations per second, or IOPS), many enterprises are finding their workloads perfectly positioned for flash's cost-to-performance ratio.
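A rough sizing calculation shows why the IOPS gap matters for cost. The per-device figures below are illustrative assumptions (roughly 200 IOPS for a 15k RPM hard disk, 100,000 for an enterprise SSD), not vendor specifications:

```python
import math

# Assumed, illustrative per-device IOPS figures:
HDD_IOPS = 200       # ~15k RPM hard disk
SSD_IOPS = 100_000   # enterprise-class SSD

def drives_needed(workload_iops: int, per_drive_iops: int) -> int:
    """Minimum number of drives to satisfy a workload's IOPS demand."""
    return math.ceil(workload_iops / per_drive_iops)

workload = 500_000  # hypothetical workload requiring 500k IOPS
print(drives_needed(workload, HDD_IOPS))  # 2500 HDDs
print(drives_needed(workload, SSD_IOPS))  # 5 SSDs
```

Under these assumptions an IOPS-bound workload needs orders of magnitude fewer flash devices, which is where the cost-to-performance argument comes from even when flash costs more per TB.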
As well as reducing the need for storage over provisioning, Flash-based SSDs require much less power, cooling and physical space than HDDs, especially once you’ve run de-duplication and installed less capacity to begin with.
With power in the data centre usually among the top three IT operating costs, alongside people costs and data centre space, any reduction in these generates significant savings for the business. Hitting all of them at once, as you can with flash, brings manifold returns.
Deploying flash within storage arrays designed specifically to take advantage of flash-based media also reduces storage administration overhead.
Revolutionary internal architecture completely eliminates complex set-up and tuning steps, while inherently delivering maximum performance.
With a single tier of flash-based media deployed, there is no more management of storage tiers or 'chasing of performance hotspots'.
There’s more to the enterprise use of all-flash storage arrays than just TCO and OPEX, however; making the financial aspect work simply removes a reason to not consider the change.
Myth Two – It is not field-tested, and represents a bigger risk than traditional storage
On the face of it, this is true: enterprise use of flash is newer than traditional storage platforms. However, EMC has been using flash-based media in its arrays since 2008, and when examining any new technology for deployment you have to evaluate the underlying principles, not its 'newness', or no innovation would ever happen.
So, to those underlying principles:
Flash has no moving parts. Physical wear and tear is a key cause of failure in traditional disk; in stress tests, flash disks' failure rates are much lower.
The lack of moving parts also makes flash less vulnerable to changes in environmental factors, such as temperature and moisture, which are key points of vulnerability for traditional IT systems. It also makes flash better suited to more varied scenarios, such as 'mobile' computing resources, hence its rapid adoption in mobile phones.
Its higher performance also reduces the risk of in-flight data corruption.
Every bit of the architecture, algorithms, and software implementation is designed to minimise write amplification and write operations to flash, and ensure optimal wear leveling (the process designed to extend the life of solid state storage devices).
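Wear levelling, as defined above, can be illustrated with a minimal sketch. Real flash controllers are far more sophisticated (they remap logical to physical addresses, track hot and cold data, and so on); this toy version only shows the core idea of directing each write to the least-worn block:

```python
class WearLeveler:
    """Toy wear leveller: always write to the least-worn physical block."""

    def __init__(self, num_blocks: int):
        # program/erase cycle count per physical block
        self.erase_counts = [0] * num_blocks

    def allocate(self) -> int:
        # choose the block with the fewest erase cycles so far
        block = min(range(len(self.erase_counts)),
                    key=lambda i: self.erase_counts[i])
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
writes = [wl.allocate() for _ in range(8)]
# 8 writes spread evenly over 4 blocks: each block erased exactly twice
```

Because no block accumulates erase cycles faster than any other, the device as a whole reaches its endurance limit later, which is how wear levelling extends the life of solid-state storage.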
Myth Three – The performance isn’t worth the investment
It should already be clear that there are few applications that would not benefit from the increased performance Flash has to offer.
Given that current-generation SSDs give you perhaps 100 times faster performance, you have huge flexibility in how you put that increased performance to use.
First, you can use the increased performance to support ‘next generation workloads’: applications that require extremely responsive systems, running at low latency or high throughput.
This could include High-Frequency Trading Systems, Exchanges, Gaming Platforms, Social Networks, virtual desktop infrastructure in any industry, online commerce platforms, mobile banking platforms, data analytics platforms and so on.
Second, you can re-architect your second-platform applications for far greater efficiency and performance. The higher throughput means you can run deduplication to good effect, fit more data into less storage and physical space, and bring down your operating costs accordingly.
Third, flash offers unparalleled flexibility. Organisations of all sizes need to deploy new platforms to deal with the abrupt changes they face.
They need to ensure that work is delivered at an affordable cost, without exposing them to uncomfortable risks. Whether they decide to scale up, scale out or both, storage needs to adapt to that change, and using flash minimises the impact and disruption on the whole IT ecosystem.
While these are the three most common misconceptions I hear when talking to customers around EMEA, it’s increasingly hard to doubt the argument that flash storage arrays are already presenting new opportunities to businesses that embrace the technology. Put simply, the move to flash is just too logical and attractive.
We’re entering an era where storage will be software-defined, flash focused, automated and simpler to manage. This will unlock a world of opportunity for businesses, and turn IT from an operational necessity into a strategic function.
We are seeing a huge acceleration in interest in the technology from both those needing support with next generation workloads and those grappling with the challenge of operational performance and efficiency from more traditional workloads against a growing data deluge.
In my role it's easy to get over-excited when we see disruptive technology asserting itself in the industry, but all-flash arrays genuinely have paradigm-shifting potential. They will transform tomorrow's business, and with advances in software that make flash easier to manage and protect, this transformation is gaining ever more support from forward-looking CIOs.
Sourced from Sean Horne, CTO & senior director Enterprise and Mid Range Storage at EMC