The hype machine is still in overdrive in the storage market, cranking out myths that businesses need to see through if they don’t want to be bamboozled into adopting solutions that won’t meet their requirements.
While there are plenty of things storage vendors want you to believe, here are ten things they don’t want you to know.
1. Five-year warranties should be standard
Most storage vendors include a 12-month warranty and will probably extend it to three years if you push them hard. But ask for a five-year guarantee and they'll hike the total price by a huge amount. Why? Because they like you to rotate kit every three years. Moore's law means it makes sense to swap out servers, but not necessarily storage. Storage should not only last five years, it should still perform well at the end of them. Push for fairer treatment.
2. You should be able to use all the storage you buy
In any scenario outside the storage world, no one would accept being able to use only a percentage of the product they purchase. If you buy a five-bedroom house, you should be able to use five bedrooms. If you buy a five-seater car, you should be able to use five seats. If you buy storage, you should be able to use 100% of its capacity without performance degradation.
However, many storage vendors issue "best practice" guidelines warning customers of dramatic performance loss if they use more than 75% of capacity. It is possible to deploy a storage array that runs at 100% capacity without any performance degradation, so why aren't most storage architectures designed for this?
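That 75% cap has a direct cost. A quick sketch, using entirely hypothetical prices and capacities, shows how a usable-capacity restriction inflates the effective price per terabyte:

```python
# Hypothetical illustration: cost per *usable* terabyte when a vendor's
# "best practice" caps utilisation at 75% of raw capacity.
# All figures below are invented for the example.

def cost_per_usable_tb(price, raw_tb, usable_fraction):
    """Effective price per terabyte you can actually use."""
    return price / (raw_tb * usable_fraction)

price = 50_000   # array list price (hypothetical)
raw_tb = 100     # raw capacity purchased

full_use = cost_per_usable_tb(price, raw_tb, 1.00)  # 500.00 per TB
capped = cost_per_usable_tb(price, raw_tb, 0.75)    # ~666.67 per TB

print(f"At 100% usable: {full_use:.2f}/TB")
print(f"At  75% usable: {capped:.2f}/TB ({capped / full_use - 1:.0%} more)")
```

In other words, a 75% utilisation ceiling means paying roughly a third more per terabyte you can actually use.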
3. Upgrades should not be costly or complex
A large-scale upgrade, or changes to parts of the existing infrastructure, can frequently mutate into a complex project involving downtime, investment in new hardware and a hefty professional services bill. Background migrations are possible and, with true scale-out architectures, disruptive upgrades can be a thing of the past.
4. Storage doesn’t need people
The biggest cause of data centre failures is people. If you limit the interaction between humans and IT by making equipment repairable in situ, you restrict the potential for data centre failure. Around 70% of warranty-returned drives have nothing wrong with them or need only a simple recondition. The solution is to find an array vendor that repairs drives in situ and avoids any need for humans to go near the box for at least five years.
5. Quality matters when it comes to drives
Whether you’re looking at hard disk or solid-state drives, the same rule applies: beware of consumer-grade hardware. There’s a huge difference between consumer- and enterprise-grade products in the quality of components used, the testing carried out and, most importantly, the annual failure rate of the drives.
Consumer-grade hardware may well be cheaper to purchase, but operational cost and risk are likely to be much higher than for more reliable and robust enterprise-grade products. Make sure you balance capital expenditure, operational cost and risk when evaluating any type of drive.
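One way to balance those factors is a simple expected-cost model over the drive's life. The sketch below is illustrative only; the prices, annual failure rates (AFR) and replacement costs are invented, so substitute figures from your own vendor data:

```python
# Hypothetical sketch: purchase price vs annual failure rate (AFR) over a
# five-year life. All figures are invented for illustration.

def five_year_cost(unit_price, afr, drive_count, replacement_cost, years=5):
    """Purchase cost plus expected replacement cost over the period."""
    expected_failures = drive_count * afr * years
    return unit_price * drive_count + expected_failures * replacement_cost

# replacement_cost here bundles the drive, engineer time and downtime
# risk into one hypothetical figure.
consumer = five_year_cost(unit_price=100, afr=0.05, drive_count=200,
                          replacement_cost=1_000)
enterprise = five_year_cost(unit_price=250, afr=0.01, drive_count=200,
                            replacement_cost=1_000)

print(f"Consumer-grade:   {consumer:,.0f}")    # 70,000
print(f"Enterprise-grade: {enterprise:,.0f}")  # 60,000
```

With these made-up numbers the cheaper drive costs more over five years; the point is not the specific figures but that AFR belongs in the purchasing calculation alongside sticker price.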
6. Flash isn’t the saviour of the universe
Flash is not the answer to every problem. It is a great tool to help certain workloads’ performance but flash has its limitations. When it comes to large sequential writes for example, hard disk drives are much more appropriate. Flash and hard disk are different tools that can and should be deployed for different jobs.
To get the blend of media right for their storage requirements, businesses should talk to vendors that are not restricted to a single type of media when deciding which tool is most appropriate.
7. All-flash arrays aren’t always more power efficient
Many people have been led to believe all-flash arrays are more power efficient than hard disk arrays. When looking at the power requirements of storage you need to consider the whole array, not just the drive modules, to get the full picture.
The truth is, some hard disk arrays use less power than all-flash ones. What matters most is how vendors implement the more power-hungry components, such as processors and cache memory.
8. Vibration kills predictable performance
Excessive vibration can cause reliability issues, but what a lot of people don’t realise is it can also seriously affect performance.
While it is possible to eliminate vibration and deliver 100% consistent performance, it's not easy, and it takes significant investment to design such an array. It may sound crazy, but one easy way to test storage for consistent performance is to give it a good shout and see what happens; you might just be surprised.
9. You don’t need all those bells and whistles
Decisions around enterprise storage have typically been built on feature checklists rather than the reliability and performance of the array itself. With the emergence of software-defined storage (SDS), a significant amount of the required functionality is shifting from the storage array to hypervisors, operating systems and applications.
Many enterprise storage array “deals” end up locking customers into proprietary software that only works on one platform. SDS gives businesses the flexibility to choose the platform they want without worrying about the underlying hardware; this improves their bargaining power and frees them to focus on performance and reliability.
10. Scalable storage isn’t all the same
Many vendors offer scalable storage, but most neglect to explain that it relies on a central legacy storage controller to power performance-hungry software stacks and applications. This often forces significant upgrades just to scale capacity and performance.
However, there is an alternative to this traditional “scale-up” model: true scale-out units that work seamlessly together in a single storage pool, growing capacity and performance in step.
Instead of performance and reliability, the storage market is currently, sadly, largely built on hype and jargon. What customers really need are straight-talking, no-fuss vendors who help them get the best storage platform for their businesses. There may not be many of them, but they do exist.
Sourced from Gavin McLaughlin, X-IO Technologies