Simple to procure, quick to deploy and easy to scale, hyper-convergence offers many of the benefits of the cloud. Yet it delivers them on-premises, without reliance on third-party providers.
This may be an over-simplification, but simplification is the whole point of hyper-convergence. By stripping out complexity, it makes life a whole lot easier, and that is proving extremely popular.
That appeal has seen the market grow 64.7% year-on-year, causing the big tech vendors to sit up and pay attention. Originally driven by agile start-ups Nutanix and SimpliVity, the technology has now drawn in Dell EMC and HPE, who have joined the party in a big way.
A response to past IT failures
In many respects, hyper-convergence is a response to the failures of the cloud and on-premises IT models, failures which have led to hybrid environments becoming commonplace.
While cloud computing has many benefits, there are also downsides. For instance, organisations tend to spin up more and more resources in the cloud, inflating costs well beyond the original estimates. This becomes particularly problematic with highly volatile workloads, which cause huge unpredictability in monthly bills, something that is anything but popular from a procurement perspective.
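To make the point concrete, here is a minimal sketch of how a volatile workload turns a simple per-hour rate into unpredictable bills. The rate, baseline and burst figures are hypothetical, not any provider's real pricing:

```python
# Illustrative only: hypothetical $/instance-hour rate and burst sizes.
HOURLY_RATE = 0.50          # assumed cost per instance-hour
BASELINE_INSTANCES = 10     # capacity the team originally budgeted for


def monthly_cost(instance_hours: float, rate: float = HOURLY_RATE) -> float:
    """Cost for one month given total instance-hours consumed."""
    return instance_hours * rate


# A steady workload: 10 instances running 24 h/day for 30 days.
steady = monthly_cost(BASELINE_INSTANCES * 24 * 30)

# A volatile workload: demand spikes force bursts of extra instances
# that were never in the original estimate.
spiky_months = [
    monthly_cost((BASELINE_INSTANCES + burst) * 24 * 30)
    for burst in (0, 5, 18, 2)   # hypothetical burst sizes per month
]

print(f"budgeted: ${steady:,.2f}")          # the flat figure procurement signed off
print([f"${m:,.2f}" for m in spiky_months])  # what actually arrives each month
```

The budgeted figure here is $3,600, but the spiky months range up to $10,080: same service, same rate card, yet a bill nearly three times the estimate.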
That uncertainty may be far from ideal, but the benefits have often outweighed the pain of kitting out your own data centre with an amalgamation of technology: computing power, storage, software, networking, cooling systems and so on. Doing so can involve planning three or four years ahead, even as new solutions constantly land on the market and reshape the technology landscape every couple of years.
A third way forward
By combining that amalgamation of technology in a single stack, however, hyper-convergence has created a third way forward. Binding the hardware and software together makes configuration far easier and deployment much quicker. Everything can be managed from one console, and if you want to scale up your workload, you just bolt on another stack.
It makes sense to add a whole new stack in this way, too, as you're rarely going to increase your server capacity without increasing your storage as well. The correlation is so strong that vendors in this market have stopped pigeonholing this as a niche solution for web service providers and VDI, and are now looking at a much broader potential market.
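The scale-out sizing logic above can be sketched in a few lines. The per-stack capacities are hypothetical (real appliance specs vary by vendor); the point is that capacity only grows in whole-stack increments, with compute and storage in lock-step:

```python
import math

# Hypothetical per-stack capacities for illustration; vendor specs vary.
CORES_PER_STACK = 32
STORAGE_TB_PER_STACK = 40


def stacks_needed(required_cores: int, required_tb: float) -> int:
    """Scale-out sizing: bolt on whole stacks until both the compute
    and the storage requirement are satisfied."""
    by_compute = math.ceil(required_cores / CORES_PER_STACK)
    by_storage = math.ceil(required_tb / STORAGE_TB_PER_STACK)
    return max(by_compute, by_storage)


print(stacks_needed(100, 100))  # compute-bound workload
print(stacks_needed(40, 300))   # storage-bound workload
```

Note the storage-bound case: meeting 300 TB forces the purchase of eight stacks, and with them far more compute than the 40 cores actually required. That trade-off is exactly the first concern raised later in this piece.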
This model is also proving popular beyond the IT department. Procurement teams, for example, have become big fans: they only need to deal with one vendor, and the 'pay as you grow' approach means there's no need to plan years ahead.
So, is hyper-convergence the future?
As fast as hyper-convergence is growing, it still represents only a small section of the market, and it's too new for any potential negatives around the refresh cycle to have surfaced. So it wouldn't be sensible to get too carried away just yet. And when you do step back and look at the bigger picture, there are a couple of big issues.
Firstly, we need to recognise that while storage will grow in line with server capacity, the reverse isn't necessarily true: storage requirements can grow much faster than your need for compute.
Now, you might well say, just bolt on a SAN. But if you do that, you start to lose the benefit of hyper-convergence and soon find yourself returning to that tech amalgamation model.
Secondly, when storage is archival and the workload predictable, cloud computing remains an attractive alternative.
In this respect, it is unlikely that hyper-convergence will replace the cloud as a future IT model. But, in a world that is increasingly hybrid in nature, this technology has serious growth potential and will play a big part in the IT mix for years to come.
Sourced by Mark Lomas, technical architect at Probrand