AbsurdIT: the old data centre computing model is broken

Many of today’s data centres are densely complex and expensive, and they are blocking what businesses want to do.

Data centres have a long history of being the unknown and mysterious rooms of the enterprise. They are hidden away, sometimes in basements or at the extreme end of the back office, and access is heavily restricted.

Even if you can get in, the sights aren’t always pretty. Behold decades of technology layers: mismatches of hardware designs, software platforms and physical interfaces; snaking cabling; crammed racks and more.

The mess is understandable, even if it’s not quite forgivable. As companies have become increasingly digital, they have had to add bits and bytes, and ways to shuttle and store them, in order to speed up operations. And they have often done this without retiring previous generations of data centre gear, because they have been afraid to dispose of systems they think they might still need. All too often, companies don’t know what they have or what dependencies exist, so the status quo persists.

Stasis leads to some predictably sub-optimal (and, frankly, absurd) outcomes. Large companies frequently run ancient versions of software on legacy systems. Recruiting people who understand how to maintain and secure these elderly systems is tough and getting tougher.

Companies that dispensed with older approaches and embraced client/server and newer technologies aren’t necessarily better off: the spaghetti cranked out by generations of systems from various vendors has created problems of space, heat, complexity and high energy consumption.

Little wonder that there is a thriving boutique business in designing and refurbishing data centres. Some even repurpose spaces ranging from cowsheds, aeroplane factories and caves to churches, military bunkers and salt mines.

Attempts to cool facilities have led to a boom in firms selling liquid-cooling systems, fans, heat sinks, air- and glycol-cooled chillers and other devices. And here’s the rub: cooling sucks up roughly as much electricity as the machines it takes the heat off.
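
To put that in context, data centre efficiency is often measured as power usage effectiveness (PUE): total facility power divided by the power drawn by the IT equipment alone. As a rough, simplified illustration (ignoring lighting and other overheads), a facility whose cooling draws as much power as its 1MW of IT kit would score (1MW + 1MW) / 1MW = 2.0 – double the theoretical ideal of 1.0.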

We all know why we have this absurdity (or absurdIT, if you will). Change is tough and, in the case of the data centre, often requires comprehensive auditing and mapping of dependencies before cost-justification documents are drawn up and time is set aside to make it happen. But as George Bernard Shaw wrote: “Progress is impossible without change, and those who cannot change their minds cannot change anything.”

Just keeping the data centre lights on is not good enough in today’s hyper-competitive economy, where 40 per cent of Fortune 500 companies are not expected to exist in 10 years, according to a forecast from Washington University’s John M. Olin School of Business, and where companies now stay in the Standard & Poor’s 500 for an average of 15 years, compared with 67 years in the 1920s, according to Richard Foster of Yale University.

Longevity at the top is very likely to shrink further. The ‘Uber for X’ phenomenon is ubiquitous and there will be fewer long-term industry incumbents; even the disruptors will themselves be disrupted, and quickly.

This should be pointing modern business leaders in a very clear direction, with luminous cones marking the route. Data centres remain critical to business success. They are the nervous systems, heartbeats, engines, logistics hubs and knowledge depots of the enterprise. Ignore them, or merely sweat the assets, at your peril.

You need to modernise IT or accept the risk of being rendered uncompetitive and unable to change. IT projects that take multiple years are no longer feasible at a time when pretty much every sector is being disrupted and discomforted by the arrival of start-ups with zero legacy. Months-long procurement cycles and labyrinthine processes won’t cut it: organisations today need an IT architecture that is flexible and adaptive, and where workloads can glide gracefully between deployment models.

>See also: Fortune telling: what’s in store for the data centre in 2017

But it’s not easy to create a data centre for the future: different workloads have different requirements for performance, security and data governance. And the new data centres must be protean, ready for all the events we know might happen – and even for events we have no current knowledge of or perspective on.

The answer to this conundrum lies in a modernised, multi-cloud approach where workloads are matched with public cloud, private cloud, hybrid or on-premises deployment models, all managed from a single control panel, and where new technologies continue to drive gains in value, performance and efficiency. The old data centre model is broken, but those that acknowledge that fact and accept the challenge of fixing it will be best equipped to disrupt others… and avoid being disrupted.
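
To make the idea of matching workloads to deployment models concrete, here is a minimal sketch, not any vendor’s actual tooling: the workload attributes and placement rules below are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: route a workload to a deployment model based on
# its data-governance, performance and demand characteristics.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_residency_required: bool  # must the data stay in-house?
    latency_sensitive: bool        # needs proximity to users or other systems?
    bursty: bool                   # unpredictable spikes in demand?

def place(workload: Workload) -> str:
    """Return a deployment target for the workload (illustrative rules only)."""
    if workload.data_residency_required:
        # Governed data stays in-house; latency decides how close it sits.
        return "on-premises" if workload.latency_sensitive else "private cloud"
    if workload.bursty:
        return "public cloud"  # elastic capacity absorbs the spikes
    return "hybrid"  # steady base load in-house, overflow to the cloud

print(place(Workload("payments", data_residency_required=True,
                     latency_sensitive=True, bursty=False)))  # -> on-premises
```

In practice, the single control panel described above would evaluate policies like these continuously, moving workloads as requirements and costs change.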


Sourced from Chris Kaddaras, VP EMEA, Nutanix
