The importance of smartening up a legacy data centre

Data is one of the most important commodities in the enterprise today. Whether a company specialises in handling data, selling it or simply consuming ever more of it, data is king. This is evident from a recent CBRE study showing that Europe’s data centre market is still booming, with companies leasing a record amount of capacity.

However, with data playing such a pivotal role, these centres are being forced to take on more servers, racks and hardware, all of which is becoming harder to monitor. The complexity of managing this dynamic environment of people, power and assets creates huge data blind spots and means inefficiencies go unaddressed. And the problem is only going to snowball: with IoT promising an even greater influx of data in 2018, legacy data centres will start to creak at the seams without proper planning.

>See also: Propelling legacy systems into real time

These growing levels of complexity must be met with smart monitoring and automation if data centres are to perform at the required levels for the foreseeable future. The speed of data growth, and of the business appetite for it, demands nothing less.

Measure twice, cut once

If you are looking to implement new solutions, the first thing you must do is plan. Better architectures allow network devices to be more resilient, programmable and agile.

However, poor planning is a serious pitfall. Even with the right solutions in place, organisations without a proper strategy will struggle to use them, and that will ultimately hit revenue.

Organisations need to create an executive plan with objectives and expectations to show exactly what benefits can come from the implementation, and more importantly, how solutions and frameworks can be used by the people at the coalface.

Without this, new software and hardware will be put in place and used by engineers who have little to no knowledge of their true potential.

Keeping data in plain sight

As more and more devices are added to corporate networks, data centres are being packed with racks at a much higher density than ever before, both to store the resulting data and to run the applications and services the business demands. This is pushing inherent, underlying issues to the forefront that can no longer be ignored.

>See also: AbsurdIT: the old data centre computing model is broken

First of all, more racks in the same space leads to heat pockets building up throughout the centre. Left unaddressed, these can have huge ramifications for performance and could ultimately lead to internal meltdowns, and not just for the poor frazzled engineer trying to salvage core data.

However, the ability to track data across the centre, knowing its granular position throughout the estate, allows cooling to be balanced more efficiently, letting the data centre run smoothly, without hiccups.
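As a minimal sketch of what this kind of granular monitoring looks like in practice (the rack IDs, readings and drift threshold here are hypothetical, not drawn from any particular DCIM product), per-rack inlet temperatures can be compared against an absolute ceiling and against the room average to flag hot spots before they become meltdowns:

```python
# Minimal sketch: flag racks whose inlet temperature is too high, either
# absolutely or relative to the room average, so cooling can be rebalanced
# before a hot spot forms. All rack IDs and readings are hypothetical.

ASHRAE_MAX_C = 27.0   # commonly cited recommended inlet ceiling
DRIFT_LIMIT_C = 3.0   # allowed deviation from the room average (assumed)

def find_hot_spots(readings: dict[str, float]) -> list[str]:
    """Return rack IDs that breach the ceiling or drift from the room average."""
    avg = sum(readings.values()) / len(readings)
    return [
        rack for rack, temp in sorted(readings.items())
        if temp > ASHRAE_MAX_C or temp - avg > DRIFT_LIMIT_C
    ]

readings = {"rack-a1": 24.5, "rack-a2": 29.1, "rack-b1": 23.8, "rack-b2": 25.0}
print(find_hot_spots(readings))  # ['rack-a2']
```

A real deployment would feed this from sensor telemetry rather than a static dictionary, but the principle is the same: knowing exactly where heat is building lets cooling be directed there instead of overcooling the whole hall.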

Having this ability also means data centre managers don’t need to always look to expand the physical estate. With the new and improved pieces of kit installed, data centre managers can track the data and see how it can be stored more efficiently.

Without this vision, data centre managers may think they need to reach into yet another budget and waste time and money on pushing out the walls or installing more racks.

It’s not worth the trouble

Power outages and downtime at data centres cause widespread havoc for consumers, as well as for the company’s bottom line. You only have to look at the impact on Microsoft’s Azure services across Northern Europe last year for a prime example of what can go wrong, and of how customers react.

Incidents like this, especially if mistakes are not learnt from, can have a massive impact on future revenue streams. After all, users will steer clear of a brand if it appears untrustworthy.

>See also: Legacy infrastructure hindering digital transformation of supply chain

It is about time organisations safeguarded themselves, using monitoring software to ensure that the business not only stays up and running, meeting its revenue needs, but can also integrate the newest additions to the estate into smart planning for future services.

From a simple cost-benefit analysis, forgoing a data centre infrastructure management (DCIM) solution and relying on older, outdated (often manual) approaches is not worth the gamble. DCIM enables smart, real-time decision making across the entire estate, monitoring power, networking and usage patterns, to name but a few. In addition, fail-safes can be introduced, such as custom alarms that get the right, critical information to the correct team or individual so they can act before a data outage occurs.
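The custom-alarm idea can be sketched as a small routing table: each rule pairs a condition on a metric with the team that should hear about it. The rule names, thresholds and team names below are all hypothetical illustrations, not the API of any specific DCIM product:

```python
# Sketch of custom alarm routing: each rule pairs a condition on a metric
# reading with the team to notify. Names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlarmRule:
    name: str
    metric: str
    triggered: Callable[[float], bool]
    notify: str  # team or individual to alert

RULES = [
    AlarmRule("ups-load-high", "ups_load_pct", lambda v: v > 80.0, "power-team"),
    AlarmRule("pdu-near-limit", "pdu_amps", lambda v: v > 28.0, "power-team"),
    AlarmRule("switch-port-saturated", "port_util_pct", lambda v: v > 90.0, "network-team"),
]

def route_alarms(sample: dict[str, float]) -> list[tuple[str, str]]:
    """Return (rule name, recipient) for every rule its reading trips."""
    return [
        (r.name, r.notify)
        for r in RULES
        if r.metric in sample and r.triggered(sample[r.metric])
    ]

sample = {"ups_load_pct": 86.0, "pdu_amps": 21.5, "port_util_pct": 93.0}
print(route_alarms(sample))
# [('ups-load-high', 'power-team'), ('switch-port-saturated', 'network-team')]
```

The point of routing rather than broadcasting is the one the article makes: the critical information reaches the person who can act on it, instead of drowning in a shared inbox.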

Taking a pre-emptive approach will always pay dividends.

DCIM can be the watchful eye that alleviates the growing burden on existing infrastructure, providing a single view across data and performance issues in the data centre and making sure you are running as efficiently as possible. It is no longer just an add-on; it is a critical component of any data centre, saving organisations money and time while protecting performance.


Sourced by Mark Gaydos, chief marketing officer at Nlyte Software

