Understanding application workloads for more agile, innovative IT

Imagine a world without technology: without the communication systems people rely on for business, without the IT infrastructure that drives revolutionary advances in healthcare and without online financial services to accelerate trade around the world.

It’s perhaps stating the obvious that people are ever more dependent on technology. But that reliance also means that when IT stops working, or even just slows down, trouble looms. And as businesses – and people’s lives – become increasingly connected, the impact of outages and slowdowns is felt ever more widely.


This recognition that technology now sits at the heart of almost everything we do is driving a trend towards monitoring and proactively managing application performance across the entire IT infrastructure – catching glitches before they grow into bigger problems.

What’s behind IT outages?

People tend to hear about the high-profile outages: the bank’s IT failure that leaves customers without money over a long weekend, the government IT system that crashes on implementation day or the airline travel system that fails, stranding thousands of passengers at airports. But smaller-scale outages and slowdowns have implications too – the financial consequences are often discussed, but customer confidence and brand reputation can also fall victim to poorly performing IT.

Outage and slowdown triggers are widespread – from human error to power cuts – but one of the most common is implementing new technology without understanding how it will affect the existing infrastructure and applications. A new application that promises the world can bring the whole IT system down if it contends with shared infrastructure components that other workloads rely on for their performance.


Changing workloads are also an issue. Failing to anticipate huge increases in demand, for example, can overwhelm the infrastructure. And failing to manage the interaction between multiple application workloads – the so-called noisy neighbour problem, where one workload’s burst of activity starves others sharing the same resources – risks the system grinding to a halt.

Add to this the growing popularity of hybrid data centres, where legacy and new technology sit side by side in silos, making it incredibly difficult to monitor how each element is performing and how it affects other parts of the data centre.

Failing to recognise the correlation between shared and cooperating components means there’s no way of knowing for sure that the systems are truly integrated for peak performance.

Taking control

Most outages and slowdowns are predictable. In fact, many could be avoided if IT teams had full visibility across their infrastructures. But until recently nearly all of the monitoring tools available have been vendor-specific, which doesn’t help if an infrastructure is made up of different components (virtualisation layers, servers, networks, storage) from a variety of vendors. At best these tools can tell that an application is running slowly – but they certainly don’t have the insight to pinpoint where the root cause of the problem lies.

These added layers of complexity are driving the movement towards a more holistic approach to application-centric infrastructure performance management (app-centric IPM). Holistic means taking in the full view across the entire infrastructure: app-centric IPM monitors from the virtual machine right through to data storage. It records and correlates thousands of metrics in real time, every second – understanding contextually how each component fits with the others, recognising when some applications will be working harder than usual, and how to manage increases in demand.
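
To make the idea of correlation concrete, here is a minimal sketch – not Virtual Instruments’ implementation, and with entirely invented metric names and sample values – of how per-second metrics from different layers might be correlated against application latency to suggest where a slowdown originates.

```python
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation, so the sketch needs nothing beyond the standard library."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical per-second samples gathered from each infrastructure layer.
app_latency_ms = [12, 13, 12, 30, 45, 44, 14, 12]          # what the user experiences
component_metrics = {
    "vm_cpu_ready_pct":    [2, 2, 3, 3, 2, 2, 3, 2],
    "network_retransmits": [1, 0, 1, 2, 1, 1, 0, 1],
    "storage_queue_depth": [4, 5, 4, 28, 40, 39, 6, 4],    # tracks the latency spike
}

# Rank components by how closely their behaviour tracks the application slowdown.
ranked = sorted(component_metrics.items(),
                key=lambda kv: abs(pearson(kv[1], app_latency_ms)),
                reverse=True)

for name, samples in ranked:
    print(f"{name:22s} correlation with app latency: {pearson(samples, app_latency_ms):+.2f}")
# In this toy data the storage queue depth correlates most strongly, pointing the
# investigation at the storage tier rather than the VM or network layers.
```

A real IPM platform does this continuously across thousands of metrics, but the principle is the same: correlate end-to-end behaviour with component-level metrics to localise the cause.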


That level of insight spells an end to reactive troubleshooting and finger-pointing, and also means that IT teams no longer have to over-provision infrastructure for fear of future performance problems. They know exactly how their infrastructure performs and where its limits lie, and they recognise what’s truly needed to keep everything running smoothly at optimal utilisation.

By understanding application workloads and applying workload modelling and simulation best practices, the IT team can also plan ahead much more efficiently. This is already happening: teams can now predict with much greater accuracy how their workloads are likely to expand and how demands on the IT infrastructure will increase.
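
As a simple illustration of the kind of planning this enables, the sketch below fits a linear trend to hypothetical monthly demand figures and projects when the workload would outgrow the capacity provisioned today. The numbers and the capacity limit are invented, and real workload models are far richer than a straight-line fit.

```python
from statistics import mean

# Hypothetical monthly peak demand for one workload, e.g. thousands of IOPS.
history = [110, 118, 121, 135, 142, 150, 158, 171]
provisioned_capacity = 220  # assumed limit of the current infrastructure

# Ordinary least-squares fit of demand against month index.
months = list(range(len(history)))
mx, my = mean(months), mean(history)
slope = sum((m - mx) * (d - my) for m, d in zip(months, history)) / \
        sum((m - mx) ** 2 for m in months)
intercept = my - slope * mx

def projected(month_index: int) -> float:
    """Projected demand if the historical trend continues."""
    return intercept + slope * month_index

# Walk forward until the projected demand exceeds what is provisioned today.
month = len(history)
while projected(month) < provisioned_capacity:
    month += 1

print(f"Growth of ~{slope:.1f} per month; capacity of {provisioned_capacity} "
      f"is projected to be reached around month {month}.")
```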

That level of insight will completely turn the tables when it comes to selecting new infrastructure solutions: the customer will no longer have to wait for a vendor to tell them what performance they need. Instead the IT executive will be in a position to explain what’s required, and the vendor will have to prove its technology is up to the job.

What next?

No one wants to be responsible for an outage or a slowdown, so the technology capable of managing dynamic workloads is developing quickly. Infrastructure management products are starting to become more application-aware. And as app-centric IPM develops, people will see a movement towards automated systems that detect and correct resource bottlenecks, and that identify and remediate potential performance problems before users are affected.
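
A heavily reduced sketch of what such automation might look like: watch a stream of per-second samples and, once a metric stays above its threshold for several consecutive intervals, call a remediation hook. The thresholds, metric names and the rebalance_workload action are placeholders for illustration, not a real product API.

```python
from collections import deque
from typing import Callable, Deque

# Assumed thresholds; in practice these would come from baselining normal behaviour.
THRESHOLDS = {"storage_latency_ms": 20.0, "cpu_ready_pct": 10.0}
SUSTAINED_SAMPLES = 3  # how many consecutive breaches before acting

def rebalance_workload(metric: str) -> None:
    # Placeholder remediation: a real system might move a VM, throttle a noisy
    # neighbour or change a QoS policy. Here we only log the decision.
    print(f"remediation triggered for {metric}")

def watch(metric: str, samples, act: Callable[[str], None] = rebalance_workload) -> None:
    recent: Deque[float] = deque(maxlen=SUSTAINED_SAMPLES)
    for value in samples:
        recent.append(value)
        breached = len(recent) == SUSTAINED_SAMPLES and all(
            v > THRESHOLDS[metric] for v in recent)
        if breached:
            act(metric)
            recent.clear()  # avoid re-firing on the same incident

# Hypothetical per-second storage latency samples containing a sustained spike.
watch("storage_latency_ms", [8, 9, 25, 31, 28, 12, 9])
```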


Performance planning will work in tandem with advanced simulation techniques – helping to test the viability of a new system before procurement. In fact, workload modelling is already becoming a key component of many organisations’ IT strategies, and over the next few years I predict it will be adopted as a prerequisite step before infrastructure changes and upgrades in most enterprise-class data centres.

Opening up innovation

There’s no denying that technology has completely changed business practice. But it has left businesses vulnerable too. If IT shuts down or slows down, even for a short time, the business suffers.


By understanding what’s really going on within their IT infrastructures and how each application affects the others, IT teams can stop treating outages and slowdowns as an ever-present threat.

Instead of firefighting performance issues and struggling to keep the infrastructure online, the IT team can proactively identify and resolve problems before users even notice, and can be smarter about investigating new technologies that better fit their needs – allowing the business to become more agile. This increased insight is the green light for more innovative, better-performing infrastructures across the board – and there’s more to come.


Sourced by Len Rosenthal, CMO, Virtual Instruments



