The legacy of legacy tools: Vendor lock-in vs. innovation

Today’s business world is all about agility and responsiveness. Business owners want to bring exciting new capabilities to their customers immediately.

There is no time or willingness to wait, and as a result, IT is being asked to reinvent itself to support these new business demands.

Where IT organisations were historically focused on building static environments managed in silos, they are now challenged with building dynamic, highly reliable and flexible infrastructures.

Having successfully delivered value with virtualisation, IT is now in the process of adopting clouds (both private and public) to take the data centre to the next level – a direct response to these business demands.

>See also: Highly customised ERP systems soon relegated to 'legacy' status, says Gartner

However, this adoption was not an option for IT – it was a necessity. With options like Amazon AWS and Microsoft Azure, business owners can now get the infrastructure capacity they need with a swipe of their credit card.

As a result, IT finds itself competing with alternatives it had never encountered before. Clouds are becoming an integral part of an organisation’s infrastructure portfolio – whether for tactical “as needed” cloud bursting or as part of a strategic “hybrid model”, the data centre is rapidly expanding beyond the walls of many organisations.

The ability to dial up and dial down capacity to meet dynamic application needs has completely changed the IT paradigm.
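As a rough illustration of what that elasticity means in practice – a minimal sketch in Python, where the function and scaling thresholds are entirely hypothetical rather than drawn from any particular cloud provider's API – capacity can be recalculated continuously from observed demand:

```python
# Minimal sketch of elastic capacity: recalculate the node count from observed
# demand. The function and thresholds are hypothetical, not a vendor API.

def desired_capacity(current_nodes: int, cpu_utilisation: float,
                     target_utilisation: float = 0.60,
                     min_nodes: int = 2, max_nodes: int = 50) -> int:
    """Scale the node count so average CPU utilisation moves toward the target."""
    if cpu_utilisation <= 0:
        return min_nodes
    proposed = round(current_nodes * cpu_utilisation / target_utilisation)
    return max(min_nodes, min(max_nodes, proposed))

# Example: 10 nodes at 90% CPU scale out to 15; at 20% they scale in to 3.
print(desired_capacity(current_nodes=10, cpu_utilisation=0.90))  # 15
print(desired_capacity(current_nodes=10, cpu_utilisation=0.20))  # 3
```

A real deployment would delegate this logic to the provider's autoscaling service, but the principle is the same: capacity follows demand rather than a fixed procurement cycle.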

And yet, with all this excitement, a relic of yesterday’s data centres continues to persist: legacy tools.

These tools, built years ago to monitor and manage static and well-defined environments, are still being used today in even the most modern environments.

Dinosaurs of IT operations, these tools have limited depth for today’s modern technologies and are too inflexible for the dynamic nature of today’s environments. However, they continue to exist for a couple of reasons.

First, old but critical applications remain in place, and these tools have historically been relied upon to monitor them. Second, organisations have made massive investments – time, money and other resources – in customising these tools and are now reluctant to walk away.

While these are sunk costs that shouldn’t factor into the right decision for the future, there is always an element of risk in change, and organisations need a compelling reason (or reasons) to take on the challenge.

Today, there are multiple compelling drivers for moving from legacy tools to unified tools. While the small percentage of tools monitoring that “old, critical app” may well remain, most other legacy technologies are likely to be replaced within the next few years.

Legacy tools were built for static environments where the infrastructure was dedicated and the demand predictable, and often linear. None of this holds true anymore. Today, infrastructures are dominated by virtualisation, converged infrastructure, and cloud platforms – and thus there is the need to manage infrastructure that is fluid and dynamic.

Acknowledging the shortcomings of their tools, many legacy vendors have come to market with bolted-on or cobbled-together acquired solutions. The cross-product integration is often weak and superficial, resulting in multiple gaps and blind spots. These blind spots, coupled with the tools’ limitations, force manual intervention, which increases both resolution costs and mean time to resolution (MTTR) and ultimately results in excessive downtime.

Originally built during the 90s (in the heyday of IT), these tools came with sticker prices that would be considered obscene by today’s standards.

These high-priced solutions continue to carry high annual licence maintenance fees and impose licensing restrictions on their customers. Making matters worse, many of these tools have not kept up with changing technology and have forced their customers to make significant additional investments to build and maintain inter-tool connectors, further increasing the ongoing care-and-feeding effort required. Together, these factors make for tools that carry very high operational costs.

Today’s hyper-competitive business climate means that there is no room for downtime. It is no surprise that the business impact for each hour of IT downtime continues to grow.

A recent Forrester Consulting survey indicated that the cost of downtime for nearly half the organisations was $100,000 per hour or more. As these costs continue to escalate, organisations are realising that they cannot afford the slow, manual root-cause isolation process that is the norm with legacy tools.
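To put that figure in context, a back-of-the-envelope calculation using the survey’s $100,000-per-hour number – the incident counts and MTTR values below are purely illustrative assumptions – shows how quickly slow root-cause isolation compounds:

```python
# Back-of-the-envelope downtime cost, using the survey's $100,000/hour figure.
# The incident counts and MTTR values below are illustrative assumptions only.

COST_PER_HOUR = 100_000          # USD, per the Forrester Consulting survey
incidents_per_year = 12          # assumed
mttr_legacy_hours = 4.0          # assumed: slow, manual root-cause isolation
mttr_unified_hours = 1.0         # assumed: automated, contextual alerting

legacy_cost = incidents_per_year * mttr_legacy_hours * COST_PER_HOUR
unified_cost = incidents_per_year * mttr_unified_hours * COST_PER_HOUR

print(f"Annual downtime cost (legacy tools):  ${legacy_cost:,.0f}")   # $4,800,000
print(f"Annual downtime cost (unified tools): ${unified_cost:,.0f}")  # $1,200,000
print(f"Potential saving:                     ${legacy_cost - unified_cost:,.0f}")
```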

To maximise the return on modern infrastructure investments, IT teams need modern monitoring solutions that provide not only timely alerts, but also intelligent context around them.

It is not enough to know that a component is failing; teams need to know whether the alert will actually result in an application disruption or whether redundancy will absorb it, and whether the impact will be felt by business end users. If there is an impact, which services will be affected, and how will the business be impacted?
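One way to picture that kind of context is sketched below; all of the class names and fields are hypothetical illustrations of the idea, not any particular monitoring product’s data model:

```python
# Hypothetical sketch of enriching a raw component alert with business context.
# All class and field names here are illustrative, not any vendor's actual API.

from dataclasses import dataclass, field

@dataclass
class ComponentAlert:
    component: str            # e.g. "disk-array-07"
    severity: str             # e.g. "critical"

@dataclass
class EnrichedAlert:
    alert: ComponentAlert
    redundant: bool           # is there a standby that absorbs the failure?
    affected_services: list = field(default_factory=list)
    business_impact: str = "none"

def enrich(alert: ComponentAlert, topology: dict) -> EnrichedAlert:
    """Attach redundancy and service-impact context to a raw alert."""
    node = topology.get(alert.component, {})
    redundant = node.get("has_standby", False)
    services = node.get("services", [])
    impact = "none" if redundant else ("degraded" if services else "unknown")
    return EnrichedAlert(alert, redundant, services, impact)

# Example: a failing array that backs the online ordering service.
topology = {"disk-array-07": {"has_standby": False, "services": ["online-ordering"]}}
print(enrich(ComponentAlert("disk-array-07", "critical"), topology))
```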

As discussed, legacy tools are deployed in silos: they are disparate and standalone, and lack visibility into enterprise-wide information. They are therefore unable to provide meaningful insight for issue prioritisation, or into trends that might impact future availability if left unaddressed. Lastly, with data scattered across so many disparate tools, effective capacity planning is manual and error-prone.

>See also: Gold for innovation: The technology legacy of the London 2012 Olympics

Even though software-defined everything (SDx) may sound like vendor speak, it is exactly how IT is taking shape. As application orchestration becomes automated, there is a need for tools that are provisioned just as dynamically, to ensure that newly orchestrated applications continue to deliver value in a reliable manner. Again, legacy tools simply do not have the capability to do this.
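The idea can be sketched as follows – monitoring registration happening in the same automated step as orchestration – with all function names hypothetical:

```python
# Hypothetical sketch: monitoring is provisioned in the same automated step
# that orchestrates a new application instance. Names are illustrative only.

def deploy_application(name: str, instances: int) -> list[str]:
    """Stand-in for an orchestrator provisioning new application instances."""
    return [f"{name}-{i}" for i in range(instances)]

def register_with_monitoring(instance_ids: list[str]) -> None:
    """Stand-in for adding each new instance to the monitoring platform."""
    for instance_id in instance_ids:
        print(f"monitoring: now watching {instance_id}")

# Orchestration and monitoring happen together -- no manual follow-up step.
new_instances = deploy_application("order-service", instances=3)
register_with_monitoring(new_instances)
```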

No matter where an organisation stands on the migration away from legacy tools, one thing is certain: the status quo is not going to last.

For some organisations, this may mean keeping some legacy tools while taking steps toward incremental modernisation within their infrastructure. For others, it might be time for spring-cleaning and a garage sale of their existing tools.

In either case, as newer tools continue to fulfil enterprise demands for more open, scalable and cost-effective solutions, more and more organisations will come to accept that their father’s old Chevy has outlived its usefulness.


Sourced from Deepak Kanwar, senior manager, Zenoss

