Solving the performance challenges of web-scale IT

Once seen only within the headquarters of big tech corporations, web-scale IT infrastructure is now used to maintain operations at scale via the cloud. These data centres and infrastructures allow for increased agility and efficiency. But as deployments grow and company systems get larger, performance can falter.

Performance challenges can also occur when moving data from an operational database to an analytical database, where there is a danger that data assets become stale, rendering them useless.

“With web-scale IT, organisations can run hybrid IT operations that enable the agility required to scale to millions of users,” said Eugene Kim, director of product strategy at Cisco AppDynamics. “The growth of sophisticated and scalable hybrid cloud architectures has meant businesses have been able to take full advantage of modern cloud infrastructures and microservices architecture to boost productivity and increase cost-efficiency.

“Despite these advantages, moving to the cloud can bring challenges to the forefront for some organisations. To solve the performance challenges of web-scale IT, technologists must have visibility into, and be able to monitor, their full IT stack in real time – from the customer’s device to the back-end application and the underlying network – to deliver great user experience and performance.”

This article will explore how organisations can keep performance in check when leveraging web-scale IT.

Visibility, insight and action

Kim went on to state that having efficient monitoring, powered by analytics, in place is key to ensuring that any performance pitfalls are overcome.

“In highly complex — cross-platform and multi-site — deployments, it can be hard for IT managers and CIOs to demonstrate the gains of migrating to the cloud,” Kim explained. “However, if they can monitor, configure and optimise the organisation’s entire technology landscape, through a single lens, they can bring the benefits to life.


“From a performance monitoring perspective, this not only means collecting massive amounts of data, but also making sense of it and turning it into actionable insights. One key approach organisations can take is visibility, insight and action; as businesses move to a hybrid approach with web-scale IT, they likely have portions of their applications still running on legacy infrastructure in their data centres, with other portions migrated to the cloud. So, while the cloud provides more scalability and agility, it also means that there are more things to monitor and manage.

“By using analytics, and visualising this data, technologists can not only reduce the complexity of monitoring, but also connect and correlate IT performance to business performance outcomes so it’s understood across the organisation. What’s more, by understanding IT operations in the context of business impact and customer experience, organisations can then benchmark performance, take action to optimise the environment and ensure the best IT performance.

“Real-time IT insights also enable organisations to proactively optimise the IT stack to prevent performance issues or quickly resolve them. Monitoring and observability are critical to your hybrid cloud strategy for business continuity and disaster mitigation.”
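
As a rough illustration of turning raw telemetry into a business-level insight, the sketch below correlates a response-time series with a conversion-rate series. It assumes both metrics have already been exported as aligned time series; the column names and figures are illustrative only, not real data.

```python
# Minimal sketch: correlating an IT performance metric with a business
# outcome, assuming both have been exported as aligned time series.
# The column names and figures below are illustrative, not real data.
import pandas as pd

metrics = pd.DataFrame({
    # median back-end response time per hour, in milliseconds
    "response_time_ms": [210, 230, 250, 400, 650, 620, 300, 220],
    # checkout conversion rate per hour, as a percentage
    "conversion_pct": [3.1, 3.0, 2.9, 2.2, 1.4, 1.5, 2.7, 3.0],
})

# A strong negative correlation suggests latency is hurting conversions,
# turning a raw infrastructure metric into a business-level insight.
corr = metrics["response_time_ms"].corr(metrics["conversion_pct"])
print(f"latency vs conversion correlation: {corr:.2f}")
```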

AI and automation

Staying with the need to monitor infrastructure, one way to ensure that insights are as accurate as possible is to implement artificial intelligence (AI) and automation capabilities.

“As the scale and complexity of today’s enterprise cloud environments continue to rise, organisations need to find new ways of monitoring and managing digital service performance,” said Michael Allen, vice-president, global partners at Dynatrace. “Moving beyond visibility to observability is a crucial step for organisations as they work to overcome these challenges.

“When combined with AI and automation, observability approaches provide the groundwork that enables organisations to effectively monitor and manage web-scale IT environments that they are increasingly reliant on. This enables IT teams to understand what ‘normal’ behaviour looks like and instantly identify problems as they arise.

“AI further enables observability to deliver precise answers that allow IT teams to instantly respond and resolve problems before user experience is impacted.”
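
The idea of learning what ‘normal’ looks like can be illustrated with a simple statistical baseline. The sketch below flags samples that stray too far from a rolling mean; it is a toy illustration of anomaly detection, not a description of any vendor’s AI engine.

```python
# Toy sketch of baselining "normal" behaviour: flag any sample that
# strays more than three standard deviations from a rolling baseline
# computed over the preceding window of observations.
import pandas as pd

latencies = pd.Series([102, 98, 105, 101, 99, 103, 97, 480, 100, 104])  # ms

window = 5
baseline = latencies.rolling(window).mean().shift(1)  # mean of prior points
spread = latencies.rolling(window).std().shift(1)     # std dev of prior points

anomalies = latencies[(latencies - baseline).abs() > 3 * spread]
print(anomalies)  # flags the 480 ms spike once the window has warmed up
```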


In-memory capabilities and testing

Two other components to consider when it comes to performance monitoring for web-scale IT are an in-memory database and a strong testing protocol.

Kieran Maloney, IT services director at Charles Taylor, explained: “Utilising an in-memory database and caching are ways to improve the performance of web-scale applications, particularly for things like real-time or near-time analytics.

“The majority of cloud infrastructure service providers now offer PaaS services that include in-memory capabilities, which increase the speed at which data can be searched, accessed, aggregated and analysed – examples include Azure SQL, Amazon ElastiCache for Redis, Google Memorystore, and Oracle Cloud DB.
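
As a concrete example of the pattern Maloney describes, the sketch below implements cache-aside reads against Redis using the redis-py client. The hostname and the expensive_query() helper are hypothetical placeholders; the same pattern applies to any managed Redis endpoint, and the TTL guards against the stale-data problem noted earlier.

```python
# Cache-aside sketch with the redis-py client. The hostname and the
# expensive_query() helper are hypothetical placeholders; any managed
# Redis endpoint (ElastiCache, Memorystore, etc.) works the same way.
import json
import redis

r = redis.Redis(host="cache.example.internal", port=6379)

def expensive_query(user_id: int) -> dict:
    # Placeholder for a slow lookup against the operational database.
    return {"user_id": user_id, "plan": "enterprise"}

def get_user(user_id: int, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: served from memory
    value = expensive_query(user_id)  # cache miss: query the database
    r.setex(key, ttl_seconds, json.dumps(value))  # TTL limits staleness
    return value
```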

“The other consideration for solving performance management is a testing approach for identifying what the actual performance is, and also determining whether there are specific aspects of the application that are not performing, or could be ‘tuned’ to improve performance or the cost of operation.

“There are a number of Application Performance Testing tools available that provide business and IT teams with real-time usage, performance and capacity analysis – allowing them to proactively respond and make interventions to improve performance, as required.”
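
Before reaching for a full application performance testing suite, a basic latency check can be scripted in a few lines. The sketch below uses only the Python standard library; the target URL is a placeholder, and dedicated tools add concurrency, ramp-up and far richer analysis.

```python
# Basic latency check using only the standard library: time repeated
# requests and report percentiles. The URL is a placeholder; dedicated
# tools add concurrency, ramp-up and much richer analysis.
import statistics
import time
import urllib.request

URL = "https://example.com/health"  # placeholder endpoint
samples = []

for _ in range(50):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
print(f"p50={cuts[49]:.1f} ms  p95={cuts[94]:.1f} ms over {len(samples)} requests")
```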

Caching to save costs

Finally, Aron Brand, CTO of CTERA, expanded on the benefits of caching in this area, describing it as among the most economical decisions an organisation can make.

“Cloud-based data centres do not provide the same performance as your on-premises data centre,” explained Brand. “By moving to a hybrid model that places some of your resources in the public cloud, you’re adding latency to the mix. Many applications that were designed to work over a LAN will operate poorly if you relocate them to a cloud data centre that’s accessible only by WAN.


“Storage services are particularly problematic in this regard. When you move storage to the public cloud and leave some of the storage clients on-prem, you may discover that users complain about sluggish performance despite the presence of high bandwidth network links. Storage workloads tend to suffer especially when dealing with small files and metadata-intensive jobs.

“The most cost-effective way to avoid storage latency issues with your legacy applications is through caching. For example, if I keep a local cache of the file system in a remote location, then accessing the files in that cache can be achieved with zero latency. This allows workloads to operate efficiently with cloud storage, without requiring modification.”
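
Brand’s example can be sketched as a simple read-through file cache in front of a remote store. The fetch_from_cloud() helper and cache directory below are hypothetical stand-ins for whatever object-storage client is in use.

```python
# Read-through file cache: serve reads from a local directory and only
# go to the remote (WAN) store on a miss. fetch_from_cloud() and the
# cache path are hypothetical stand-ins for a real object-storage client.
from pathlib import Path

CACHE_DIR = Path("/var/cache/cloudfiles")  # illustrative local cache path

def fetch_from_cloud(name: str) -> bytes:
    # Placeholder for a high-latency WAN fetch (e.g. an object-store GET).
    return b"...remote file contents..."

def read_file(name: str) -> bytes:
    local = CACHE_DIR / name
    if local.exists():
        return local.read_bytes()  # LAN-speed hit, no WAN round trip
    data = fetch_from_cloud(name)  # slow path: fetch over the WAN
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    local.write_bytes(data)  # populate the cache for next time
    return data
```

As with any cache, consistency is the catch: ‘zero latency’ holds only for data that is already local and still valid, so a production cache also needs invalidation handling.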


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.