The need to address today’s zeitgeist and predict tomorrow’s business landscape has led organisations to maintain a variety of approaches. Balancing the many and varied workloads that exist today has become more important than ever.
Things have changed a lot in 50 years. The early days of mainstream business computing were characterised by predictable workloads and batch processes. Today, workloads are anything but predictable, thanks to two overarching trends: the sheer scale of applications, infrastructure and end-users; and the underlying architecture of applications and infrastructure that must adapt to changing business and customer needs.
Think of the way companies use IT today. They are modern software factories that increasingly consume and develop software to drive growth and innovation. Every enterprise is a software company.
Today, the number of software applications we use on a daily basis is growing exponentially, and the importance and ubiquity of internet-connected devices mean that those applications must always be accessible and performant. Cloud has fundamentally changed the way software is distributed, as enterprises seek performance and scale for end-users across the globe.
Evolutions in application and infrastructure architecture require us to rethink how we use application networking services, like load balancers. Think of it as analogous to what happened to our road networks. A boom in car ownership in the 1950s led to rapid construction of public highways and, later, motorways. But nobody predicted that car travel would become ubiquitous, so those roads became clogged and the highway agencies have had to become smarter about how they keep vehicles moving.
So, they tax usage, encourage alternative forms of transport, invoke schemes such as congestion charges or tweak existing ways to manipulate the flow such as traffic light sequencing, diversions and re-routing. If you think traffic is bad enough today, imagine what it would be like without those controls.
It’s the same for data traffic. The old appliances are no longer fit for purpose because they were designed for a predictable world, where there was far less data, it travelled to fewer places and there was plenty of infrastructure headroom to handle the capacity.
Today, managing the data centre, networks and third-party services is all about being agile. It necessitates a software-defined world in IT, with modern application services delivered by load balancers that have real-time, forensic insight into the way applications behave.
As well as being application-aware, load balancers must be environment-aware, whether your IT runs on a virtualised datacentre, a private cloud (such as OpenStack), a public cloud or a combination of the above. Load balancing systems have to be cognisant of the way traditional and modern applications, such as those based on microservices, are dispersed.
Load balancers must leverage APIs to facilitate communication between applications, infrastructure and end-users. There also has to be an ability to troubleshoot and replay challenges and traffic events so that administrators are able to identify bottlenecks and improve network efficiency.
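The API-driven control described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of the kind of task a load balancer’s API makes scriptable: recomputing backend pool weights from observed health and latency, and draining servers that fail a health check. The server names, fields and weighting rule are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    avg_latency_ms: float   # recent average response time observed by the balancer
    healthy: bool           # result of the most recent health check

def recompute_weights(backends, max_weight=100):
    """Give faster, healthy backends proportionally more traffic;
    drain (weight 0) any backend that failed its health check."""
    weights = {}
    fastest = min((b.avg_latency_ms for b in backends if b.healthy), default=None)
    for b in backends:
        if not b.healthy or fastest is None:
            weights[b.name] = 0  # drain: stop sending new connections
        else:
            # Weight inversely proportional to latency, scaled so the
            # fastest healthy backend gets max_weight.
            weights[b.name] = max(1, round(max_weight * fastest / b.avg_latency_ms))
    return weights

# Illustrative pool: app-3 has failed its health check and is drained.
pool = [
    Backend("app-1", avg_latency_ms=20.0, healthy=True),
    Backend("app-2", avg_latency_ms=40.0, healthy=True),
    Backend("app-3", avg_latency_ms=15.0, healthy=False),
]
print(recompute_weights(pool))  # {'app-1': 100, 'app-2': 50, 'app-3': 0}
```

In a real deployment these computed weights would be pushed back to the balancer over its management API, and the same telemetry could be recorded for the replay and troubleshooting the article describes.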
Using hardware-based load balancers where each appliance is managed separately is the equivalent of controlling a fleet of lorries using walkie-talkies and ignoring tachographs, logistics software, GPS, RFID and cellular communications. Or, to go back to my earlier analogy, it’s like the highways people trying to unclog the roads with a piece of chalk and some traffic cones.
Times have moved on, and the traffic we see today has to be managed in very different ways or performance, availability and reliability all suffer. If your network is struggling under the burden of the new workloads, you owe it to yourself to look at the next generation of load balancers.
Sourced by Dirk Marichal, VP EMEA & India at Avi Networks