The future of enterprise applications

Demand for round-the-clock services is forcing many organisations to implement new enterprise applications to deliver high-performance, scalable and always-on digital services.

Today’s consumers expect more from the companies they buy from. To meet the demand for round-the-clock service, many organisations are implementing new applications designed to deliver high-performance, scalable and always-on digital experiences.

This has caused a shift away from older application architectures towards more modern, cloud-based approaches. In the past, the traditional monolithic approach reigned supreme; today, enterprise applications require more flexibility in the way individual components are isolated, developed and tested. Typically running in the cloud, modern architectures utilise new technology such as microservices to create a much more agile development platform.

What are microservices, and why should CIOs care?

The key principle behind microservices is that business applications become much easier to build when broken down into smaller, modular units. These units can be separately maintained and continuously worked on, with the main application made from the sum of its parts. If any area of the application requires more resources to cope with demand, more container instances can be added immediately. If elements of the service have to be changed or scaled back, this can also be done quickly.
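The idea can be illustrated with a minimal sketch. The service names and the `ServicePool` class below are hypothetical, standing in for what an orchestrator such as Kubernetes would manage: each service keeps its own pool of container instances, and scaling one service leaves the others untouched.

```python
class ServicePool:
    """One microservice with an independently scalable set of instances."""

    def __init__(self, name, instances=1):
        self.name = name
        self.instances = instances

    def scale(self, delta):
        # Add or remove container instances for this service only;
        # never drop below one running instance.
        self.instances = max(1, self.instances + delta)


# The application is the sum of its parts: a set of small services.
app = {s.name: s for s in (ServicePool("checkout"),
                           ServicePool("search"),
                           ServicePool("auth"))}

# Demand spikes on checkout: scale just that service.
app["checkout"].scale(+3)
```

In a monolith, handling the same spike would mean replicating the entire application; here only the hot path gains capacity.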


Microservices make development much more efficient, allowing smaller development teams to adapt, deploy and scale their respective services independently of each other and without downtime. For CIOs tasked with meeting business goals, a container-based approach can offer an ideal solution.

However, it also presents challenges around cost and operational control. Furthermore, traditional tools for IT monitoring and data analysis simply aren’t designed for the distributed, short-lived components of a microservices architecture.

Data, data everywhere

While the ability to continuously test and deliver applications has greatly improved the quality of software, clean code doesn’t mean software always behaves as expected. When things go wrong in a microservices architecture, tracking down the issue quickly is key, but with every container continuously generating its own machine data logs, finding the root cause of an issue can be incredibly difficult.
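One common way to make that root-cause hunt tractable is to correlate log lines from different containers by a shared trace or request ID. The sketch below uses hypothetical log lines and field names; a real deployment would pull these from a log aggregator rather than an in-memory list.

```python
import re
from collections import defaultdict

# Hypothetical machine-data log lines, one per container event.
logs = [
    "container=frontend level=INFO  trace=abc123 msg=request_received",
    "container=payments level=ERROR trace=abc123 msg=timeout_calling_ledger",
    "container=ledger   level=ERROR trace=abc123 msg=db_connection_refused",
]

def parse(line):
    # Extract key=value pairs from a structured log line.
    return dict(re.findall(r"(\w+)=(\S+)", line))

# Group entries by trace ID so one failing request can be followed
# across every container it touched.
by_trace = defaultdict(list)
for line in logs:
    fields = parse(line)
    by_trace[fields["trace"]].append(fields)

# The error trail for a single request, across containers.
errors = [f for f in by_trace["abc123"] if f["level"] == "ERROR"]
```

Following the trace, the `ledger` container’s failure sits deepest in the call chain, pointing to the likely root cause behind the `payments` timeout.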

Equally, high-value information can easily be buried beneath petabytes of log data. Without real-time visibility or a meaningful way to extract it, these insights are easily missed.


In the past, traditional application monitoring and analytics tools would be used – but within modern cloud architectures, their limitations are all too apparent. In a large enterprise environment, trillions of items of machine data may need to be mined in order to isolate the specific code or integration point that’s causing issues. This approach can take a prohibitive amount of time using traditional methods – if it can be managed at all. Furthermore, data is often siloed between traditional monitoring tools, meaning there’s no single source of truth from which developers can work.

As a result, more and more development time is spent troubleshooting, with performance and availability problems only getting worse the longer root issues remain undiscovered.

Taking a cloud-native approach

To tackle these challenges, analytics has to be built specifically for modern cloud architecture environments. This involves gathering real-time, continuous intelligence data across an organisation’s entire infrastructure and application stack, then making that information available to different teams across the business.

Unlike traditional monitoring tools, cloud-native solutions can ingest machine data from applications, systems, networks and tools across the entire continuous delivery pipeline, not just server data. This means machine data can be monitored everywhere, independent of source, location or format. This helps eliminate data silos within the environment and ensures a single source of truth for everyone.
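Ingesting data "independent of source, location or format" in practice means normalising heterogeneous events onto one common schema. The sketch below is illustrative, with made-up event formats and field names; real pipelines would handle far more formats and edge cases.

```python
import json

# Hypothetical raw events from three different sources and formats:
# a JSON application log, a syslog-style line, and a CSV metric row.
raw_events = [
    ("app", '{"ts": "2024-01-01T00:00:01Z", "level": "warn", "msg": "slow query"}'),
    ("syslog", "2024-01-01T00:00:02Z host1 sshd: accepted connection"),
    ("metrics", "2024-01-01T00:00:03Z,cpu,93.5"),
]

def normalise(source, payload):
    """Map each source format onto one schema: timestamp, source, body."""
    if source == "app":
        d = json.loads(payload)
        return {"ts": d["ts"], "source": source, "body": d["msg"]}
    if source == "syslog":
        ts, _host, body = payload.split(" ", 2)
        return {"ts": ts, "source": source, "body": body}
    if source == "metrics":
        ts, name, value = payload.split(",")
        return {"ts": ts, "source": source, "body": f"{name}={value}"}

# Every event, whatever its origin, lands in the same searchable shape.
events = [normalise(s, p) for s, p in raw_events]
```

Once everything shares one schema, a single query can span application logs, system events and metrics, which is what removes the silos between tools.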


Each team should have its own use cases for this data:

  • Developers: Real-time information on app performance can show faults and failures as they happen, as well as potential bugs. Greater insight means issues can be tackled quickly, streamlining the release process and improving overall software quality.
  • Wider IT teams: Being able to see where poor performance is taking place is essential to planning IT architecture over time, particularly when it comes to making better choices over how to design services.
  • CIOs and those in IT leadership roles: Real-time data can provide much more oversight into how much is being spent on cloud services at any given time. For example, after the Meltdown and Spectre security issues led to cloud service providers patching their systems, there was a noticeable decrease in performance.

With many cloud services’ charging models based on consumption, this kind of slowdown can lead to significantly higher bills for the same volume of work, impacting budgets and profitability over time. Without proper insight into the reasons behind this, CIOs risk being unable to manage services effectively.

The changing role of application data within enterprises

Applications within enterprises today are assembled and integrated from a mishmash of internal software, external applications and open source components developed by third parties. Tracking all these components as they are consumed and changed is a very different job compared to traditional IT architecture design and monitoring.

Making use of all the data created by applications involves capturing it, interrogating it and then making it fit for purpose across the team, regardless of where that data comes from. By gathering this data in real-time, developers can use it to streamline their continuous delivery processes and take advantage of the elastic scalability that cloud services provide. Using this data more efficiently will help demonstrate where costs can be saved, where improvements can be made and how large-scale digital transformation projects relate to individual IT architecture decisions.

In this way, companies can push ahead with new developments that help meet customer requirements. By understanding this new cloud-native application world, everyone benefits.


Sourced by Christian Beedgen, Chief Technology Officer at Sumo Logic.
