Disrupt or die is the new battle cry for software innovation

Where there’s innovation, there’s disruption. No matter which industry you are working in, there’s always a disruptor. Netflix in media, Amazon in retail, Skype and WhatsApp in telephony – all offering customers something faster, cheaper or simply better than what went before.

We have become used to technology being at the forefront of disruption, but what we are looking at today is software-defined disruption. Technical innovation is nothing new; it has been disrupting industries for decades.

But disruption can be disturbing for some and we see companies in multiple vertical sectors looking over their shoulders to see which startups are going to change or influence their markets.


This is the landscape in which Canonical and its customers operate, and one that is shaking up the thinking of savvy enterprise companies. When Canonical gets a call, it is typically to solve one of two problems as the customer expresses them. Either the company is looking to adopt an entirely new workload – AI, machine learning, Kubernetes or containers, for example. Or, and this is increasingly the case, it has identified a competitor that poses an existential threat to its business, and the conversation is about how best to head off that threat with new capability.

In practice, these become the same meeting. Amazon buying Whole Foods puts every established supermarket chain on watch for the inevitable disruption it will bring – and the savvy ones are leaning in to beat that trend rather than be left behind by it.

Software-defined businesses of the future – be brave and take a leap of faith

One weapon available to everyone, incumbent or disruptor, is open-source software. At its core, open source offers a great deal of capability, both in its libraries of data and code and in the millions of individuals working on those libraries.

But open source also offers a chance to share and crowd-source innovation. By sharing your efforts to solve big, hard problems, you invite the world to help improve the answers in a way that no single company could manage alone.

It is now normal to find companies ranging from Walmart and Carrefour to eBay loudly open-sourcing their work and inviting anyone interested in retail innovation to contribute to, and leverage, this shared body of work. Sharing is always a leap of faith, but one well worth taking in this era of disruptive innovation.


And while open source has typically referred to software, the concept is just as applicable to operations – that is, how software is run in a data centre. For example, Deutsche Telekom and Bell Canada, two large telcos on different continents, share a similar approach to operations for their next-generation network infrastructure. They have decided that sharing the same underlying models of their IT infrastructure gives everyone a competitive advantage.

If one of them makes a marginal improvement to a piece of their own stack, everyone else using that stack benefits. Differentiation can now be focused on services delivered to end-users rather than the underlying servers they run on. We are going to start seeing a world where infrastructure and operational knowledge become a commodity, and where crowdsourcing of IT becomes something we do as a matter of course.

Big software – dealing with infrastructure complexity

One reason that this crowd-sourcing of IT is inevitable is that we are now dealing with a different class of software.

Most legacy infrastructure, which takes up the bulk of the budget and floor space in enterprise data centres, consists of monolithic, slow-changing applications – a database server, say – running on a relatively small number of machines.

But take any cutting-edge software capability today – machine learning, big data, or indeed an OpenStack architecture – and it must be integrated, configured and tuned in a way that is specific to each group of users.

The resulting solution is typically assembled from multiple, disparate sources and then deployed across elastic infrastructure that could scale to thousands of servers. Change – patches, versions, configuration updates – is assumed to be part of the daily beat, not a special event. Operations at this scale and speed is a different, and far more complex, problem.


Canonical coined the term “big software” to represent this class of at-scale software which organisations now rely on to stay ahead. Any innovating organisation must expect to ingest and rely on growing amounts of big software.

There is simply no way any organisation can ramp up and maintain operational expertise of this new type rapidly enough to keep pace with the business imperative. But it can get there by open-sourcing its IT operations knowledge.

Doing this involves encapsulating operational expertise in intelligent open-source ‘models’ that are iterated on by many organisations at once. Those ‘models’ become the automation backbone for big software, delivering speed and economics that legacy IT approaches can only dream of.

Automation is the key to setting us free

This shift to ‘model-driven’ automation of IT is inevitable, because the economics say so.

Companies routinely pour 80% of their IT budget into simply operating existing infrastructure: running required installations and upgrades, and basically keeping the lights on. That leaves just 20% of the budget for innovation, and this is the shortfall that disruptors are able to exploit. If your business is to grow and remain competitive in this software-defined age, that dial needs to move the other way, and quite substantially.


It starts with getting past the mindset that IT operations have to be done by hand. That approach was adequate for the decade gone by, but in a world of at-scale infrastructure and agile, fast-changing, composable workloads ingested from a variety of sources, every IT organisation has to think about running its data centres the way a Google or an Amazon runs theirs.

This means automation as the default – moving beyond simple scripting of batch processes to truly intelligent, model-driven operations that allow IT staff to offload the routine entirely and spend their time on competitive differentiation.
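To make the idea concrete, model-driven operations can be thought of as declaring a desired state and letting an automation engine reconcile reality against it, rather than scripting each step by hand. The sketch below is purely illustrative – all names are hypothetical and it is not Canonical's implementation:

```python
# Illustrative sketch of model-driven operations: an operator declares a
# desired model (how many units of each service should run), and a
# reconciliation engine works out the actions needed to bring the actual
# deployment in line with it. All names are hypothetical.

def reconcile(model: dict, actual: dict) -> list:
    """Compare desired unit counts per service with what is actually
    running, and return the scaling actions an engine would perform."""
    actions = []
    for service, desired in model.items():
        running = actual.get(service, 0)
        if running < desired:
            actions.append(("add-unit", service, desired - running))
        elif running > desired:
            actions.append(("remove-unit", service, running - desired))
    return actions

# Desired state: the "model" the operator declares.
model = {"database": 3, "api": 10}
# Actual state: what is currently running in the data centre.
actual = {"database": 3, "api": 7}

print(reconcile(model, actual))  # [('add-unit', 'api', 3)]
```

The point of the pattern is that the operator's knowledge lives in the model, not in one-off scripts, so the same reconciliation logic applies unchanged whether the gap is three machines or three thousand.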

Canonical works with customers across industries to bring the power of model-driven automation to their data centres. For example, it has a pharmaceutical customer for whom it built a research cloud.

In under 90 days, they went from concept to being able to deploy, configure, stop and start apps across thousands of machines. Where once their IT team would have done manual, one-off work across tens of machines, model-driven operations now allow that team to crunch big data sets, at will, on cloud infrastructure that just works.


The pay-off here is in the pace of discovery – shorter time-to-market with new drugs, new chemicals or molecular entities – with the potential return measured in billions per patent. True model-driven automation pays for itself and then some.

Society is in an extraordinary period of creative disruption. Software innovation isn’t about making job cuts to slash the bottom line. It’s about letting go and making the software do the work, so companies can enable people to be smarter, move faster and innovate. Disrupt or die is the new battle cry for both challenger and incumbent, in this software-defined era.


Sourced by Anand Krishnan, EVP, Cloud and General Manager, Canonical


Nick Ismail

