It’s time to double down on cloud adoption

It’s safe to say our world has changed over the last year. How we shop, work, and socialise has been reimagined in a multitude of ways so that we can carry on through national and regional lockdowns. Many of these changes will disappear once COVID-19 is a memory. But others are here to stay.

The acceleration of technology is one change that’ll be with us forever. Businesses across the world have increased their technology investments both to keep afloat during the pandemic and to keep pace in an increasingly competitive market.

Because of the pandemic, cloud adoption is higher than ever before. In fact, Gartner reports a 23.1% YoY increase in worldwide end-user spending on public cloud services, which will see global cloud spend rise to $332.3 billion.

There are a number of reasons why cloud has become essential to businesses, one being that it enables the innovations of the future: AI, machine learning and blockchain.

Well, I say future. In truth, many of these technologies have already moved from science fiction and blockbuster movies to everyday reality. There’s no better proof of this than the new EU AI legislation.

New regulations

The pace of AI adoption and its impact are astonishing. There is no doubt it creates opportunities for societal and commercial progress, but it also raises ethical questions. That’s why, on April 21st, the European Commission presented proposals for new AI regulations in EU member states.

The regulations are the first of their kind – both on an EU and global level. Their aim is to avoid potential ethical abuses and build trust in AI; ultimately guaranteeing “the security and fundamental rights of individuals and businesses, while strengthening adoption, investment and innovation.” Or, in other words, they attempt to find the right balance between technological growth and regulation – allowing AI to flourish throughout the EU without compromising on ethics.

Of course, striking this balance is always going to be tough – and for some, such as many businesses born in Silicon Valley, the rules may seem too restrictive. The regulations require AI systems in EU member states to be tested and evaluated before being placed on the market, and to obtain certification according to their established level of risk.

There are four levels of risk, and these really are at the heart of the regulations: minimal risk (video games, spam filters); limited risk (things like chatbots); high risk (energy, transport, education, medicine, public services and other critical infrastructure); and unacceptable risk (applications such as manipulation that deprives people of free will, social scoring, or generalised remote biometric identification).

To avoid algorithmic biases as much as possible, all AI-based systems must meet very strict criteria of security, robustness, transparency, efficiency, and non-discrimination. Those that do not comply will face heavy fines.

A solid foundation

I’m not here to debate the effectiveness of these regulations. But the very fact they are being put in place is in itself significant.

According to a recent study by Eurostat, only 7% of European companies with at least 10 employees use the most common forms of artificial intelligence. Despite the technology still being in its infancy, the enthusiasm it has generated is already leading us to think carefully about its foundations. The emerging legislative framework is one product of this; another is the adoption of solid storage infrastructure in preparation for the technology – which brings us back to cloud.

Whether they rely on machine learning to identify trends at scale or on deep learning to create actionable models, AI systems have one thing in common: they have to manage and consume a lot of data.

To unlock the full potential of AI, the solutions developed by AI players need simplified execution and a clear, unobstructed view of the journey their data takes. In short, data needs to be properly managed and governed, and it must be possible to track and trace it. For example, if a model is trained on customer data and any of those customers exercise their ‘right to be forgotten’, the model will need to be retrained with their data removed. Any business that leaves data siloed, inaccessible, or poorly stored simply isn’t going to be able to leverage AI as effectively as its competitors.
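To make that retraining scenario concrete, here is a minimal sketch of what per-record data lineage can look like. It is an illustration only, not NetApp’s or any vendor’s implementation: the names (TrainingRecord, erase_and_retrain, train_model) are assumptions, and the point is simply that each training record carries the ID of the customer it came from, so a ‘right to be forgotten’ request can be honoured by filtering the dataset and retraining.

# Minimal, illustrative sketch: per-record lineage for AI training data,
# so a 'right to be forgotten' request can trigger retraining without that customer's data.
# All names here (TrainingRecord, erase_and_retrain, train_model) are hypothetical.

from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class TrainingRecord:
    customer_id: str       # lineage: which customer this record came from
    features: List[float]  # model inputs
    label: float           # model target


def erase_and_retrain(
    dataset: List[TrainingRecord],
    forgotten_ids: Set[str],
    train_model: Callable[[List[TrainingRecord]], object],
):
    """Drop every record tied to a forgotten customer, then retrain the model."""
    retained = [r for r in dataset if r.customer_id not in forgotten_ids]
    model = train_model(retained)  # retrain only on data the business may still use
    return model, len(dataset) - len(retained)  # new model + number of records erased

The detail that matters is the customer_id field: without that kind of provenance attached to every record, a business has no reliable way of knowing which data a deletion request touches, wherever that data happens to be stored.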

This is where cloud truly excels. In a world of remote working, it allows data to be accessible anywhere, anytime. Organisational data can be at the fingertips of anyone who requires it – while remaining properly secured at all times. We see organisations moving to hybrid models, with data managed both on-premises and in the cloud across multiple planes.

In fact, new research we conducted found that 87% of employees believe storing data in the cloud is simpler than other storage methods. Quite simply, when it comes to cloud, convenience is king. We’ve also found that cloud can reduce digital wastage by up to 60% – freeing up funds to reinvest in cloud services that can automate manual processes and make the most of data.

Whichever way we look at it, cloud is an essential building block on the journey to AI. As we transition out of COVID-19, we’ll see adoption accelerate further as businesses improve their data storage and prepare to embrace the next generation of technology.

Written by Jack Watts, EMEA leader, AI at NetApp
