Back in 2009, surrounded by academics at one of the most prestigious universities in the world, I was in the privileged position of helping to build the 20th fastest supercomputer in the world. Joining together 500 state-of-the-art file servers, low-latency data networks and x86-based CPUs, we delivered a High-Performance Computing (HPC) system capable of crunching data to answer complex, albeit specific, questions. Not only that, we did so at a cost appreciably lower than the system ranked 19th on the same list.
Eight years later, at that same facility, the project leader calculated that to achieve the same results, which had previously landed the University a top-twenty place, they would no longer require five hundred servers. Instead, the same performance could be delivered in a physical space one hundredth the size of the original and at one hundredth of the price. More importantly, as demands for greater computational power grew, they could deliver greater performance at steadily decreasing cost.
The high-performance computing of 2020 is quicker, more powerful, smaller and cheaper than ever before. The key difference is that these supercomputers are no longer the preserve of academic institutions and enormously wealthy private enterprises. At last, HPC can be readily applied to run powerful artificial intelligence (AI) workloads in widespread business environments.
Recent surveys, such as this one by McKinsey, have highlighted that 95% of IT executives believe AI will transform their industry. But such a statistic makes me wonder how the other 5% are theorising the impact of AI. Failure to understand how AI will transform business is to fundamentally misunderstand what AI is. Any and every organisation operating at scale can benefit from automation and near real-time decision making. The point is that AI, and the technical advances that accompany it, make the technology not only plausible for a massive number of organisations; it becomes mandatory.
What businesses do need to be mindful of when looking to deploy AI is that it is not a magic bullet for answering business-critical questions. What's more, as several recent unfortunate examples have shown, if your AI is not trained with a wide set of data, it can end up amplifying a wrong supposition, rendering the end product useless. For example, if you're only training your facial recognition programme with pictures of white men, you're going to get some biased and potentially discriminatory outcomes. As with all forms of modelling, the findings you get out are only as good as the data you put in. There are no shortcuts around this simple fact, and any organisation trying to find one will quickly come unstuck.
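The point about narrow training data can be sketched in a few lines of code. This is a deliberately toy illustration with made-up numbers, not a real facial recognition system: a trivial "model" fitted only on data from one group performs perfectly on that group and fails entirely on a group it never saw.

```python
# Toy illustration of training-data bias (all values are hypothetical).
# A single feature stands in for whatever the model actually measures.

group_a_train = [0.80, 0.90, 0.85, 0.95]  # the only group in the training set
group_b = [0.20, 0.30, 0.25]              # a group absent from training

# "Training": derive a decision threshold from the only data available.
threshold = min(group_a_train) - 0.05

def detect(x):
    """Classify anything above the learned threshold as a match."""
    return x >= threshold

# Perfect accuracy on the population the model was trained on...
accuracy_a = sum(detect(x) for x in group_a_train) / len(group_a_train)

# ...and zero accuracy on the population it never saw.
accuracy_b = sum(detect(x) for x in group_b) / len(group_b)

print(accuracy_a, accuracy_b)  # 1.0 0.0
```

The failure here is not the algorithm but the data fed to it, which is exactly the trap described above: a model can look flawless on its own training distribution while being useless, or worse, on everyone it excludes.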
The key take-away from these missteps is not that AI has failed or is incapable of carrying out a task; it's that humans have applied, and often still do apply, the technology poorly. Today, AI is transforming almost every industry and vertical: from pathology, where it's used to detect cancer, to answering customer queries in contact centres, from water-controlled indoor farming to driving autonomous vehicles in warehouses and on public roads.
AI offers tangible benefits (such as increased productivity, reduced costs and improved competitive advantage) if, and only if, the business applying it knows how to articulate the problem, understands the solution – or at least what the solution should look like – and can successfully evaluate the implications it will have within the organisation.
When I think back 10 years to how excited and optimistic I felt standing by the 20th fastest supercomputer in the world, that feeling pales in the face of how moved I am by the promise the next generation of HPC holds for organisations in every walk of life. Now, unlike ever before, businesses have relatively cheap and easy access to the most powerful computation the world has seen – what they do with it is up to them.