From free-thinking robots to flying cars, the world has been future-gazing about the impact of artificial intelligence (AI) for decades. Now, as society approaches 2020, AI has changed the way people live, work and play, though not necessarily in the ways people first imagined.
People’s mobile phones act as ATMs; digital bracelets intelligently track their health and fitness levels; and their refrigerators can now even tell when they are running low on milk.
Across the business community, automation and robotics are becoming commonplace, taking on mundane, data-heavy tasks traditionally carried out by humans and completing them in milliseconds.
Over five million jobs are expected to be replaced by some form of automated or machine-learning technology over the next three years. The impact is already visible across the vast majority of industries, but the most significant shift is likely to come in labour- or data-heavy roles in sectors such as healthcare, manufacturing, financial services and transportation.
Even the skills shortage the technology industry has grappled with for years will ease as AI alleviates the strain. The IT function will then evolve to focus on more creative work that differentiates the enterprise.
Understandably, this still raises concerns about the job market around the world, but much like in preceding industrial revolutions, roles will evolve rather than disappear.
Pushing the boundaries of artificial intelligence
Ground-breaking innovations in AI are already having a positive impact on both bottom lines and productivity, and machine-led decision-making is now being incrementally introduced across industries, from financial services to the armed forces.
High-street banks and wealth management firms are investing in algorithms and automation to augment their human advisers. Robo-advisers, or robo-traders as they are currently called, can determine the best funds for customers to invest in and recommend banking products such as mortgages, all based on a customer’s personal circumstances.
At this stage, humans still provide the formal advice, as regulators – especially in the UK – are not comfortable letting go of the reins just yet.
Nonetheless, this is expected to change. HSBC, Goldman Sachs and Barclays are among the early investors in this space, but the real innovation is coming out of start-ups such as online investment platform Nutmeg and personal finance bots like Cleo, which are accelerating the use of machine-led decision-making.
That said, for AI to continue to thrive in financial services, the advances in algorithms and data need to be as open – and standards as transparent and uniform – as possible. If companies cannot see what kind of data is driving these new models of working, they cannot understand what outcomes those models will drive.
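To make the transparency point concrete, here is a deliberately minimal sketch of what a fully inspectable, rule-based recommendation step might look like. The product names, thresholds and customer fields are invented for illustration; real robo-advisers use far richer models, which is precisely why openness about their inputs matters.

```python
# Illustrative only: a toy, fully transparent robo-adviser rule set.
# Every product name, threshold and field below is hypothetical.

def recommend(customer: dict) -> str:
    """Return a hypothetical product suggestion from a customer profile."""
    savings = customer["savings"]          # liquid savings in GBP
    horizon = customer["horizon_years"]    # how long the money can stay invested
    risk = customer["risk_tolerance"]      # "low", "medium" or "high"

    if savings < 3_000:
        return "cash savings account"      # build an emergency fund first
    if horizon < 3:
        return "short-term bond fund"      # short horizons favour low volatility
    if risk == "high":
        return "global equity fund"
    return "balanced multi-asset fund"

print(recommend({"savings": 10_000, "horizon_years": 10, "risk_tolerance": "high"}))
# prints: global equity fund
```

Because every rule is visible, a customer or regulator can trace exactly why a product was suggested, which is the standard of openness the paragraph above argues real machine-led models should also meet.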
In the realm of international affairs, NATO now has plans to harness the power of data and use AI in its decision-making process.
For example, the technology can be used to strengthen NATO’s anti-access/area-denial (A2/AD) systems, which are designed to deter or prevent enemy forces from entering restricted sea, land or air space.
NATO believes it will reach a point where AI can make strategic decisions on vital NATO issues. That would take AI beyond driverless cars and into international diplomacy, where an automated decision could potentially trigger a global conflict.
If these two examples were enhanced with cognitive computing, we would see AI evolve to the point where it has enough brainpower to learn from each decision and perhaps even understand its impact.
Cognitive computing marries AI and machine learning and “learns” from data without human intervention. It acts as an autonomous entity that senses and perceives its environment, learns and adapts, and takes rational actions to reach its goal.
Google’s DeepMind, an AI firm founded in the UK, has created technology that taught itself to play Atari video games, and its AlphaGo system has already beaten the world’s best players at the board game Go.
In essence, it has learned from examples and applied them to solve real-world problems, akin to the thought process of a human being. As it learns, it improves over time, making the possibilities for its use endless. Apply that to our healthcare system, and a visit to our GP could be a rather different experience!
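The idea of improving through experience can be sketched with a tiny reinforcement-learning example. DeepMind’s Atari work used deep neural networks; the tabular Q-learning below is a deliberately simplified stand-in, with an invented five-cell “corridor” environment, just to show an agent learning a good policy purely from trial and error.

```python
import random

# Toy stand-in for learning from experience: tabular Q-learning on a
# five-cell corridor. Reaching the rightmost cell (state 4) earns a reward.
N_STATES = 5
ACTIONS = [-1, +1]  # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy steps right (+1) in every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # expected: [1, 1, 1, 1]
```

Nothing in the code tells the agent which way to walk; the preference for stepping right emerges entirely from experienced rewards, which is the “learning from each decision” the paragraph above describes, in miniature.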
Regulating the unknown
The efficiencies AI can bring to the world are undeniable, but the technology does raise a number of concerns around standards, regulation and the propensity for this technology to go beyond its remit.
A recent report found that over half of British consumers are concerned about the impact robots and AI will have on their everyday lives. Even tech luminaries like Elon Musk and Stephen Hawking have their reservations, both claiming that AI poses one of the greatest threats to humanity in the 21st century.
The reality is legislators around the world possess very little knowledge or understanding of the tenets and wide-ranging capacity of this technology. Last year, the UK Government released an in-depth report on how it plans to approach artificial intelligence, where it sees the benefits and areas for concern. While it is a positive step forward in acknowledging the inevitable, the key takeaway was that it is too soon to set out sector-wide regulations.
The software industry has as much reason to be cautious of the technology as society does more broadly. The advent of AI means that the industry will need to acquire new skills and adopt radical new ways of working.
That means organisations and industries alike need to find a way to help employees do more with these technologies and focus on how the industry can cope with and respond to these new developments.
Whether on the battlefield or at the highest level in business, every decision still has consequences, even when it is made by a machine.
The future is not without humans
The only way for AI technology to thrive is with humans by its side. This new era is not about replacing humans, but about evolving past our capabilities to create a more efficient and sustainable world. British commentator and author Tom Chatfield states:
“Our creations grow faster than we do, and may reach further. Yet we are all the more remarkable for this – if we can learn to let go of the insistence that it all still comes down to either a battle or a love affair.”
It is no longer about man versus machine. The human brain will always be the first and most powerful supercomputer. AI was born from it, and evolved to embody the principles of learning, logical reasoning, problem solving, perception and a basic understanding of language in their purest form. Becoming fundamentalists about humans versus technology will only hold us back.
Businesses and people have to embrace a culture of openness and collaboration. Whether that means sharing research, making algorithms more open or creating shared data sets consciously free of the biases their human creators might embed in them, the tech industry has to overcome these challenges so that humans are truly in control of the technology, not the other way around.
It is up to today’s technology leaders to pave the right way forward, striking a balance between humans and technology, especially in areas that are customer-service heavy.
If society focuses on creating a future that builds on what it has already achieved, anchoring humanity and our planet firmly at the core, the prospect of this brave new world will finally start to outweigh the fear of the unknown.
Sourced by Suman Nambiar, global head of AI Practice at Mindtree