Not worried about unethical AI? You should be

As AI becomes more pervasive, employers, tech leaders and employees must start discussing the problems posed by unethical AI.

AI is the hot technology of the moment, and businesses across a variety of sectors are expected to apply it to a wide range of use cases and problems in the next few years. But ethical discussions are lagging behind. According to a study from Genesys, more than half of the employers questioned in a multi-country opinion survey said their companies do not currently have a written policy on the ethical use of AI or bots, even though 21% expressed a definite concern that their companies could use AI unethically.

“As a company delivering numerous customer experience solutions enabled by AI, we understand this technology has great potential that also comes with tremendous responsibility,” said Steve Leeson, VP UK & Ireland, Genesys. “This research gives us important insight into how businesses and their employees are really thinking about the implications of AI – and where we as a technology community can help them steer an ethical path forward in its use.”


Ignorance is not bliss

Genesys found that nearly two-thirds (64%) of the employers surveyed expect their companies to be using AI or advanced automation by 2022 to support efficiency in operations, staffing, budgeting or performance, although only 25% are using it now. Yet despite this growing trend, 54% of the employers questioned say they are not troubled that AI could be used unethically by their companies as a whole, and 52% say the same of individual employees. Employees appear even more relaxed than their bosses, with only 17% expressing concern about their companies.

More than a quarter of the employers surveyed (28%) are apprehensive that their companies could face future liability for an unforeseen use of AI, yet only 23% say their company currently has a written policy on the ethical use of AI and bots. Meanwhile, a further 40% of employers whose companies lack a written AI ethics policy believe they should have one, a stance supported by 54% of employees.

Even more interesting, just over half of employers (52%) believe companies should be required to maintain a minimum percentage of human employees relative to AI-powered robots and machinery. Employees are more likely (57%) than employers (52%) to support such a requirement being imposed by unions or other regulatory bodies.


Millennials want to see it in writing

Millennials (ages 18-38) are the age group most comfortable with AI, but they also hold the strongest view that guardrails, in the form of ethical practices, are needed. Whether it is anxiety over AI, desire for a corporate AI ethics policy, worry about liability related to AI misuse or willingness to require a human employee-to-AI ratio, it is the youngest group of employers who consistently voice the most apprehension. For example, 21% of millennial employers are concerned their companies could use AI unethically, compared with 12% of Gen X employers and only 6% of Baby Boomers.

“Our research reveals both employers and employees welcome the increasingly important role AI-enabled technologies will play in the workplace and hold a surprisingly consistent view toward the ethical implications of this intelligent technology,” continued Leeson. “We advise companies to develop and document their policies on AI sooner rather than later – making employees a part of the process to quell any apprehension and promote an environment of trust and transparency.”


UK employers and employees mostly agree on AI ethics

On the whole, UK employers and employees trust each other’s ethics — and their companies — when it comes to AI. There is also surprisingly strong support among both UK employers and employees for the regulation of AI.

• 59% of UK employees don’t believe AI or bots will take their jobs within the next ten years.
• 64% of employees believe there should be a requirement that companies maintain a minimum percentage of human employees versus AI-powered robots and machinery, and 61% of employers agree.
• Only 26% of employers in the UK say their company has a written policy on the ethical use of AI or bots.
• 29% of UK employers are concerned their companies could face future liability related to their use of AI.
• More than half (52%) of UK employers aren’t afraid that their companies might misuse AI, and 67% of employees share that view, although 29% of employers admit they are afraid.





