The dawn of AI ethics – from equal representation to AI legislation

Improving standards in AI ethics requires equal representation in technology and new laws, such as the EU's proposed AI legislation.

The use of artificial intelligence (AI) has surged as organisations look to capitalise on the technology’s potential to significantly improve operations and customer experience.

Unchecked, this growth in the use of AI can be dangerous: infringing on people’s rights, impacting their livelihoods and even creating the threat of war.

AI ethics (the set of values, principles and techniques that represent accepted standards of right and wrong, used to guide moral conduct in the development and use of AI technologies), combined with a greater level of diversity in technology and in the creation of AI algorithms, offers a solution to the threat of unchecked AI.

This article will explore how AI can be damaging if left unchecked; the importance of equal representation in the creation of AI and the role of the education system; the importance of trust through transparency; the European Commission’s recent AI legislative proposal and what it means for industry both in the EU and the rest of the world; and how organisations can implement ethical AI.

AI discrimination

The use of AI in industries such as healthcare is creating unprecedented benefits, including the early detection of cancer.

However, the technology can also cause significant harm, particularly through discrimination.

Looking at the example of AI-enabled recruitment, Jonathan Kewley, partner and co-head of the Tech Group at Clifford Chance, explains that there are cases where algorithms have rejected qualified candidates based on their religion, ethnicity, education or geographical location.

“The issue is that some companies [who are using AI-enabled recruitment services] don’t understand how the technology is working, they don’t see the inherent bias, which is built into it. And suddenly, they’ve got a huge issue that’s creeped in invisibly,” he says.

The issue doesn’t stop at the hiring process. Uber drivers, both in the UK and more recently in the Netherlands, have claimed that Uber’s algorithm fired them unjustly.

“[These AI algorithms] can fundamentally damage someone’s life prospects, their ability to get a job, their receipt of health services, their ability to buy a product or to get a mortgage. I don’t think the public quite realises how insidious some of this technology can be,” adds Kewley.

“The public and employees are concerned about the dehumanising effects of AI” – Kewley

To overcome this AI discrimination and its impact on people’s lives, AI algorithms need to be built on data sets that represent the people the product is trying to serve; in almost every case, every demographic should be represented in order to reduce AI bias. A simple first-pass check is sketched below.
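To make that concrete, a first-pass bias audit can be as simple as measuring who is actually in the data and whether outcomes differ by group. The sketch below is a minimal illustration in Python: the toy dataset and column names are hypothetical, and the “four-fifths” threshold is a common fairness heuristic rather than anything prescribed by regulators.

```python
# A minimal, hypothetical representation-and-outcome check for a
# recruitment dataset. Column names and data are illustrative only.
import pandas as pd

def representation(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of each demographic group present in the data."""
    return df[group_col].value_counts(normalize=True)

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes (e.g. being hired) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def four_fifths_check(rates: pd.Series) -> bool:
    """Common heuristic: flag disparate impact when any group's
    selection rate falls below 80% of the best-served group's rate."""
    return (rates.min() / rates.max()) >= 0.8

# Toy data standing in for a real applicant pool.
applicants = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "hired":  [0,   1,   1,   1,   0,   1,   1,   0],
})

print(representation(applicants, "gender"))
rates = selection_rates(applicants, "gender", "hired")
print(rates)
print("Passes four-fifths check:", four_fifths_check(rates))
```

In this toy example the female selection rate (0.5) is only two-thirds of the male rate (0.75), so the check fails, which is precisely the kind of invisible skew Kewley describes creeping into production systems.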

Equal representation and the education hurdle

Fundamentally, AI algorithms are built by humans. The technology is a creation of society and it needs to represent the whole of society.

The problem is that the vast majority of those studying STEM subjects and then going on to design the AI algorithms are from a particular demographic: white, middle-class men.

There is a significant shortfall of diversity in technology, not just in terms of gender but also in the representation of ethnic minorities and people from less advantaged backgrounds.

Until the lack of representation in education and in the workplace is addressed, across gender, race and socio-economic background, the issue of AI bias and discrimination will continue.

There are numerous examples of this. One is Apple’s credit card, which was claimed to offer different credit limits for men and women; another was the A-Level algorithm debacle, which gave students grades based on their location.

There are some initiatives to help correct the balance. Clifford Chance, for example, has launched a bursary scheme with Hertford College, University of Oxford, to encourage greater diversity among those studying Computer Science. The aim is to inspire young people from underrepresented backgrounds to pursue careers involving technology, and in doing so reduce persistent cases of tech bias and prejudice.

Commenting on this, Kewley says: “If you have a group of diverse people moderating that come from a varied set of backgrounds, and can represent the voices and diverse views of our society, then the technology they build is much less likely to be biased.

“We can create all the laws we want. But actually, we need to start from the building blocks of education to ensure that we’re building ethical technology from the get go. That means having a diverse group of people studying computer science, not just privileged white men.”

Herbert Swaniker, senior lawyer at Clifford Chance, agrees that it’s crucial to create opportunity and access when it comes to education.

He says: “Governments are doing a lot more about tech literacy, specifically artificial intelligence. Part of the role for companies is to make information available about how these technology platforms work, in order for people to trust technology better. But, crucially, people want to get involved – having that education access point is critical.”

AI legislation

Bringing more diverse talent into the technology sector will help with the problem of AI bias and discrimination. However, the regulatory situation must improve as well; greater clarity is needed.

At the moment, a cloak of invisibility surrounds the development and application of AI. That’s why the EU has set out its AI legislative proposal, which Kewley calls “the most radical legal framework in 20 years” and a necessary “safety standard” for the future of ethical AI.

Should it come into force, the proposal would require providers and users of AI systems to comply with specific rules on data transparency, governance, record-keeping and documentation. It would also require human oversight for high-risk systems, along with setting standards for robustness, accuracy and security.

AI systems would need to meet these requirements before being offered on the EU market or put into service.

Failure to comply with the legislation, if approved, would attract fines similar to those under the GDPR, with an even higher tier for serious breaches: up to €30 million or 6% of total annual turnover for the preceding financial year, whichever is higher. These rules would also apply to companies from outside the EU in some cases.

Compared to other regions, the EU is going big and bold with this AI legislation. The US, by contrast, is unlikely to take such an aggressive position on new regulation, because of concerns about the dampening effect it would have on innovation.

According to Kewley, “the US is going to take a more liberal approach. They believe that existing frameworks available to them, like the FTC, are enough to regulate this area.”

Looking to the East and arguably the world’s great AI power, China has the Beijing Principles: an ethical framework released by the government to govern how AI should be developed and designed.

Its main tenet is that the R&D of AI should serve humanity and conform to human values as well as the overall interests of mankind. The Beijing Principles state that AI should not be used to act against, exploit or harm human beings.

The EU’s proposed AI legislation is the most comprehensive. However, it’s not just about one region. There needs to be a global, multilateral debate on the issue.

Trust through transparency

Crucially, this new law may help produce a new age of enlightenment for the public when it comes to their data and the power of AI, much as the Cambridge Analytica scandal did for data privacy.

Organisations that are transparent about their work will help build trust among the groups of people that they serve.

“There is an opportunity for businesses to get ahead of this now and embrace transparency rather than it just becoming the next compliance issue,” says Swaniker.

Enforcing the legislation

According to Swaniker, enforcing the proposed AI legislation will be a challenge in its own right, as the difficulties of enforcing the GDPR have shown.

“We’ve got some road to run before the final rules are in place. But it’s apparent that the whole framework is based on risk and that will encourage companies to have to meaningfully evaluate whether a particular product is, for example, subliminally trying to manipulate people.

“There’s no doubt it will be difficult to enforce and so there’s going to need to be a sharpening of the concepts in order to make it more fit for purpose,” he says.
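Swaniker’s point that “the whole framework is based on risk” implies a practical first step for companies: triaging each AI use case into a risk tier before it ships. The sketch below assumes a tier structure loosely mirroring the proposal’s broad categories; the use cases and the mapping itself are illustrative assumptions, not legal guidance.

```python
# A hypothetical pre-deployment triage of AI use cases by risk tier,
# loosely mirroring the EU proposal's risk-based structure.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (e.g. subliminal manipulation)"
    HIGH = "permitted with strict obligations (oversight, records, accuracy)"
    LIMITED = "transparency obligations (e.g. disclosing a chatbot)"
    MINIMAL = "no new obligations"

# Illustrative mapping an organisation might maintain for its portfolio.
USE_CASE_TIERS = {
    "subliminal_ad_targeting": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,        # recruitment is treated as high-risk
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def pre_deployment_gate(use_case: str) -> str:
    # Unknown systems default to HIGH, so caution is the baseline.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    action = "DO NOT DEPLOY" if tier is RiskTier.UNACCEPTABLE else "review"
    return f"{use_case}: {action} [{tier.value}]"

for case in USE_CASE_TIERS:
    print(pre_deployment_gate(case))
```

The design point is the default: any use case not yet classified is treated as high-risk until someone has meaningfully evaluated it, which is exactly the evaluation exercise Swaniker describes.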

Swaniker also notes that the regulation takes a distinctly European approach compared with the rest of the world, because it is built on concepts of fundamental rights that are rooted in EU law and are not the same in other jurisdictions.

He adds that other countries, as mentioned above, have existing laws, and that irrespective of these, they expect companies to be accountable for how they use AI.

“It’s not fair to say that Europe are doing something and no other country cares. There’s just different regional approaches to how this AI and technology use is going to be regulated.”

“This is regulation about how AI is used, not AI itself, which is an important nuance to draw, because it involves organisations practically thinking about whether their use of AI is high-risk” — Swaniker

Implementing an AI ethics standard

One of the key tenets of the EU’s AI legislation is the requirement for human oversight in high-risk areas. Providers, and the organisations that adopt and implement the technology, will need to control carefully how it is used, with stringent reviews to ensure that it is safe before proceeding. And unlike much of current practice, organisations that deploy the technology will have to keep checking on it after launch.

In practice, Kewley suggests that organisations need to be forensic when it comes to implementing an AI ethics standard.

He says: “Businesses are used to accounting standards and financial audits, and have built up whole teams and methodologies around these practices. It’s become part of their DNA and that’s the way in which deploying ethical AI will be best implemented, which is effectively to know exactly what you’re using, the context it’s being used and what the outcomes are. It’s also important to understand the product lifecycle, how it has been deployed and in which countries, and what decisions it’s taking.”
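One way to picture that audit discipline is a system register that travels with each model, recording what is in use, where it runs and what decisions it takes. The sketch below is a hypothetical illustration only; every field name is an assumption, not a template drawn from the proposal or from Clifford Chance.

```python
# A hypothetical register for an AI system, logging the facts an
# audit would ask for: what is deployed, where, and what it decided.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AISystemRecord:
    name: str                      # what you're using
    purpose: str                   # the context it's being used in
    deployed_countries: list[str]  # where it has been deployed
    model_version: str
    decisions: list[dict] = field(default_factory=list)

    def log_decision(self, subject_id: str, outcome: str, human_reviewed: bool) -> None:
        """Append an auditable record of each consequential decision."""
        self.decisions.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "subject": subject_id,
            "outcome": outcome,
            "human_reviewed": human_reviewed,  # the oversight hook for high-risk uses
            "model_version": self.model_version,
        })

# Example: registering a CV-screening model and logging one decision.
register = AISystemRecord(
    name="cv-screener",
    purpose="shortlisting job applicants",
    deployed_countries=["NL", "UK"],
    model_version="2.3.1",
)
register.log_decision("applicant-104", "rejected", human_reviewed=False)
print(register.decisions)
```

As with financial audit trails, the value is less in any single record than in being able to answer, months later, exactly which model version took which decision, in which country, and whether a human ever looked at it.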

In the future, organisations will also need to deploy teams of people to oversee the AI and how the ‘robots’ are working: left alone, this technology could be dangerous, and whole new teams and ways of working will need to be created to keep it in check.

Nick Ismail

Nick Ismail is a former editor for Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...
