Responsible Tech Series 2021 Part 1: exploring ethics within digital practices

Delivered by Information Age in partnership with sponsors OneTrust and Clifford Chance, the first part of this year’s Responsible Tech Series brought tech experts together to discuss how stakeholders can better practise digital ethics for the benefit of society.

With AI, social media and other digital technologies playing an increasingly prominent part in people’s lives, now is the time to think about the long-term impacts they can have, and what can be done to improve ethical practices in tech.

The event was co-hosted by Rebecca Rae-Evans, founder of Tech for Good Live, and Jenna Kelly, senior conference producer at Bonhill Group, who kicked off the proceedings.

“We’re able to move beyond discussion and into practical application,” said Rae-Evans. “When we’re feeling inspired, we can find useful tools to support us in our organisations.”

Areas of digital ethics discussed during the event included digital accessibility, how ethical frameworks can be made more suitable for underrepresented groups, and the differences between data breaches and ethical breaches.

The Data Breach vs The Ethics Breach, and preparing for both

Joseph Byrne, CIPP/E, CIPM, CIPT, privacy solutions engineer at OneTrust, delivered a keynote exploring data breaches and ethics breaches, including the differences between both kinds, and how organisations can prepare for them.

The protection of personal and corporate data has been recognised as more vital than ever, with Gartner naming digital ethics and privacy among its top 10 strategic trends for 2019, and Byrne argued that breaches fall into three categories:

  • Security breach: Information accessed without authorisation, such as by a threat actor or through employee negligence;
  • Data/privacy breach: The sending of data to an untrusted environment;
  • Ethical breach: Knowingly giving data to untrusted sources, or wilfully violating codes of conduct around data usage; when data is put at risk deliberately, a data or privacy breach shifts to an ethical breach (see the sketch below).
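
A minimal sketch of how this three-way taxonomy might be encoded in an incident-triage tool; every class, field and function name here is hypothetical, not taken from OneTrust or any real product:

```python
# Hypothetical sketch of the three-way breach taxonomy described above.
# All names are illustrative; this is not OneTrust's or any real product's API.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class BreachType(Enum):
    SECURITY = auto()  # information accessed without authorisation
    PRIVACY = auto()   # data sent to an untrusted environment
    ETHICAL = auto()   # data knowingly put at risk, or a code of conduct violated

@dataclass
class Incident:
    unauthorised_access: bool    # e.g. threat actor, employee negligence
    untrusted_destination: bool  # data ended up somewhere it shouldn't
    deliberate: bool             # the risk was taken knowingly

def classify(incident: Incident) -> Optional[BreachType]:
    """A data/privacy breach shifts to an ethical breach when it is deliberate."""
    if incident.deliberate:
        return BreachType.ETHICAL
    if incident.untrusted_destination:
        return BreachType.PRIVACY
    if incident.unauthorised_access:
        return BreachType.SECURITY
    return None  # no breach to report

# Example: data wilfully sent to an untrusted source is an ethical breach.
print(classify(Incident(unauthorised_access=False,
                        untrusted_destination=True,
                        deliberate=True)))  # BreachType.ETHICAL
```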

“Morals are subjective, and just because we can do something, doesn’t mean we should,” said Byrne, during his presentation.

Byrne went on to state that categorising breaches can be dangerous without a fully fledged framework, adhered to from the top of the organisation downwards, and that compliance with regulations does not always take ethics into account.

“Reputation can take years to build, but can be destroyed instantly,” he added.

While security frameworks manifest themselves in the form of standards such as ISO 27001 and NIST, frameworks that cover data and privacy breaches include GDPR, CCPA and ISO 27701. Ethical frameworks, meanwhile, can come in the form of an ethical code of conduct, or ethics by design.

In conclusion, Byrne said that designing tech with ethics in mind, from the very start, “will help further down the line”.

2021’s AI challenges

Next up in the digital ethics section of the 2021 Responsible Tech Series was a conversation on the challenges that artificial intelligence (AI) has faced in 2021, between Toju Duke, manager at Women in AI Ireland, and Jonathan Kewley, partner & co-head, Tech Group at Clifford Chance.

AI bias and the importance of diversity

Duke, who also serves as a programme manager at Google, describes herself as a strong advocate for ethical AI. Exploring the challenge of AI bias, she expressed the belief that the technology’s fast pace of evolution has made biases difficult to keep track of.

“Data sets often contain old data that doesn’t reflect societal changes, so this historical data often has biases in it,” she explained.

“We often talk about ‘ML fairness’, how fair the ML system is to society, and there are so many definitions of fairness. It all depends on context.”
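
Duke’s point can be made concrete with a small sketch: on the same set of entirely hypothetical predictions, a model can satisfy one common fairness definition (demographic parity) while violating another (equal opportunity). The data and helper functions below are illustrative only:

```python
# Illustrative sketch: two fairness definitions can disagree on the same model.
# The data below is made up purely to demonstrate the point.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # actual outcomes
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # model predictions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # protected group

def selection_rate(g: str) -> float:
    """Share of group g receiving a positive prediction (demographic parity)."""
    return y_pred[group == g].mean()

def true_positive_rate(g: str) -> float:
    """Share of group g's actual positives predicted positive (equal opportunity)."""
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

# Demographic parity holds: both groups are selected at the same rate...
print(selection_rate("A"), selection_rate("B"))          # 0.5 0.5
# ...but equal opportunity fails: true positive rates differ between groups.
print(true_positive_rate("A"), true_positive_rate("B"))  # 0.5 1.0
```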

Kewley, meanwhile, turned the discussion to the lack of diversity at the top of global organisations, and believes that we need to look at why women aren’t choosing to go into computer science careers. “Computer science has been owned by men for too long,” he commented.

According to Duke, while organisations such as Women in AI are trying to make computer science careers seem more attractive to women, “many still aren’t interested”, with gender pay gaps and male-dominated cultures still present.

Regulations

AI regulations are starting to take shape, notably in the EU, but with such measures not set to be fully enforced for another few years (implementation of the AI Act in the EU isn’t expected until 2024), Kewley believes that companies aren’t thinking about compliance enough.

“Companies think that they’re over the hill when it comes to privacy,” he said. “But compliance isn’t being thought about yet, and it’s now a very real concern to be considered.”

Regarding what more can be done to ensure that regulations are suitable globally, Duke suggested keeping track of how products embedded with AI systems, distributed around the world from countries such as China and the US, are designed.

“We need a global framework for AI,” she commented. “Work is being done by the US and the World Economic Forum, but this isn’t globally standardised. This needs to be proactive.”

Positive discussions and risk committees

On the flipside, a recent survey conducted by Clifford Chance and YouGov, with participation across Europe and the US, found that 66% of respondents feel positive about AI, and Kewley believes that positive discussion about the technology is a step in the right direction.

“Negative aspects of AI are starting to be counter-balanced,” he said. “It’s important to talk about the positive effects it can have, as well as discussing due diligence and transparency.”

The future of online content moderation

Closing off the event was a panel discussing what the future holds for online content moderation. Moderated by Nick Ismail, content editor at Information Age, the discussion featured insights from Hunter-Torricke, Thwaite, Suryawanshi and Nanfuka, who explored the benefits, challenges and possible future of online content moderation.

The discussion first covered how content moderation has changed over time, particularly since the start of the global pandemic. Hunter-Torricke cited recent research finding that over 40% of Gen Z Internet users turn to social media for information, and said that trust in mainstream media has greatly decreased.

“Cast your mind back to the first decade of the 2000s, and you had these ugly, basic websites,” he explained. “Then in 2007, everything changed: the iPhone had arrived, and we entered this new mobile era.

“Content became more visceral, and everyone had a camera and the Internet in their pocket, which brought more user-generated content.

“Now, videos have taken over from photos, in terms of delivering engaging content through social media platforms.”

Thwaite, meanwhile, believes that content moderation can mean different things to different people, commenting: “Some think that content moderation is a new thing and get up in arms about it, claiming that free speech should be protected.

“But content moderation dates back to a standard set by the BBC in the 1920s, and newspapers and radio have been moderated for over 50 years.”

Benefits of moderation

As director of SWASTHA, Suryawanshi has seen the benefits that content moderation can bring: “In the last 20 years, we’ve seen a democratisation of the Internet. Before this, there were only a few experts, and only they monitored content.

“Now, people come together to decide whether pieces of content are right for the community, and there is more of a choice of platforms.”

The healthcare-facing Wikipedia director was keen to dispel the misconception that any user can publish anything on the website without it being monitored: “Everyone can edit articles, but this needs to be cited and referenced.

“Statistics have shown that people have relied more on Wikipedia than their doctor when looking for healthcare information. We have a team of doctors to check that content is properly cited.”

The downsides

On the flipside, Hunter-Torricke believes that society is becoming more aware of the downsides of content moderation, and is thinking more about improvements: “Years ago, there was lots of cheerleading and very little scrutiny, especially when it came to vulnerable groups.

“However, some people tend to preach all these easy solutions, not realising that there is no easy solution. Various stakeholders are needed to tackle issues in different ways.”

The future of content moderation

When it comes to how the future of content moderation could take shape, Nanfuka commented that “to understand the future, you need to understand the origins”, referring to an example of how content has been moderated in parts of sub-Saharan Africa.

“During elections in the 2000s and early 2010s, people would use SMS to communicate, but messages with certain words would not be shared by the network,” she said.

“There was this consistent need to maintain power despite the evolution of technology, and content moderation has happened offline, with arrests being made due to what’s been posted online.

“In some parts of Africa, bloggers need to pay up to $1000, and the state needs to know who you are, which affects online content. We see this as a threat across the continent.”

Overall, it was agreed during the panel discussion that for content moderation to truly work, a global mindset with diverse groups of stakeholders is required, in order to fully represent society across the world. Transparency and explainability were also cited as important, with stakeholders needing to clearly explain how content moderation will work.

The second part of the 2021 Responsible Tech Series, exploring sustainability, will take place on 14th October.

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.