As digital technologies become more sophisticated, intuitive and powerful, their growing impact on society raises pressing ethical questions. From corporations such as Facebook, Amazon and Apple being summoned before the US Congress over anti-competition allegations, to the spread of misinformation and bullying online, there is a clear need to rethink how big tech should be regulated, and how technology can help build a more ethical future.
With this in mind, we take a look at the biggest trends in digital ethics.
Forces at work
- Accelerated adoption: “The seemingly overnight transformation of organisations during the pandemic is now well documented, with many shifting to enable remote working and offer digital products and services to survive. Organisations are now taking a more serious look at what accelerated adoption of automation and AI means for them, and are developing data strategies that could shift entire business models. Without integrating digital ethics into this acceleration, the ethical risks proliferate: the more data we use and the more technology we incorporate, the less we understand the potential consequences.”
- Growing awareness: “The public is now more aware of the potential for unintended consequences of technology, such as the amplification of misinformation, and bias and discrimination in digital services caused by poor-quality data or faulty algorithms. At the same time, more people are aware of how some digital business models work, making use of personal data in ways that, increasingly, are worrying to consumers. Additionally, the proliferation of media stories reporting incidents of ethical failings due to technology or data use has put these issues in the public consciousness, and the popularity of films such as The Social Dilemma and Coded Bias shows a widespread interest in these topics.”
- Faltering levels of trust: “The annual Edelman Trust Barometer report has shown a number of emerging concerns over the past few years. This includes a widening trust gap between the informed public and the general population, and a decline in public trust in the technology sector. With public trust in a precarious position, organisations are starting to recognise the need to address digital ethics concerns.”
- Regulation prospects: “While governments are still not keeping pace with the rate of change in tech and the acceleration of the adoption of increasingly advanced technologies such as AI, there are now signs they are on the political and legislative radar. The fact that regulation is only on the horizon, but not imminent, is no excuse for organisations not to act. In light of all the factors described above – accelerated adoption leading to increased ethical risk, growing public awareness, and shaky public trust – there are plenty of reasons to integrate digital ethics into organisational strategies and governance immediately.”
The power of private data
Now more than ever, organisations are collecting and making use of their customers’ private data. While this has proved an effective way to provide more personalised experiences, data breaches and the sale of data assets have demonstrated a need for users to be more mindful about the data they share.
“According to this report, legal experts have warned of a ‘privacy crisis’ caused by a rise in companies exploiting QR codes to obtain names, addresses, telephone numbers and email details before passing them on to marketers, credit companies and insurance brokers. The challenge, then, is how to educate and inform end users about the power of their private data, and why they should not share their personal details too willingly and openly.
“Our work, for example, involves applying augmented intelligence techniques to help users, and indeed businesses, understand the value of digital ethics in enhancing security and the overall user experience.
“Ethical tech is smart tech since, by definition, it must involve building algorithms that understand ethical limits and, as a result, achieve a more intelligent human-machine symbiosis.”
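The QR-code concern raised above can be made concrete with a short sketch. A scanned code often resolves to nothing more than a URL, and personal details can ride along in its query string. The payload, parameter names and flagging rule below are illustrative assumptions, not taken from the report cited:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical payload decoded from a venue check-in QR code.
# The visible destination looks innocuous, but the query string
# carries identifiers that can be passed on to third parties.
payload = ("https://example-checkin.com/venue/42"
           "?email=jane%40example.com&phone=07700900123&partner_id=mkt-771")

url = urlparse(payload)
params = parse_qs(url.query)

# Fields a cautious user (or an auditing tool) might treat as personal data.
SENSITIVE = {"email", "phone", "name", "address"}
flagged = {k: v for k, v in params.items() if k in SENSITIVE}

print(flagged)  # {'email': ['jane@example.com'], 'phone': ['07700900123']}
```

The point is not the code itself but what it makes visible: the user sees a venue page, while the operator quietly receives contact details ready to be passed on.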
Woods said: “At Splunk, our approach to digital ethics is based on the need for a clear objective behind the use of data. Education is key to a better understanding of how AI and machine learning are best used, subsequently reducing ethical risks.
“With no clear protocol in place on how to identify, evaluate, and mitigate the risks, teams end up either overlooking problems or rushing to fix them at the last moment. When companies have attempted to tackle the issue at scale, they’ve tended to implement strict, imprecise, and overly broad policies that lead to false positives in risk identification and stymied production.
“Companies need a plan for dealing with the inevitable risks — how to use data and develop AI products without falling into ethical pitfalls along the way.”
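The protocol Woods describes — identify, evaluate, mitigate — can be sketched as a simple triage step that every AI use case passes before it ships. The criteria, weights and threshold below are illustrative assumptions, not Splunk’s actual process:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    uses_personal_data: bool
    automated_decision: bool   # decides without a human in the loop
    explainable: bool          # can outputs be explained to users?

def risk_score(uc: UseCase) -> int:
    """Crude additive score over a few ethical-risk criteria."""
    score = 0
    if uc.uses_personal_data:
        score += 2
    if uc.automated_decision:
        score += 2
    if not uc.explainable:
        score += 1
    return score

def triage(uc: UseCase, review_threshold: int = 3) -> str:
    """Route high-scoring use cases to an ethics review before launch."""
    return "needs ethics review" if risk_score(uc) >= review_threshold else "proceed"

recommender = UseCase("product recommendations", True, True, False)
print(triage(recommender))  # needs ethics review
```

Even a checklist this crude gives teams the “clear protocol” the quote asks for: problems are surfaced early and routed to a named reviewer, rather than overlooked or patched at the last moment.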
Data ethics and AI in retail
Many of the ways that retailers currently make use of consumer data are facilitated by AI, with AI models automating sales operations and making service decisions, such as recommendations based on past purchases.
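The “recommendations based on past purchases” pattern can be sketched as a tiny co-occurrence model: count how often items are bought together, then suggest the items most frequently co-purchased with what the customer already owns. The basket data here is invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy purchase history: each set is one customer's basket.
baskets = [
    {"tea", "milk", "biscuits"},
    {"tea", "biscuits"},
    {"coffee", "milk"},
    {"tea", "milk"},
]

# Count how often each ordered pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(owned: set, k: int = 2) -> list:
    """Suggest the k items most often co-purchased with the owned items."""
    scores = Counter()
    for item in owned:
        for (a, b), n in co_counts.items():
            if a == item and b not in owned:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"tea"}))  # ['biscuits', 'milk'] on this toy data
```

Even a model this simple shows where the ethics questions enter: the recommendations are driven entirely by the purchase data collected, so biased or poorly collected data flows straight through into the decisions customers see.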
Dr Sam Short, chief data scientist at Upside Saving, explained what retail outlets should consider when it comes to ethics in this context: “Data ethics differs from data privacy: it is considered in the context of artificial intelligence (AI) and the impact on people of AI models that are derived from data.
“Whilst the data ethics landscape is constantly evolving, within retail one thing is clear: given the purposeful shift in consumers’ buying habits toward more ethical products and companies, whether it’s animal-free food, slavery-free clothing, or plastic-free packaging, the retail winners will be those who embed AI and data ethics in all parts of their organisation.
“Retailers who want to champion data ethics can start by asking themselves the following questions:
- Are you fair with consumers when you collect their data? Are they cognisant of the collection, and even given a share of the value?
- Are you transparent about how you use data and AI?
- Do you understand which features in the data have the biggest impact on the results the models produce?
- Is it easy for a customer to understand, and then challenge, a decision made using data and AI?
- Do you consider the risk associated with each AI use case, and are there clear lines of accountability within the organisation?”
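The fairness question above can be made measurable with a simple demographic-parity check: compare the rate of favourable outcomes (here, a personalised discount offer) across customer groups. The data and the 0.8 threshold (the common “four-fifths rule”) are illustrative assumptions:

```python
def selection_rate(outcomes: list) -> float:
    """Share of customers in a group who received the favourable outcome."""
    return sum(outcomes) / len(outcomes)

# 1 = offered a personalised discount, 0 = not offered (invented data)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 0.375

# Disparate-impact ratio: disadvantaged group's rate over the other's.
ratio = selection_rate(group_b) / selection_rate(group_a)
print(round(ratio, 2))  # 0.5 — below the 0.8 threshold, worth investigating
```

A ratio this far below the threshold does not prove the model is unfair, but it is exactly the kind of signal that should trigger the risk review and accountability questions on the list.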
Finally, Stuart Solomons, founder of Ernie Connects, identified the use of video consultations within health and social care over the past year as another major ethics trend.
Solomons said: “Remote video consultations provide a time-saving alternative to face-to-face consultations where formal physical examination is not required.
“In care environments, video consultation helps to speed up the process of getting medication into the hands of the people who urgently require it, by cutting out the ‘middleman’ of carers who traditionally relay messages.
“Demand for video consultation technology in the UK care sector is expected to rise, with person-centred technology widely available now, in the form of handheld tablets and phones, that connect those needing care to GPs swiftly and seamlessly. The time frame from consultation to receiving medication is faster than ever.”