The role of online ID authentication in increasing social media safety

How we interact with digital technology is rapidly changing, and it’s having a major impact on our lives: the ways in which we communicate, how we’re educated and how we work. In no small part, our behaviours and attitudes are being shaped by our digital world.

Social media, or an early form of it, first appeared in the late 1980s and 90s with the introduction of real-time online chat and bulletin board messaging. These foundations paved the way for the likes of Friendster, LinkedIn, Myspace and, most notably, Facebook. Now over 77% of the UK is on social media platforms, internet usage has been at an all-time high over the past year, and the average UK adult spends around four hours a day online. Social media has been, and remains, a space for us to make connections and to share our lives and opinions with each other, but just as the debate around how much of ourselves to share online continues, public pressure is mounting around online safety and how users can be held accountable.

So-called online anonymity has allowed an environment of cyber bullying and hate speech to fester and grow, poignantly highlighted by the recent racist attacks on members of the England football team after the Euro 2020 final. The question now is how platforms can more easily hold those responsible accountable for their actions, and whether technology is part of the answer.

ID authentication and verification is an idea that has been floated by the UK government and others to combat the issue. It is challenging for social media companies to implement, owing to the shortage of available solutions, the lack of comprehensive ID databases against which to check an ID, and users’ reluctance to trust online platforms with such personal information. However, some companies are already dipping a toe in the water, investing in and testing authentication and verification solutions according to the level of risk their services face and the problems arising.

Events of the last year, in which misinformation, trolling and abuse spread and democracy was undermined, have provided a watershed moment for the public, government and companies to step up and maximise their efforts. All approaches to minimising these harms, and all solutions for enhancing safety, should be on the table, including AI. Otherwise, the lack of online safety and security will undermine trust and confidence in platforms, along with all the opportunities they offer, as they grow and develop, to make our lives easier and strengthen our bonds with family and friends.

Perceived anonymity and the role of social media platforms

When I discuss anonymity online, I’m not referring to data protection or the surveillance of our data online, but rather to the practice of concealing one’s identity in order to speak freely and with impunity. While this certainly allows the free expression of speech and ideas without fear of judgement or prejudice, this perceived anonymity can also be used to spread hate, misinformation and hurt, and in some cases to attack others.

Some commentators and researchers have observed that online abuse and cyber bullying have been on the rise during the pandemic, as people attempted to replicate their daily lives online to retain a sense of normality. This rise is reported to have been highest among children and teens, with one report noting a 70% increase in hate speech in online chats among young people. The fact that this trend seems to be on the up has increased focus and attention on social media platforms and the role they play in helping to combat hate speech and cyber bullying.

It’s critical that all platforms step up to face these challenges alongside others, including governments, educators and NGOs, as everyone has a role to play in countering these difficulties, not least the combative and at times abusive behaviour of key leaders in society. Learning civility and behaving with respect towards others online is a major challenge of the 21st century, and one that must be met if we’re to have an internet that is safe and offers the wonder of sharing ideas and opinions, and of meeting and connecting with others across the world, as it promised in its nascency.

Perhaps a more critical imperative is to help users understand, through education, the platforms themselves or both, that they are in fact not anonymous: if the harm they cause is criminal, they can be traced and, if necessary, face the full force of the law. The perceived anonymity is in itself harmful, and users who turn to harassment and abuse put their own lives at risk, as the attack on the US Capitol on January 6th aptly illustrated. Some of those involved now face legal consequences, and many have lost their employment, friends or loved ones as a result of their actions.

While online abuse and harassment can never be completely eradicated and will remain an ongoing challenge, safety and privacy need to be a central part of the design of online products and services as we go forward. Regulation that is smart, proportionate and scalable is broadly acknowledged as necessary, even by the online platforms. However, not all platforms face the same problems and issues, so a ‘one size fits all’ approach is unlikely to suit the diverse online world we all participate in. Crucially, as the Australian Online Design Code highlights, the first step for companies is to undertake a risk assessment of their service and to identify and mitigate the risks of abuse taking place. The emphasis has to be on identifying and eliminating online harms before they occur.

Countering online abuse is also a necessary consideration for the investment and venture community, which would benefit from ensuring that the products it seeds and funds place safety and ethical considerations at the heart of their early design processes.

The role of technology

As touched upon earlier in the piece, there has been much suggestion and discussion about what steps can or should be taken to improve safety online through identity authentication and verification. There are different approaches to authenticating users, of which full identity verification is the most rigorous. In its purest form, online verification would require people to prove their identity, with a form of personal ID such as a passport or other government-issued document, when registering on a platform or when using certain features, for example financial transactions. This can improve security, and where the risk to users is high it can deter and limit bad actors looking to exploit and abuse others, while also building trust amongst users participating in an online community.
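To make the idea concrete, here is a minimal sketch of how a platform might gate high-risk features behind a verification check. Everything in it is illustrative: the verification states, the set of high-risk features and the provider behind them are assumptions standing in for whichever ID service and risk policy a platform actually adopts.

```python
from dataclasses import dataclass
from enum import Enum


class VerificationStatus(Enum):
    UNVERIFIED = "unverified"
    PENDING = "pending"      # documents submitted, checks in progress
    VERIFIED = "verified"    # identity confirmed against a trusted document


@dataclass
class User:
    user_id: str
    status: VerificationStatus = VerificationStatus.UNVERIFIED


# Hypothetical set of features a platform might deem high risk.
HIGH_RISK_FEATURES = {"financial_transactions", "private_messaging"}


def can_use_feature(user: User, feature: str) -> bool:
    """Allow high-risk features only once identity has been verified."""
    if feature in HIGH_RISK_FEATURES:
        return user.status is VerificationStatus.VERIFIED
    return True


# Example: an unverified account can browse, but cannot transact.
alice = User(user_id="alice-01")
assert can_use_feature(alice, "browse_feed")
assert not can_use_feature(alice, "financial_transactions")
```

The design point worth noting is that verification is tied to specific features rather than to the account as a whole, which matches the idea above that checks can be proportionate to the level of risk.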

Alongside ID authentication and verification, age estimation technology is a powerful AI tool for identifying and flagging accounts where there is doubt about the age of a platform user. This is particularly important for platforms that host content solely for users aged 18 and over, or that serve very young users under the age of 13. It would be encouraging to see more investment in developing these solutions further, but this may take time given the privacy and ethical challenges that surround the use of AI for age authentication.
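As a rough illustration of the flagging logic only, the sketch below compares a declared age with a model’s estimate and escalates conflicting accounts for review. The estimate_age function, the confidence cut-off and the five-year gap are all hypothetical placeholders; a real system would rest on a trained model and a careful privacy and ethics review.

```python
AGE_GAP_YEARS = 5      # hypothetical: flag if estimate and declared age diverge widely
MIN_CONFIDENCE = 0.7   # hypothetical: ignore low-confidence estimates


def estimate_age(profile_photo: bytes) -> tuple[float, float]:
    """Placeholder for a trained age-estimation model.

    Returns (estimated_age, confidence in [0, 1]).
    """
    raise NotImplementedError("stands in for a real facial-analysis model")


def should_flag_for_review(declared_age: int, profile_photo: bytes) -> bool:
    """Escalate accounts whose estimated age conflicts with the declared one."""
    estimated_age, confidence = estimate_age(profile_photo)
    if confidence < MIN_CONFIDENCE:
        return False  # too uncertain to act on automatically
    return abs(estimated_age - declared_age) >= AGE_GAP_YEARS
```

The two thresholds matter: flagging only queues an account for human review, and no automated decision is taken on an uncertain estimate.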

AI is the broader concept of computers and machines being able to simulate human behaviour and carry out tasks in a ‘smart’ way. Used intelligently, AI can aid in monitoring online behaviour and fighting fake accounts. Automated moderation and blocking, for example, are powerful tools that allow a platform to automatically block malicious users and monitor for fake accounts. AI can also be leveraged to find and filter inappropriate content, using human-labelled training data and machine learning, to eliminate online harms before they reach their intended target.
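The sketch below illustrates one common shape for such a pipeline: a classifier scores content, high-confidence cases are removed automatically, and borderline cases are escalated to human moderators. The blocklist scorer is a deliberately crude stand-in; in practice the score would come from a model trained on human-labelled examples, as described above.

```python
BLOCK_THRESHOLD = 0.9    # hypothetical: confident enough to remove automatically
REVIEW_THRESHOLD = 0.5   # hypothetical: uncertain, so escalate to a human

# Placeholder word list; a real system would use a trained classifier.
FLAGGED_TERMS = {"insult_a", "insult_b"}


def score_toxicity(text: str) -> float:
    """Crude stand-in for a trained model: fraction of words on a blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(word in FLAGGED_TERMS for word in words) / len(words)


def moderate(text: str) -> str:
    """Route content to one of three outcomes based on its score."""
    score = score_toxicity(text)
    if score >= BLOCK_THRESHOLD:
        return "remove"        # auto-remove and action the account
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # queue for a moderator to decide
    return "allow"


print(moderate("hello there"))  # -> "allow"
```

Keeping a human-review band between the two thresholds is what lets automation act on clear-cut cases without fully delegating judgement to the machine.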

Although there has been a step forward in the types of technology that can be used to fight online toxicity, we still have a long way to go. It’s important that all platforms undertake this work with some urgency, supported by academics and others who can deliberate the ethics, and push the internet towards being the more civil and participatory space it was originally conceived to be.

Written by Annie Mullins OBE, safety advisor at Yubo and founder of Trust + Safety Group
