Hate speech and AI: Can AI provide a fix?

Mark Zuckerberg’s recent congressional hearing has brought many topics back to the forefront of scholarly discussion. Among them is Zuckerberg’s claim that artificial intelligence will be able to filter out all hate speech within the next five to 10 years. Is he right?

AI doesn’t have the best track record of recognising and removing hate speech, but could that be changing? Let’s take a deeper look at what AI can (and could) do, as well as the implications of those advancements.

Current AI capabilities

Facebook already uses AI to filter out some hate speech, mostly pro-ISIS and al-Qaeda content, and that’s not the only way the company applies artificial intelligence. The tech giant has also applied it to functions like facial recognition (so users can “tag” themselves and their peers in photos uploaded to the site), advertising, and further AI development.

But where is Facebook’s AI headed? That’s harder to say. If Zuckerberg’s timeline is to be believed, machine learning will have to make serious strides in the coming years. Currently, the people who build AI programs are subject to bias, and those prejudices manifest in the AI they build. To reach a point where machine learning can overcome those human biases, Facebook will need to create truly intelligent, adaptive programs that can recognise internal prejudices, something many humans struggle to do.

It’s not entirely out of reach, however, if proper efforts are made. David Attard, founder of CollectiveRay, feels that biased training sets are an issue but not an immovable block in developing smart hate-speech filtering. “If efforts are undertaken to make available a good training set, it should take much less than five years,” he says.
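
To make the training-set point concrete, here is a minimal sketch of how a hate-speech classifier might be trained on labelled examples. Everything in it is a hypothetical placeholder: the texts, labels, and simple bag-of-words model stand in for the thousands of carefully curated samples and far larger models a production system would need.

```python
# A minimal sketch of training a text classifier on a labelled hate-speech
# dataset, using scikit-learn. All examples and labels below are hypothetical
# placeholders, not a real training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: 1 = hateful, 0 = benign.
texts = [
    "I hate this weather",            # benign despite the word "hate"
    "all [group] should disappear",   # hateful without any obvious slur
    "great game last night",
    "[slur] don't belong here",
]
labels = [0, 1, 0, 1]

# TF-IDF features plus logistic regression: about the simplest viable setup.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The model only learns patterns present in its training data, which is why
# a narrow or biased training set produces a narrow or biased filter.
print(model.predict(["I hate Mondays"]))
```

The quality of the filter is bounded by the quality of the data, which is Attard’s point: the model itself is commodity machinery, while the curated training set is the hard part.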

Even if the capacity is there, though, there’s still much disagreement about what would constitute a good set of standards for an AI to learn from.

The nuances of hate speech

One of the biggest hurdles in developing AI that can flag and remove hate speech is the fact that so much of communication relies on context and implications specific to the groups that use that language. As Tim Brown, online marketer and owner of Hook Agency, says, “It’s easy enough to make sure explicit hate words are removed from social platforms — but the meaning behind those words, and the context that is needed to really create a purposeful algorithm to remove those comments, is likely still a long way away.”

A phrase may seem neutral to one segment of the population but be pointedly hurtful to another. Some groups have given harmful implications to once meaningless phrases or symbols — the triple parentheses, for instance — highlighting just how much communication happens outside the literal meanings of spoken (or written) words. To truly solve the problem of hate speech, machine learning will need to advance enough to start identifying those ever-changing non-verbal cues and understand the context around them.
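
To see why literal matching falls short, consider a toy blocklist filter of the kind Brown describes as “easy enough”. The word list and sentences below are hypothetical, but they show both failure modes at once: coded symbols slip through, and benign mentions get flagged.

```python
# A toy blocklist filter, illustrating why literal word matching cannot
# capture context. The blocklist entries are hypothetical placeholders.
BLOCKLIST = {"slur1", "slur2"}

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocklisted word."""
    return any(word in BLOCKLIST for word in text.lower().split())

# Coded hate speech, like triple parentheses around a name, contains no
# blocklisted word, so it slips straight through:
print(naive_filter("look at (((their))) posts"))  # False: missed

# Meanwhile a benign sentence that merely mentions a slur gets flagged:
print(naive_filter("teachers should explain why slur1 is harmful"))  # True: over-blocked
```

A context-aware model would need to weigh the surrounding words, the speaker, and the target, which is precisely the part that is “still a long way away”.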

The hate speech debate has another layer of complexity: the internet isn’t governed by American legislation or standards. The World Wide Web is, as the name suggests, worldwide. Any AI tasked with “eliminating hate speech” would also have to navigate international moral and linguistic codes, which is no small feat by any measure.

Adaptability in the future of AI

There is a lot to gain from the implementation of an advanced AI filtering system: completely eradicating hate speech and similar hurtful media would make for happier web users and more pleasant online experiences. But the tech hasn’t quite hit that point yet, and there are still legitimate worries about AI’s ability to evolve.

Even if technology reaches a point where written words can be filtered easily, there’s still the issue of other media types. Content is increasingly consumed in the form of photos and videos, so AI would have to be capable of evolving alongside those formats.

“Videos and photos can pose a problem, not just for Facebook but other websites as well,” says Emily Lawrence of Frontier Bundles. “To really solve hate speech, AI needs to be capable of evolving for other mediums. A static filtering program won’t be enough.” A program unable to intelligently adapt to changing mediums and meanings could make the problem even harder to recognise or, worse, silence commentary on the issue altogether.
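
Read architecturally, Lawrence’s point is that a moderation pipeline must be able to accept new media types without being rebuilt. The sketch below is purely illustrative and assumes hypothetical per-medium classifiers; the stub functions stand in for real trained models.

```python
# A sketch of an extensible moderation pipeline. New media types can be
# registered without touching the dispatch logic, unlike a static,
# text-only filter. All classifiers here are hypothetical stubs.
from typing import Callable, Dict

classifiers: Dict[str, Callable[[bytes], bool]] = {}

def register(medium: str):
    """Decorator that registers a classifier for a given media type."""
    def wrap(fn: Callable[[bytes], bool]) -> Callable[[bytes], bool]:
        classifiers[medium] = fn
        return fn
    return wrap

@register("text")
def text_filter(payload: bytes) -> bool:
    # Stub: a real system would call a trained text model here.
    return b"slur" in payload.lower()

@register("image")
def image_filter(payload: bytes) -> bool:
    # Stub: a real system would run OCR plus an image classifier here.
    return False

def moderate(medium: str, payload: bytes) -> bool:
    """Route content to the classifier registered for its media type."""
    handler = classifiers.get(medium)
    if handler is None:
        raise ValueError(f"no classifier registered for {medium!r}")
    return handler(payload)

print(moderate("text", b"this post contains slur words"))  # True
print(moderate("image", b""))                              # False (stub)
```

The design choice matters: a registry like this lets a platform add a video or audio classifier later, whereas a hard-coded text filter would have to be rewritten for each new medium.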

If tech organisations could reach the requisite level of adaptability, they could solve some major issues. Studies on cyberbullying show myriad negative effects on the human psyche, including depression and anxiety, so if AI could filter out hateful speech patterns on social media, it could spare many web users long-term trauma or emotional damage.

Will AI be the main filter for all hateful speech within the next decade? And how will programs regulate and decide what qualifies as “hate speech” in the first place? The answers to these questions are still largely up in the air, and Facebook hasn’t been forthcoming with concrete timelines or advancements. But if the right groups can start a good discussion around how this topic should be navigated in the coming years, it’s possible we’ll see the productive, intelligent solution Zuckerberg has promised.

Written by Alec Sears, a digital marketing expert with a love for AI and IoT
