Google and Facebook are both in the spotlight for disseminating so-called “fake news”, despite the artificial intelligence (AI) systems these companies have developed and deployed on their platforms.
If AI currently struggles to discern fact from fiction, could it be that human intelligence remains a necessary component of any successful deployment of AI?
In a much simpler time, Google was a search engine that indexed websites. Today, the search giant is evolving towards giving users summarised answers to their billions of questions. Type in a word and you’ll get the definition. Type in a name and you’ll get a short biography. Type in a question and roughly one in five times, Google will generate a specific answer.
This evolution of Google Search into something one could call Google “Q&A” goes hand in hand with the rapid evolution away from typed search towards AI-powered voice assistants.
In response to straightforward questions, Google usually cites reliable facts and sources pulled from its “knowledge graph”. However, the more abstract or off-the-wall the question, the more likely the answer will be formulated from potentially unreliable sources.
Multiple examples have recently gone viral, each demonstrating the shortcomings of Google’s search methods and the ability of the AI utilised to generate factual information.
Ask Google if Obama is planning a coup, and you’ll get the answer that he might be. Ask Google if women are evil and Google delivers the answer that all women have a “degree of prostitute in them”. Ask Google if all republicans are fascists, and Google produces an answer that includes the suggestion they’re all Nazis.
Most of these answers are derisory and inherently damage Google’s brand. More dangerous, however, is the potential to warp users’ perception of the world by providing incorrect and unfortunately inflammatory information.
Giving patently false narratives top billing on Google lends these stories unwarranted credibility. Google’s use of AI is helping to usher in what has been called the “post-truth” era.
Google is certainly cognizant of the problem. In a recent blog post, its VP of News explained efforts to improve responses to questions, including a “fact check” tag and several other initiatives focused on authenticating the sources from which its AI draws its answers.
Similarly, Facebook – now the world’s top referral source for web traffic – has rolled out its “disputed news” tag as a flagging system for fake news in the US.
The question is, how inherent is the problem to the system? Simply put, both platforms are powered by AI that is only as accurate as the information on which it is trained.
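The point about training data can be made concrete with a toy classifier. The sketch below is not any system Google or Facebook actually uses – the functions and data are purely illustrative – but it learns word–label associations by simple counting and shows that corrupting the labels in the training set flips the model’s answers:

```python
from collections import Counter

def train(examples):
    """For each word, record which label it most often co-occurs with."""
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

def predict(model, text):
    """Majority vote over the learned labels of the words in the text."""
    votes = Counter(model[w] for w in text.split() if w in model)
    return votes.most_common(1)[0][0] if votes else "unknown"

clean = [
    ("the earth orbits the sun", "fact"),
    ("the moon is made of cheese", "fiction"),
]
# Flip every label to simulate training on unreliable sources.
corrupted = [(text, "fiction" if label == "fact" else "fact")
             for text, label in clean]

print(predict(train(clean), "the earth orbits the sun"))      # fact
print(predict(train(corrupted), "the earth orbits the sun"))  # fiction
```

The model itself is unchanged between the two runs; only the reliability of its training data differs, and its answers flip accordingly.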
Weighing the veracity of different perspectives requires the kind of critical thinking that humans possess. Beyond tagging and user feedback, perhaps what Google and Facebook really need is human expertise to steer and sanity-check their AI – a technology still very much in its infancy.
Sparrho made a conscious decision early on to combine AI with expert human curation. As a science discovery platform indexing over 60 million papers and patents, it is vital that we deliver the most accurate and relevant results to user queries.
By blending our AI with the insight of our users and expert scientists, we can identify non-linear links between research papers, ensuring that the most relevant items rise to the top.
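Sparrho has not published the details of its ranking, but the general idea of blending a model’s relevance score with expert curation can be sketched as follows – the function name, weighting scheme and data here are all illustrative assumptions, not the actual system:

```python
def blended_score(ai_relevance, endorsements, weight=0.3):
    """Mix a model's relevance estimate (0-1) with a capped bonus for
    expert endorsements, so human judgement can promote items the
    model under-ranks without drowning out the model entirely."""
    human_signal = min(endorsements, 5) / 5  # cap expert influence
    return (1 - weight) * ai_relevance + weight * human_signal

# (title, model relevance score, expert endorsements) -- made-up data
papers = [
    ("Paper A", 0.90, 0),  # strong model score, no expert backing
    ("Paper B", 0.70, 5),  # weaker model score, strongly endorsed
]
ranked = sorted(papers, key=lambda p: blended_score(p[1], p[2]),
                reverse=True)
print([title for title, _, _ in ranked])
```

In this toy blend, the strongly endorsed paper overtakes the one the model alone preferred; capping the human signal keeps a handful of votes from overwhelming the model’s estimate.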
Rather than thinking of AI as a replacement for human intelligence, information providers such as Facebook and Google ought to consider how one can enhance the other.
Only by blending AI’s ability to quickly process vast quantities of data with the ability of humans to understand nuance and context can people ensure facts remain facts in this post-truth era.
Sourced by Dr Vivian Chan, CEO and co-founder of Sparrho