The Digital, Culture, Media & Sport Committee has published its final report following an 18-month investigation into disinformation and fake news.
The report follows government plans to introduce new laws by the end of the year to protect users online. Media companies are tackling the issue head-on by hiring more content moderators and using next-generation technologies to flag toxic content.
A government spokesman said: “The government’s forthcoming white paper on online harms will set out a new framework for ensuring disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.
“This week the culture secretary will travel to the United States to meet with tech giants including Google, Facebook, Twitter and Apple to discuss many of these issues.”
Facebook: A digital gangster?
In the report, MPs branded Facebook “digital gangsters”, saying the social media giant was using its dominance to crush rivals and stifle competition.
The report calls for a code of ethics to ensure social media platforms remove harmful content from their sites.
Internal Facebook documents obtained by the committee also showed that the tech firm “violated” laws by selling people’s private data without their permission.
“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the report stated.
Tory MP and committee chairman Damian Collins said: “The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights.
“Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”
Mark Zuckerberg, Facebook’s co-founder and CEO, was accused of showing “contempt” for the UK Parliament.
The era of data privacy
In the era of data privacy, the stakes are high for tech giants, as governments plan to introduce new laws and impose steeper fines by the end of the year.
Bhupender Singh, CEO of Teleperformance, said: “The stakes are high, as the government is imposing tough sanctions such as fines and shutting down sites that fail to flag up and take down inappropriate content. Top of mind for business leaders is finding ways to reduce the backlog, increase accuracy and resolve the efficiency problems associated with stopping the spread of misinformation.”
“While experts are sceptical about the use of artificial intelligence to monitor content, it is important to recognise that the best results will be delivered by humans and technology working together. Humans have the emotional intelligence to bring the real-world context to detect malicious or fake content, whilst automation grants the speed and accuracy of managing a high volume of content.”
Shadow culture secretary Tom Watson said the “era of self-regulation for tech companies must end immediately”.
What would this new era look like? The report suggests that a new code of ethics would be overseen by an independent regulator, which would have the power to launch legal action against those who breach it.
Similar to the ICO, this regulator could then issue significant fines against the guilty parties.
A battle of multiple fronts
Regulatory pressure, combined with the rise of social sharing, means businesses are battling challenges on multiple fronts. Purging the internet of fake news and malicious content has become a mandate for business leaders around the world, who face rising scrutiny over their content moderation practices.
The sheer volume of user-generated content published online continues to grow at an astonishing rate, posing a unique challenge not just for tech giants but for all businesses, which must wade through vast amounts of material to identify and remove anything most users would find objectionable.
Singh said: “A brand’s reputation is on the line every day, with users from all over the world contributing and sharing content at the click of a button, anytime, anywhere. With the rise in social sharing and the government’s proposed plans, judging the appropriateness of content has become a mandate for all businesses, 24/7.”
An Electoral Commission spokesman said: “We agree that reform of electoral law is urgently needed.
“The UK government must ensure that the tools used to regulate political campaigning online continue to be fit for purpose in a digital age.”
Karim Palant, from Facebook UK’s public policy department, said: “We share the committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.
“We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for seven years. No other channel for political advertising is as transparent and offers the tools that we do.
“We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.
“While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.”