Social networks such as Twitter, YouTube and Facebook have created a multibillion-dollar industry and networked the world. For most of us, these sites allow us to connect with friends, share our experiences and promote our interest groups or businesses.
Social networks, however, like every community, have a dark side. It is well publicised that they are used as a communication vehicle for bullies, child sex abusers and, more recently, highly media-savvy extremist groups.
The continued growth and lasting mainstream appeal of social media will depend in large part on the social network companies’ ability to police their user-generated content. Accidentally stumbling upon offensive and objectionable images and content is disturbing for users and damaging to the company. Hence, the large U.S. social-networking sites employ massive labour forces to handle ‘content moderation’ – the removal of offensive material.
So, yes, the social network companies are largely taking responsibility for blunting or blocking the actions of abusers and extremists by taking down pages containing illicit images such as child sex abuse and, more recently, ISIS beheading videos. But the question of where their social responsibility begins and ends is set to be tested publicly and legally in the coming months.
These companies remove offensive content from their sites and actively report suspected child sex abusers to the relevant authorities, and few would dispute that this is, and should be, part of their responsibility in doing business. But the issue of radicalisation, extremism and terrorism is less black and white, and is further complicated by the rights of free speech and freedom of the internet.
In a game of algorithmic whack-a-mole, social media companies have chosen to take down thousands of suspected extremist and terrorist sites, but discussions in the U.S. suggest they are not proactively notifying authorities of potential threats or offenders to assist intelligence and national security activities. In fact, it would appear they are actively working to curb, or work around, any laws that would compel the reporting of such activities. This notion is perhaps highlighted by Facebook offering a version of its website that works on Tor – an anonymising program that has become synonymous with the 'dark web' and is favoured by organised crime gangs and paedophiles.
There is no doubt this is a complicated issue, but surely one should err on the side of caution; most would argue there is a social responsibility to actively share these potential threats to national security with the authorities charged with protecting it.
It is a concern to law enforcement agencies around the world that those charged with protecting people aren’t always able to access the evidence needed to prosecute crime and prevent terrorism even with lawful authority. GCHQ Director Robert Hannigan highlighted this in an article he wrote for the FT yesterday: 'The Islamic State of Iraq and the Levant (Isis) is the first terrorist group whose members have grown up on the internet. They are exploiting the power of the web to create a jihadi threat with near-global reach. The challenge to governments and their intelligence agencies is huge – and it can only be met with greater co-operation from technology companies'.
With no single entity in control and no rule book, social media is the new Wild West, and it appears to favour appointing its own local town sheriff rather than taking the wider view of national and international security.
While countries like Yemen have been widely recognised as harbouring training camps for those who threaten national security, social networks are, albeit unwillingly, increasingly hosting the same in a virtual community. And as we see the rapid shift from traditional warfare to cyber warfare, it feels like Joe Citizen and some governments are sleepwalking their way through this problem – but a wake-up call could be coming.
As the problem of returning foreign fighters continues to grow, the risk of a significant security incident in a country far from the front line of ISIS engagement increases. It is inevitable that, if and when an incident does happen, the question 'who knew?' will be asked. If a social networking site did know but chose not to inform the authorities, the answer may provide more clarity around the scope of their social responsibility – but will it come too late?
The internet has been with us for two decades now. It's time to stop treating it like a fantasy landscape, accept that it is an extension of our community, and start imposing some of the controls developed communities expect with regard to law and order.
All modern nations have laws and policies controlling land, sea, air and even space, so perhaps it is now time to recognise the internet as the “fifth dimension”? Given the reliance that most citizens and enterprises now place on the internet for day-to-day activities, how much longer can we expect authorities to protect us with one arm tied behind their back? Maybe it is time we listened to their warnings and supported their lawful requests for information from social networks where it could protect a child, save a life or defend a border.
Sourced from Paul Stokes, COO of Wynyard Group