Their task: to assess the likelihood and impact of a ‘worst case worm’ – a virus that could exploit a hitherto-unnoticed vulnerability in some widely installed package, spread via email and web pages and then prevent the user from getting access to an antidote by mounting denial of service (DOS) attacks on antivirus update sites. Standard defences, they concluded, would be rendered useless in the face of this malevolent blend of attacks.
They also decided the threat of such a superworm was very real. Others agree: “Technically it is feasible now,” warns Pete Simpson, an expert in emerging threats for content security vendor Clearswift. “How long before it happens really depends on geopolitical factors, but my feeling is that it is becoming more likely by the month.”
Viruses are hardly the only Internet security headache for organisations and individuals. New threats appear constantly: organised criminals are using DOS attacks to extort protection money; ‘phishing’ emails attempt to glean sign-on details from online bank users and e-shoppers; the first viruses aimed at mobile phones have emerged; and hackers are getting into corporate networks via new channels such as WiFi.
Of course, that is not how it was meant to be. During its take-off years in the late 1990s, the Internet was seen as an unruly frontier that would, in time, be tamed. To underpin ecommerce, governments, the IT industry and international law enforcement agencies would ensure the vulnerabilities were closed down. By now, the picture was expected to be a whole lot better. Instead, things have got a lot worse.
As Paul Simmonds, global information security director at ICI, observes, “Our borders are less and less successful, more porous, and we are going to be forced to do something about this.”
But who, if anyone, is actually taking the responsibility to do that? As with wider matters of security, the onus might naturally be expected to fall on governments and law enforcement. But existing laws and methods of policing have proved too parochial for the task. And although vendors of security software, Internet service providers (ISPs) and infrastructure software companies are all doing their bit, their energies remain uncoordinated. With the future of online commerce or even an open Internet at stake, a more effective solution is urgently needed.
At least some of the criticism for the situation is being directed at the security software industry. Antivirus software companies, for example, have come under fire for sustaining a business model that tackles the effect rather than the cause.
“Antivirus tends to be using technology like a watchdog and what we really need is a bloodhound,” says David Porter, head of security and risk at Detica, a specialist IT consultancy. “There’s no reason why the technology can’t be more like a roving Sherlock Holmes that tries to find this stuff before it happens.”
Richard Millar, UK MD of Internet Security Systems, is more explicitly critical. “The bad guys have got ahead of the antivirus vendors,” he says. So-called zero-day exploits, which target a fresh vulnerability in a popular package, have undermined the traditional ‘signature’ way of detecting viruses. “The threats are rising exponentially,” he says.
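The weakness Millar describes can be seen in a minimal sketch of signature scanning: the scanner flags only byte patterns already in its database, so a zero-day payload, by definition, matches nothing. The signature bytes and worm names below are purely illustrative, not real antivirus signatures.

```python
# Illustrative sketch of classic signature-based scanning. The signature
# bytes here are hypothetical; real antivirus databases are far larger
# and use more sophisticated matching.
KNOWN_SIGNATURES = {
    "Netsky.P": b"\x4d\x5a\x90\x00netsky",   # hypothetical pattern
    "Sasser.A": b"\x4d\x5a\x90\x00sasser",   # hypothetical pattern
}

def scan(payload):
    """Return the name of the first matching signature, or None."""
    for name, signature in KNOWN_SIGNATURES.items():
        if signature in payload:
            return name
    return None

# A previously catalogued worm is caught...
assert scan(b"junk\x4d\x5a\x90\x00sasser trailer") == "Sasser.A"
# ...but a never-before-seen ("zero-day") payload sails straight through.
assert scan(b"\x4d\x5a\x90\x00never-seen-before") is None
```

The gap is structural: until a vendor has captured a sample and shipped a signature update, the scanner has nothing to match against, which is exactly the window a zero-day exploits.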
Others argue that the industry is coming at the problem from the wrong angle, suggesting that the whole premise behind defending the organisation by warding off threats with products such as firewalls and spam filters is flawed.
Says ICI’s Paul Simmonds: “Vendors come to me trying to flog deeper firewalls, but that’s not what my business is telling me that they want in the future. The border is inhibiting their ability to do business quickly.”
Simmonds sits on the Jericho Forum, a group of CIOs from large UK and multinational companies, which advocates a move to ‘perimeterless security’ in which the focus shifts away from building ever stronger walls around the organisation’s systems.
Rather, its security model is open, relying on strong authentication, encrypted protocols and systems built to be inherently secure. That is the only way forward, he says, as the Internet’s infrastructure was only built with a small, trusting network in mind. “Now we’re in this big nasty world but the computer industry hasn’t caught up.”
Mary Ann Davidson, chief security officer (CSO) of Oracle, agrees: “Like it or not, perimeter defence has gone away. The question is whether people are in denial or whether they embrace it.”
Achieving fully perimeterless yet secure Internet transactions requires much closer collaboration between vendors – and between them and law enforcement agencies. The efforts underway here look piecemeal. In the UK, the recently established National Infrastructure Security Co-ordination Centre (NISCC) aims to establish a dialogue with companies that provide critical services such as Internet services.
The UK government too realises more needs to be done. In June, for example, the All Party Internet Group (APIG) met to discuss proposals to update the 10-year-old Computer Misuse Act. Although the original legislation was broad enough to cover many new developments, its sanctions reflected the more limited nature of cyber-crime in the early 1990s. APIG recommended the jail sentence for hacking should be increased from six months to two years and that hacking should be made an extraditable offence – crucial for catching criminals based outside the UK. APIG also proposed that denial of service attacks become a specific offence.
ICI’s Simmonds says that light sentences are part of the problem: “A deliberate virus, which wipes a hard drive, is akin to putting a Molotov cocktail through an office window. We should be treating the culprits in the same way.”
But harsh penalties only work as a deterrent if criminals fear conviction. And the global nature of the Internet means that cyber-criminals are elusive, often residing outside their targets’ jurisdictional boundaries. The authorities hope that intensified international cooperation efforts will boost arrests. The UK’s National High-Tech Crime Unit recently worked with its Russian counterparts to track down three Russians who were trying to extort large sums from UK gambling web sites by threatening to hit them with denial of service attacks. But such examples will remain exceptional without a stronger international legal framework.
At a July meeting in Geneva of the United Nations’ International Telecommunication Union (ITU), regulators and industry representatives from 60 countries called for legislation for prosecuting spammers and electronic fraudsters to be standardised around the world.
Like the UK’s All Party Internet Group, the ITU wants another set of industry companies to make greater efforts to act as gatekeepers and control the security threat: ISPs.
“We need ISPs to come together with ethical codes of conduct, which they can supervise because they are the control merchants in this international network,” said Robert Horton, acting chair of the Australian Communications Authority and chairman at the ITU’s Geneva meeting. Suggestions have come from the likes of Johannes Ullrich of the SANS Institute’s Internet Storm Centre, which monitors security threats and helps with responses. He says ISPs should block certain ports to stop automated ‘bots’ looking for weak passwords. “These bots are surprisingly successful. With very little effort this would eliminate 80% of the problem and give ISPs more time to look at the rest of their traffic.”
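The kind of coarse edge filtering Ullrich proposes can be sketched in a few lines: the ISP discards inbound connections to service ports that password-guessing bots habitually probe, before the traffic ever reaches a customer. The port list below is illustrative, not a recommendation, and a real deployment would apply such rules in router ACLs rather than application code.

```python
# Hypothetical sketch of an ISP edge port filter of the kind Ullrich
# describes. The blocked-port set is illustrative only; real policies
# vary by provider and are enforced in network hardware.
BLOCKED_PORTS = {
    23,    # telnet - common target for password-guessing bots
    135,   # Windows RPC
    139,   # NetBIOS session service
    445,   # SMB file sharing
    1433,  # Microsoft SQL Server
    3389,  # Remote Desktop
}

def should_drop(dst_port):
    """Return True if the edge filter would discard a connection to this port."""
    return dst_port in BLOCKED_PORTS

assert should_drop(445)        # SMB probe dropped at the provider's edge
assert not should_drop(443)    # ordinary HTTPS traffic passes untouched
```

The appeal of so blunt an instrument is cheapness: a static port rule costs almost nothing per packet, which is why Ullrich argues it would free ISPs to scrutinise the remaining traffic more carefully.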
Ullrich says his recommendations have met with resistance from ISPs, reluctant to invest in expensive routers and firewalls and wary of where such content filtering would end. For example, individual companies might decide it would be safer to block all content arriving from dangerous quarters.
But some ISPs are increasingly finding a commercial imperative to take a wider security role. Those belonging to the London Internet Exchange (LINX) recently voted through a code of practice that gives them the mandate to shut down websites promoted through spam, even if junk mail messages are sent through a third-party or over a different network.
ISPs have traditionally left the management of security to users. Others would put even more of the onus on users themselves.
Susan Brenner, a professor of law and technology at the University of Dayton, Ohio, suggests that such a requirement should be enshrined in law. “If you can’t catch the criminals, you have to take care of yourself,” she says. Her solution is to hold individuals and companies criminally responsible for securing their own systems. This works on a similar basis to “aiding and abetting” any other crime. If a user leaves his or her computer open to attacks from malware such as Trojans that then proceed to cause damage to other computers, that user is liable for the consequences.
Brenner believes the crux of the problem lies in our approach to law enforcement. On the web, the traditional rules of proximity, scale and physical constraints do not apply. Top-down policing was not developed to deal with such an intangible, dispersed threat and the reactive model is too difficult to implement: paucity of resources and cyber crime’s international nature mean Internet patrols are unfeasible. Prevention at a user level is therefore the only way to bring crime down to more manageable or tolerable levels.
Some have criticised the proposal as draconian or unenforceable. But as with legal compulsions to wear a seatbelt, Brenner believes there needs to be an “incentive” if people are to be made aware of their security responsibilities – for their own sake as well as of the Internet community at large.
Brenner also advocates software companies should be legally obliged to take on greater responsibilities for writing secure software. She is not alone in that view.
Cranfield University’s Professor Brian Collins, former technology director at the government’s signals intelligence agency, GCHQ, believes that IT companies should be designing systems that are less crime prone. “We’re now building towns with no dark corners where [criminals] can hide,” he says. “Why aren’t we doing that with the Internet?”
Oracle CSO Mary Ann Davidson sees this as a “systemic problem” in the software industry. “What if civil engineers built bridges the way developers built software?” she asks. “The software industry doesn’t understand they’re building infrastructure. Bridges are safe, secure and reliable before anything else. Not everybody who works on a bridge is professionally accountable, but somebody is. That construct doesn’t exist for software professionals. I think the whole industry needs to change and maybe look into some professional licensing scheme.”
The problem is widespread, but Microsoft – which bears the brunt of much of the criticism in this area – has tried to build a security ethos into all its development. Indeed, the latest update to Windows XP, Service Pack 2, runs the risk of breaking applications running on XP in order to make the whole more secure. Other efforts are evident in the Trusted Computing Group, an alliance with Intel, IBM, HP and AMD that promotes a hardware and software cryptography standard.
And Microsoft’s current programme of offering a bounty to those providing information on individuals writing viruses that target its software had a high-profile success in July with the arrest of Sven Jaschan, who admitted to writing the Netsky and Sasser worms that made up 70% of all virus infections in 2004. But Jaschan was only a teenager trying to prove himself to his peers. “In most cases the hobbyists aren’t the ones that really create the damage,” says Meta analyst Tom Scholtz. “As a rule their work doesn’t carry the dangerous payloads that can create real damage.”
But even if new approaches to development are widely adopted, fears linger that criminals will always learn to work around them. Many advocate that one of the few keys available is user education. Says Professor Collins of Cranfield: “In the economic world, we’re moving towards the recognition that some basic level of qualification, such as the European Computer Driving Licence, is needed. It means users at least know what driving safely looks like. It should absolve people from accidentally doing something damaging.”
It is a vital element, but without dragging everyone with a PC back to school, education alone can only do so much. “Every company seems to have one or two people who just don’t learn and keep on clicking whatever comes their way,” says Ullrich. “Short of finding some way to fire these people there is not much you can do. Somehow the system has to be more resilient so they can be more easily isolated and limit the impact.”
As that suggests, a safer Internet is only going to come about by all those with a vested interest in it taking greater responsibility, and coordinating their security initiatives where possible – hopefully before the first ‘worst case worm’ hits.