The UK car manufacturing industry in the 1980s provides an uncomfortable parallel for today's software developers. At the time, security was not an integral part of car design, and car thieves and joy-riders were able to break into and 'hot-wire' almost any vehicle in a matter of minutes. Manufacturers, happy for the extra business in replacing stolen cars or damaged parts, kept silent. It was only after intense pressure from governments and consumer groups that manufacturers agreed to introduce first steering wheel locks, then alarms and, finally, engine immobilisers as standard.
Much corporate software is still stuck in those pre-wheel lock days. Although packaged software developers have started to make security an aspect of product design in the last couple of years, the vast majority of organisations who develop software in-house do not build security into their development processes from the ground up.
So while most have spent the last five years locking down their IT infrastructure with firewalls and filters, virtual private networks and intrusion prevention software, "they've not thought about what's sitting on top of that – the apps," says David Grant, product marketing director of vulnerability testing vendor Watchfire.
Although there are secure versions – and even military-grade versions – of databases, web servers, operating systems and other key pieces of systems software, organisations are still being left exposed by the structure of their applications, especially web-based applications whose front end is presented to the outside world. Written without adequate attention to security, these applications leave open holes that can expose the organisation to risks such as data theft, identity theft and denial of service attacks.
To put the level of exposure in perspective, a review of code at 2,000 companies by Watchfire found 98% had flaws in their web applications. Cahoot, a UK-based online bank, was the most recent victim of the widespread failure to develop secure applications.
In November 2004 a Cahoot customer discovered a flaw that allowed site visitors to view other customers' accounts by simply altering the URL in the browser's address bar. Cahoot's site had to be taken down for 10 hours while the problem was fixed. If other companies are to avoid similar breaches of privacy legislation and damage to the reputation of their core services, experts say they need to build security into their application development processes from the very start.
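The Cahoot incident is an instance of what security testers call an insecure direct object reference: the application trusts an identifier supplied in the URL without checking who is asking. The sketch below is purely illustrative (the account data, `view_account_insecure` and `session_user` are invented for the example, not drawn from Cahoot's system), but it captures the class of flaw and the fix.

```python
# Illustrative sketch of the class of flaw behind the Cahoot incident:
# an account ID taken straight from the URL, with and without an
# ownership check. All names and data here are hypothetical.

ACCOUNTS = {
    "1001": {"owner": "alice", "balance": 2500},
    "1002": {"owner": "bob", "balance": 900},
}

def view_account_insecure(account_id):
    # Vulnerable: trusts the identifier in the URL outright, so changing
    # /account?id=1001 to /account?id=1002 shows someone else's data.
    return ACCOUNTS.get(account_id)

def view_account(session_user, account_id):
    # Fixed: verify that the logged-in user actually owns the account.
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != session_user:
        return None  # deny by default
    return account
```

The fix costs one comparison per request; the point is that the check must be designed in, because no firewall in front of the application can tell a legitimate account lookup from a manipulated one.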
The cost and practical difficulty of patching flaws after an application has been deployed is by itself sufficient to build a business case for more secure coding. A 2002 study by the US National Institute of Standards and Technology found that removing a flaw after a system is operational costs three to 20 times more than fixing it during the coding or testing stages. Gartner, meanwhile, estimates that around 30% of all code changes cause malfunctions, and that removing even 50% of flaws cuts configuration management and incident response costs by 75%.
The first step in building more secure applications is communicating the business imperatives down to the developers themselves. Although security is a top concern for CIOs, it is rarely at the forefront of developers' minds; they often view it as a problem for admin and operations staff. Security requirements, formulated in consultation with the business, must be clearly stipulated in the development specification, and the resulting code audited at several points thereafter.
Of course, security of any kind always involves a trade-off in terms of cost, time and even functionality. But the impact of all these can be diminished with the proper processes in place. Secure code is often more tightly written, making it cheaper and easier to maintain. And the opportunity to reuse security functions and controls means they will eventually pay for themselves. "Once you've solved [a security problem] for your most critical applications, you then have a lot of resources that are much easier to apply across a broader range of apps," says Randy Heffner, an analyst at IT strategy advisor, Forrester Research. "It might cost 'x' for one, but a third of that to apply it to another app that couldn't have justified 'x' by itself."
He adds that costs can also be kept under control by employing people savvy enough to understand the risks posed by each piece of code, and to balance the resources devoted to securing it against the potential impact on the business and the cost of development.
That points to the most resource-hungry element of secure applications development: reviewing the code for errors throughout the process. Doing this manually can be very laborious, so it is not surprising that the market for code and vulnerability scanning tools and services is growing by as much as 50% per year, says Heffner. Vendors are entering this niche from a variety of sectors: security specialist Kavado and Watchfire both offer vulnerability scanners along with web application firewalls, though their broader portfolios differ widely; developer tools vendors Compuware and Borland have added security testing elements to their development environments; and Parasoft, a vendor of automated error prevention tools, has made security a central part of its toolset.
The speed and accuracy of such scanning and testing tools is vital when the team is under pressure to bring a product to market or apply a key update to an existing application. They are targeted at two levels. Source code scanners keep costs under control by reducing the need to involve specialist security personnel. Developers can run the tests themselves and so learn from their mistakes without having to be retrained as security experts. Vulnerability scanners are aimed at a more supervisory level, testing for common vulnerabilities such as 'buffer overflows' or 'SQL injections' that leave an application open to attack (see box, 'Top 10 vulnerabilities'). Users of vulnerability scanners can then see the range of fixes required and quantify the security risks for the business before the application goes live.
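SQL injection, one of the common vulnerabilities such scanners flag, makes the value of early detection concrete: the flawed and the safe versions of a database lookup differ by a few characters. The sketch below uses an in-memory SQLite database; the table and data are invented for illustration.

```python
import sqlite3

# Illustrative only: how a scanner's 'SQL injection' finding arises,
# and the parameterised fix. Table and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_insecure(name):
    # Vulnerable: concatenating user input into the query means an
    # input like  x' OR '1'='1  rewrites the WHERE clause entirely.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

def lookup(name):
    # Fixed: with a placeholder, the driver treats the whole input
    # as a data value, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
```

A source code scanner would flag the string concatenation in `lookup_insecure` before the code ever runs; a vulnerability scanner would find the same flaw from the outside by sending the malicious input against the live application.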
Another useful class of tools in this area is the 'code obfuscator'. Running source code through such a tool makes it difficult, if not impossible, for hackers to read: programmer comments are stripped out, constants are encoded in deliberately illegible ways, and identifiers are renamed to nonsense names that convey no information. So even if a web server is compromised, because something like CUST_ACCT now reads as x.fg, hackers cannot make sense of what they find.
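In miniature, the transformation looks something like the toy sketch below (real obfuscators are far more thorough; the `obfuscate` function, sample source and naming scheme here are invented for illustration). Note that the rename table is retained, which is what allows the process to be reversed for legitimate use.

```python
import re

# Toy illustration of what a code obfuscator does: strip comments and
# map meaningful identifiers to meaningless ones, keeping the mapping
# so the transformation can be reversed. Everything here is hypothetical.

SOURCE = """
# customer account lookup
CUST_ACCT = load("accounts")
print(CUST_ACCT)
"""

def obfuscate(src, names):
    src = re.sub(r"#.*", "", src)                    # strip programmer comments
    mapping = {n: "x%d" % i for i, n in enumerate(names)}
    for name, alias in mapping.items():
        src = re.sub(r"\b%s\b" % name, alias, src)   # rename identifiers
    return src, mapping                              # mapping enables restoration
```

After this pass, an attacker reading the deployed source sees only `x0`, with no comment or name hinting that it holds customer account data.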
Related to this suppression of comments is the good programming practice of not revealing exploitable data about applications in error messages. Of course, the source can be restored to its normal form for legitimate use by the same tools.
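The error-message discipline amounts to a simple pattern: log the detail where only operations staff can see it, and return a generic message to the user. A minimal sketch, with invented function names:

```python
import logging

# Sketch of the practice described above: keep exploitable detail in the
# server-side log, give the client nothing to mine. Names are illustrative.
log = logging.getLogger("app")

def handle_request(process):
    try:
        return process()
    except Exception as exc:
        # Full detail (stack trace, database error text) goes to the
        # internal log only...
        log.exception("request failed: %s", exc)
        # ...while the client sees a deliberately uninformative message.
        return "An internal error occurred. Please try again later."
```

The alternative – echoing a raw database error or stack trace back to the browser – hands an attacker the table names, file paths and software versions they need to plan the next probe.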
People, process, policy
But, as ever, people and processes are fundamental, with even tools vendors acknowledging that manual source code reviews are necessary. Errors can occur not just because of coding mistakes but because of misinterpretations of requirements or unforeseen circumstances in the application's use, says Duncan Hine of QinetiQ, a UK technology consultancy company formerly part of the Ministry of Defence. "Software complexity has gone beyond anybody's reasonable expectation to test exhaustively," he argues.
Learning from mistakes is critical. Dr Adam Kolawa, founder and CEO of Parasoft, says that organisations must report and remember their errors, and adjust processes accordingly. "Every time you find an error in the code, you see if you can modify practices to prevent this kind of error," says Kolawa. "Then you know how to improve and what to measure."
Parasoft advocates the Deming principle of continuous improvement – 'plan, do, study, act' – incorporating the general cycle into its automated error prevention (AEP) methodology. "Software development is like a production line," he says. "AEP is how you control the quality of the product." He is critical of organisations who fail to build an infrastructure around development and of developers who see software as a unique, almost artistic endeavour rather than a scientific process. "Everything starts with the security policy," he says. "If you define what is important you can log the problems and prevent them."
A related management technique is the 'V model', which connects required input to actual output by linking a testing activity to each development activity. For instance, drawing up a system's specification is paired with drawing up the acceptance criteria and procedure. Bringing preparation for the final acceptance test forward in this way makes the planning of testing activities far more reliable. "If you write the functional specifications you should also be writing the user acceptance test at the same time," says Dave Martin, principal UK security consultant of LogicaCMG, where the V model has been used extensively.
It is clear that many companies are not taking advantage of such tools and techniques. Martin recalls a time working for a European banking client, where Logica was called in to validate the strength of its encryption algorithms. But he first looked into the careers of the internal developers and found three were convicted hackers. "Nobody had thought to run background checks," he says. "It doesn't matter if the encryption was strong. Who knows if they were building in backdoors?"
In another near miss, tests on a live application at a big-name company by security consultancy IRM revealed that a simple URL manipulation flaw exposed users' national insurance numbers and personal details. Cahoot-style headlines were narrowly avoided by quick remedial work on an application that had barely been tested before deployment. Halifax and Powergen can attest to the cost of that kind of error. A couple of years back, Halifax had to suspend its ShareXpress share-dealing service after customers found they were able to access other people's accounts, theoretically allowing them to draw on each other's funds to execute trades. And Powergen had to ask 5,000 of its customers to cut up their credit cards, paying each of them £50 in compensation, after it was found that a little rudimentary tweaking of the company's web site HTML provided access to their credit card details.
Given such high-profile failings (and only a handful are actually made public each year), secure application development can no longer be an afterthought. Just as the car industry accepted it had to build security right into the body of the product, so must development teams.