Secure from the ground up

The UK car manufacturing industry in the 1980s provides an uncomfortable parallel for today's software developers. At the time, security was not an integral part of car design, and car thieves and joy-riders were able to break into and 'hot-wire' almost any vehicle in a matter of minutes. Manufacturers, happy for the extra business in replacing stolen cars or damaged parts, kept silent. It was only after intense pressure from governments and consumer groups that manufacturers agreed to introduce first steering wheel locks, then alarms and, finally, engine immobilisers as standard.

Best practice: Advice from the experts

Phil Robinson, chief technology officer at security consultancy IRM, on the virtues of simplicity: "The more complex you make an app, the more security issues you could introduce. Ensure library routines are used, rather than rewriting session management, input filtering and encryption. Why reinvent the wheel?"
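
Robinson's point is easy to see in miniature. A sketch in Python, assuming the standard library's secrets module: a vetted routine replaces a home-grown session token generator.

    import secrets

    # Library routine: a cryptographically strong, URL-safe session token.
    def new_session_token() -> str:
        return secrets.token_urlsafe(32)

    # The hand-rolled alternative it replaces is exactly the wheel not to
    # reinvent: for example str(random.random()), which is seeded from the
    # clock and easily guessed.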

Guy Lidbetter, head of SI, Atos Origin, on project planning: "Define your endgame at the beginning. Plan to the context; plan with resources in mind; plan early activities in more detail than later ones; make any assumptions explicit; and of course prioritise. Always do two independent estimates using two estimating methods and estimators."

John Pescatore, from research analysts Gartner, on developers' priorities: "Software developers should focus on reducing configuration errors, making default configurations more secure, and reducing attack surfaces."

Dr Adam Kolawa, CEO of Parasoft, on how to ensure security policy is consistently applied: "Every damn input has to be validated."

Mark Quirk, head of development and platform group, Microsoft: "To rank vulnerabilities, use the STRIDE threat model – spoofing, tampering, repudiation, information disclosure, denial of service and elevating privileges – and match this to the risk using the DREAD model – damage potential, reproducibility, exploitability, affected users, discoverability."
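
Neither model dictates any particular arithmetic, but DREAD ratings are commonly averaged into a single priority score. A toy illustration in Python (the 1-to-10 scales and the example scores are assumptions):

    # Hypothetical DREAD scoring: rate each factor from 1 to 10, then average.
    def dread_score(damage, reproducibility, exploitability,
                    affected_users, discoverability):
        return (damage + reproducibility + exploitability +
                affected_users + discoverability) / 5

    # A URL-manipulation flaw exposing customer accounts might rate highly
    # on every axis, so it gets fixed first.
    print(dread_score(8, 10, 9, 9, 9))  # 9.0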

Dave Martin, LogicaCMG, on why less is more: "People will be thrown at a late project and only make it later. Keep teams as small as possible – work out specific sections with well-defined boundaries."

Arthur Barnes, principal consultant, Diagonal Security: "Steer developers away from the idea that you write code that progresses a user down a fixed path – you can't assume the user will be well-behaved and do what you expect. You have to assume malicious intent."

John Harrison, Borland: "Avoid the waterfall approach; develop in a more iterative way."

Recommended reading includes: Writing Secure Code by Michael Howard and David LeBlanc, and Secure Coding: Principles and Practices by Mark Graff and Kenneth van Wyk.

Much corporate software is still stuck in those pre-wheel-lock days. Although packaged software developers have started to make security an aspect of product design over the last couple of years, the vast majority of organisations that develop software in-house do not build security into their development processes from the ground up.

So while most have spent the last five years locking down their IT infrastructure with firewalls and filters, virtual private networks and intrusion prevention software, "they've not thought about what's sitting on top of that – the apps," says David Grant, product marketing director of vulnerability testing vendor Watchfire.

Although there are secure versions – even military-grade secure versions – of databases, web servers, operating systems and other key pieces of systems software, organisations are still left exposed by the structure of their applications, especially web-based applications, where the front end is presented to the outside world. Written without adequate security considerations, these applications leave holes open that can expose the organisation to risks such as data theft, identity theft and denial of service attacks.

To put the level of exposure in perspective, a review of code at 2,000 companies by Watchfire found 98% had flaws in their web applications. Cahoot, a UK-based online bank, was the most recent victim of the widespread failure to develop secure applications.

In November 2004 a Cahoot customer discovered a flaw that allowed site visitors to view other customers' accounts by simply altering the URL in the browser's address bar. Cahoot's site had to be taken down for 10 hours while the problem was fixed. If other companies are to avoid similar breaches of privacy legislation and damage to the reputation of their core services, experts say they need to build security into their application development processes from the very start.
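
The missing control in such cases is an ownership check on every request. A minimal sketch in Python (the handler shape, its parameters and the fetch_account call are illustrative assumptions, not Cahoot's code):

    # Never trust the account number in the URL: check it belongs to the
    # authenticated session before returning anything.
    def get_account(session_user_id, requested_account_id, db):
        account = db.fetch_account(requested_account_id)  # assumed data-access call
        if account is None or account["owner_id"] != session_user_id:
            raise PermissionError("account not found")    # same response either way
        return account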

Damage limitation

The cost and practical difficulty of patching flaws after an application has been deployed is by itself enough to build a business case for more secure coding. A 2002 study by the US National Institute of Standards and Technology found that removing a flaw after a system is operational costs three to 20 times more than fixing it during the coding or testing stages. Gartner, meanwhile, estimates that around 30% of all code changes cause malfunctions, and that removing even 50% of flaws cuts configuration management and incident response costs by 75%.

The first step in building more secure applications is communicating the business imperatives down to the developers themselves. Although security is a top concern for CIOs, it is rarely at the forefront of the minds of developers, who often view it as a problem for admin and operations staff. Security requirements, formulated in consultation with the business, must be clearly stipulated in the development specification brief, and the resulting code must then be audited at several points.

Of course, security of any kind always involves a trade-off in terms of cost, time and even functionality. But the impact of all these can be diminished with the proper processes in place. Secure code is often more tightly written, making it cheaper and easier to maintain. And the opportunity to reuse security functions and controls means they will eventually pay for themselves. "Once you've solved [a security problem] for your most critical applications, you then have a lot of resources that are much easier to apply across a broader range of apps," says Randy Heffner, an analyst at IT strategy advisor, Forrester Research. "It might cost 'x' for one, but a third of that to apply it to another app that couldn't have justified 'x' by itself."

He adds that costs can also be kept under control by employing people who are savvy enough to understand the risks posed by each piece of code, and to balance the resources devoted to securing it against both the potential impact on the business and the cost of development.

That points to the most resource-hungry element of secure applications development: reviewing the code for errors throughout the process. Doing this manually can be very laborious, so it is not surprising that the market for code and vulnerability scanning tools and services is growing by as much as 50% per year, says Heffner. Vendors are entering this niche from a variety of sectors: security specialists Kavado and Watchfire both offer vulnerability scanners alongside web application firewalls, though their broader portfolios differ widely; developer tools vendors Compuware and Borland have added security testing elements to their development environments; and Parasoft, a vendor of automated error prevention tools, has made security a central part of its toolset.

The speed and accuracy of such scanning and testing tools is vital when the team is under pressure to bring a product to market or apply a key update to an existing application. They are targeted at two levels. Source code scanners keep costs under control by reducing the need to involve specialist security personnel. Developers can run the tests themselves and so learn from their mistakes without having to be retrained as security experts. Vulnerability scanners are aimed at a more supervisory level, testing for common vulnerabilities such as 'buffer overflows' or 'SQL injections' that leave an application open to attack (see box, 'Top 10 vulnerabilities'). Users of vulnerability scanners can then see the range of fixes required and quantify the security risks for the business before the application goes live.
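
A toy sketch in Python suggests what a source code scanner looks for; real products use full parsing and data-flow analysis rather than the crude, illustrative patterns below.

    import re
    import sys

    # Crude patterns that often signal trouble in web code (illustrative only).
    RISKY = [
        (re.compile(r"execute\(.*\+"), "SQL built by string concatenation"),
        (re.compile(r"execute\(.*%"), "SQL built by string formatting"),
        (re.compile(r"\beval\("), "eval() on external data is rarely safe"),
    ]

    def scan(path):
        with open(path, encoding="utf-8") as source:
            for lineno, line in enumerate(source, 1):
                for pattern, message in RISKY:
                    if pattern.search(line):
                        print(f"{path}:{lineno}: {message}")

    if __name__ == "__main__":
        scan(sys.argv[1])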

Another useful technology in this area is the 'code obfuscator'. Running a piece of source code through such a tool makes it difficult, if not impossible, for hackers to read: programmer comments are stripped out, constants are encoded in inconveniently illegible ways, and identifiers are renamed to nonsense names that convey no information. So even if a web server is compromised, because something like CUST_ACCT now reads as x.fg, hackers cannot make sense of it.
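
A minimal sketch of the renaming idea, using Python's tokenize module on Python source (a toy: real obfuscators track scope, skip built-ins and imports, and handle every construct in the target language):

    import io
    import keyword
    import tokenize

    def obfuscate(source, rename):
        """Strip comments and map chosen identifiers to meaningless names."""
        out = []
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if tok.type == tokenize.COMMENT:
                continue                              # drop programmer comments
            text = tok.string
            if tok.type == tokenize.NAME and not keyword.iskeyword(text):
                text = rename.get(text, text)         # rename only mapped names
            out.append((tok.type, text))
        return tokenize.untokenize(out)

    code = "CUST_ACCT = load_account(user_id)  # customer account\n"
    print(obfuscate(code, {"CUST_ACCT": "xfg", "load_account": "q1", "user_id": "q2"}))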

The source can, of course, be restored to its normal form for legitimate use by the same tools. Related to this suppression of comments is the good programming practice of not revealing exploitable data about applications in error messages.

People, process, policy

But, as ever, people and processes are fundamental, with even tools vendors acknowledging that manual source code reviews are necessary. Errors can occur not just because of coding mistakes but because of misinterpretations of requirements or unforeseen circumstances in the application's use, says Duncan Hine of QinetiQ, a UK technology consultancy company formerly part of the Ministry of Defence. "Software complexity has gone beyond anybody's reasonable expectation to test exhaustively," he argues.

Top 10 most critical web application vulnerabilities

The Open Web Application Security Project (OWASP) is a group of volunteers who produce free tools and documentation to promote more secure development practices.

The organisation's top 10 web application vulnerabilities provides a 'minimum standard' for security to test against before deployment. Among other adopters, the US Department of Defense uses the list as part of its security accreditation process, and credit card giant Visa requires all its merchants to scan their custom-built code for these flaws.

1. Unvalidated input

Information from web requests should be validated before use in order to prevent hackers using flaws to attack back-end components.
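
A minimal allowlist check of the kind this implies, sketched in Python (the field and its rule are illustrative):

    import re

    # Accept only what the field can legitimately contain; reject everything else.
    ACCOUNT_ID = re.compile(r"[0-9]{8}")   # say, an eight-digit account number

    def validate_account_id(raw):
        if not ACCOUNT_ID.fullmatch(raw):
            raise ValueError("invalid account id")
        return raw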

2. Broken access control

Weak enforcement of restrictions on what users can and cannot do gives attackers access to unauthorised files and functions.

3. Broken authentication and session management

Poor protection of account credentials and session tokens can allow "cookie poisoning" and identity theft.
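
One common defence is to sign the cookie value with a server-side secret so that tampering is detectable. A sketch using Python's standard hmac module (the key handling is deliberately simplified):

    import hashlib
    import hmac

    SECRET = b"server-side key, never sent to the client"  # illustrative only

    def sign_cookie(value):
        mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
        return f"{value}.{mac}"

    def verify_cookie(cookie):
        value, _, mac = cookie.rpartition(".")
        expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            raise ValueError("cookie has been tampered with")
        return value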

4. Cross site scripting (XSS) flaws

The web application can be used as a mechanism to attack an end user's browser, leaving users vulnerable to spoofed content and other risks.
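
The core defence is to encode user-supplied data before it reaches the page. A minimal sketch with Python's html module:

    import html

    def greeting(user_supplied_name):
        # html.escape turns <script> into &lt;script&gt;, so the browser
        # renders the input as text rather than executing it.
        return f"<p>Hello, {html.escape(user_supplied_name)}</p>"

    print(greeting('<script>document.location="http://evil.example"</script>'))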

5. Buffer overflows

Unvalidated input overruns a fixed-size memory buffer, crashing the application or letting an attacker hijack its processes.

6. Injection flaws

Malicious commands entered into simple entry fields in the application front end are executed by the underlying database or operating system, compromising that deeper layer.
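
The standard remedy is the parameterised query, which keeps data out of the command structure. A sketch using Python's built-in sqlite3 module:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "' OR '1'='1"

    # Vulnerable: the input would become part of the SQL command itself.
    #   conn.execute("SELECT secret FROM users WHERE name = '" + attacker_input + "'")

    # Safe: the driver passes the value as data, never as SQL.
    rows = conn.execute("SELECT secret FROM users WHERE name = ?",
                        (attacker_input,)).fetchall()
    print(rows)  # prints [] because the injection string matches no name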

7. Improper error handling

As well as denying service, an error could give away system information that could then be used by an attacker to compromise security.
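
In practice this means logging the detail privately and showing the user only a generic message. A sketch in Python (the handler shape is illustrative):

    import logging

    logger = logging.getLogger("app")

    def handle_request(request_id, work):
        try:
            return work()
        except Exception:
            # Full detail (stack trace, paths, queries) goes to the private log.
            logger.exception("request %s failed", request_id)
            # The user sees nothing an attacker could exploit.
            return "Something went wrong. Please quote reference " + request_id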

8. Insecure storage

The cryptographic functions often used to protect information and credentials are difficult to code and integrate properly, weakening their effectiveness.
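
For the common case of stored passwords, the advice is to lean on a vetted primitive rather than hand-rolled code. A sketch using PBKDF2 from Python's standard library (the iteration count is illustrative):

    import hashlib
    import hmac
    import os

    ITERATIONS = 100_000  # illustrative; tune to available hardware

    def hash_password(password):
        salt = os.urandom(16)  # unique salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def check_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison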

9. Denial of service

An attack consumes web application resources to such a high degree that others cannot use it or it crashes.
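
Application-level throttling is one partial mitigation, though no substitute for network-level defences. A toy per-client limiter in Python (the thresholds are illustrative):

    import time
    from collections import defaultdict

    WINDOW = 60.0  # seconds
    LIMIT = 100    # requests per client per window

    _hits = defaultdict(list)

    def allow(client_ip):
        now = time.monotonic()
        recent = [t for t in _hits[client_ip] if now - t < WINDOW]
        _hits[client_ip] = recent
        if len(recent) >= LIMIT:
            return False  # over budget: reject or queue the request
        recent.append(now)
        return True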

10. Insecure configuration management

Strong server configuration is critical to a secure web application – as is the integrity of all its supporting infrastructure, such as the database or network.

Learning from mistakes is critical. Dr Adam Kolawa, founder and CEO of Parasoft, says that organisations must report and remember their errors, and adjust processes accordingly. "Every time you find an error in the code, you see if you can modify practices to prevent this kind of error," says Kolawa. "Then you know how to improve and what to measure."

Parasoft advocates the Deming principle of continuous improvement – 'plan, do, study, act' – incorporating the general cycle into its automated error prevention (AEP) methodology. "Software development is like a production line," he says. "AEP is how you control the quality of the product." He is critical of organisations that fail to build an infrastructure around development, and of developers who see software as a unique, almost artistic endeavour rather than a scientific process. "Everything starts with the security policy," he says. "If you define what is important you can log the problems and prevent them."

A related management technique is the 'V model', which links a testing activity to each development activity, connecting required inputs to actual outputs. For instance, drawing up a system's specification is linked to drawing up the acceptance criteria and procedures. Moving preparation for the final acceptance test forward in time allows testing activities to be planned reliably. "If you write the functional specifications you should also be writing the user acceptance test at the same time," says Dave Martin, principal UK security consultant at LogicaCMG, where the V model has been used extensively.

It is clear that many companies are not taking advantage of such tools and techniques. Martin recalls working for a European banking client that had called Logica in to validate the strength of its encryption algorithms. Before doing so, he looked into the backgrounds of the internal developers and found that three were convicted hackers. "Nobody had thought to run background checks," he says. "It doesn't matter if the encryption was strong. Who knows if they were building in backdoors?"

In another near miss, tests on a live application at a big-name company by security consultancy IRM revealed that a simple URL manipulation flaw exposed users' national insurance numbers and personal details. Cahoot-style headlines were narrowly avoided by quick remedial work on an application that had barely been tested before deployment. Halifax and Powergen know the cost of that kind of error. A couple of years back, Halifax had to suspend its ShareXpress share-dealing service after customers found they could access other people's accounts, theoretically allowing them to draw on each other's funds to execute trades. And Powergen had to ask 5,000 of its customers to cut up their credit cards, paying each of them £50 in compensation, after it was found that a little rudimentary tweaking of the company's website HTML provided access to their credit card details.

Given such high-profile failings (and only a handful are actually made public each year) secure application development can no longer be an afterthought. And just as the car industry accepted it had to build security right into the body of the product, so must development teams.
