Security from scratch

Microsoft is not the only company to suffer software security flaws, despite a stream of news stories suggesting otherwise. Other software, whether off-the-shelf packages, open source programs or bespoke applications, is just as prone to security errors.

In fact, Cambridge University professor and security expert Ross Anderson suggests that all software will have more or less the same number of flaws in it – what is important is the rate at which bug fixes are produced and applied.

But what if programmers could be trained to stop these flaws from appearing in the first place?

Microsoft, aware that its tarnished security image is turning away customers and that patches do not prevent damage, only limit it, is trying to do just that. In 2002, it invested $200 million in a ‘Trustworthy Computing’ initiative. The aim is to teach its 17,000 programmers to program more securely and, therefore, guard against security flaws from the outset.

Mike Kass, product manager in the developer platform and evangelism group at Microsoft, argues that investment in training is crucial. “Security cannot be an after-thought,” he says. “It must factor in each stage of the application development lifecycle – from the design blueprint, through coding and testing, all the way to deployment.”
The most common ways to break a program

Buffer overflow: When programs accept data, they commonly put it into a buffer for later use. If the program does not check that the buffer is large enough for the data, the data can ‘overflow’ the buffer, spilling into adjacent memory and overwriting other values, such as return addresses. Carefully crafted input can then end up being executed by the computer.
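A minimal C sketch of the problem (the function names are hypothetical), alongside a bounded alternative:

#include <stdio.h>
#include <string.h>

/* Illustrative sketch only; greet() and greet_safely() are hypothetical.
 * greet() copies untrusted input into a 16-byte buffer with no length
 * check, so input longer than 15 characters overflows 'name' into
 * adjacent memory. greet_safely() bounds the copy instead. */
void greet(const char *input)
{
    char name[16];
    strcpy(name, input);                        /* no bounds check: classic overflow */
    printf("Hello, %s\n", name);
}

void greet_safely(const char *input)
{
    char name[16];
    snprintf(name, sizeof(name), "%s", input);  /* truncates rather than overflowing */
    printf("Hello, %s\n", name);
}

int main(void)
{
    greet_safely("a deliberately long string that would have overflowed greet()");
    return 0;
}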

Race conditions: This is where shared resources, such as files or variables, are not locked or checked properly, so they can be accessed or changed in the gap between a program checking a resource and using it. An attacker who wins the race can then access files to which they would not normally have access.
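A minimal C sketch of a check-then-use race on a file, again with hypothetical function names, and a safer version:

#include <fcntl.h>
#include <unistd.h>

/* Illustrative sketch only. write_report_racy() checks the file and then
 * opens it, so an attacker can swap the file for a symbolic link to
 * something else in the gap between the two calls. write_report_safer()
 * skips the separate check and refuses to follow links or reuse an
 * existing file. */
int write_report_racy(const char *path)
{
    if (access(path, W_OK) != 0)                /* check ...                          */
        return -1;
    int fd = open(path, O_WRONLY | O_TRUNC);    /* ... then use: the gap is the race  */
    if (fd < 0)
        return -1;
    if (write(fd, "report\n", 7) != 7) {
        close(fd);
        return -1;
    }
    return close(fd);
}

int write_report_safer(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL | O_NOFOLLOW, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, "report\n", 7) != 7) {
        close(fd);
        return -1;
    }
    return close(fd);
}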

Setuid/setgid programs: These programs run with the privileges of a user other than the one running them. If they suffer from buffer overflows or race conditions, an attacker can gain privileges they would not normally have.
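A minimal C sketch of one common defence – assuming a setuid-root program that drops its extra privileges before handling untrusted input:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative sketch only. A setuid-root program should do its
 * privileged work first, then permanently drop back to the real
 * (invoking) user before touching any untrusted input. The group is
 * dropped before the user, and both calls are checked. */
int main(void)
{
    /* ... privileged work would go here ... */

    if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
        perror("failed to drop privileges");
        exit(EXIT_FAILURE);
    }

    /* From this point on, a buffer overflow or race condition yields
     * only the invoking user's privileges, not root's. */
    printf("now running as uid %d\n", (int)getuid());
    return 0;
}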

 

 


That approach can also deliver considerable paybacks. According to research from IBM’s System Sciences Institute, for example, fixing security vulnerabilities after deployment costs between seven and 15 times more than fixing them during development.

Not only that, many large businesses are now making third-party suppliers of all sizes liable for any damage or loss that security flaws cause. Development contracts, for example, may stipulate that third-party developers will be liable for any security breach connected to their software, including the cost of cleaning up affected systems and the working time lost.

Modern languages

Much of the blame for insecure coding can be attributed to modern programming languages, many of which have inherent security flaws.

According to John Viega of Cigital, developer of the free code-scanning tool ITS4, languages such as C and C++ make it difficult to write secure code because they rely heavily on code libraries, so programmers can inadvertently add vulnerabilities to their code.

Many programmers are not “security aware”, says Viega, and do not think about the security ramifications of using flawed library code until later. They may not know the details of a flaw, may not know how to overcome it, or may simply hope that nobody will be able to exploit it.

But even teaching programmers to use ‘secure function calls’ – where calling one function automatically triggers a second, security-focused function – may not stop them from making mistakes, says Scott Blake, vice president of information security at BindView Corporation.
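As an illustration – assuming the ‘secure function calls’ in question are bounded replacements for C routines such as strcpy() – the sketch below shows how even the safer call can be misused; copy_title() is a hypothetical example:

#include <string.h>

/* Illustrative sketch only; copy_title() is hypothetical. strncpy() is
 * the bounded replacement for strcpy(), but it does not add a
 * terminating '\0' when the source fills the buffer, so the first call
 * below can still leave an unterminated string for later code to read
 * past. The programmer must remember the extra step. */
void copy_title(char *dest, size_t dest_size, const char *src)
{
    strncpy(dest, src, dest_size);          /* mistake: dest may be unterminated  */

    strncpy(dest, src, dest_size - 1);      /* correct usage ...                  */
    dest[dest_size - 1] = '\0';             /* ... still needs manual termination */
}

The safer routine removes one class of mistake but leaves another for the programmer to remember – exactly the kind of error that training alone may not catch.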

Karl Keller, president of development company IS Power, also believes that training is just the beginning. “Security programming is a mindset,” he says. “It may start with a week or two of training, but it will require constant reinforcement. And managers must learn that programmers need to take the time to architect, design and test their code.”

Others believe that relying on human ability to spot flaws in millions of lines of source code is asking the impossible. Freely available tools, such as ITS4 and Flawfinder, can scan code for common security flaws, while commercial tools, such as Sanctum’s AppScan DE, can integrate with Microsoft’s Visual Studio development environment so developers can constantly test their code for flaws as they write. Microsoft itself now uses AppScan, together with a trained group of code checkers, as its main process for secure development.

With IT decision-makers placing ever greater emphasis on security, developers will need to provide stronger proof that their products are secure.

Customers are going to ensure secure programming is the norm, not the exception, even from Microsoft.
