When software giant Oracle released its 11i Applications suite in the spring of 2000, early adopters had little idea what was about to hit them. Quite simply, the software was so bug-ridden that, in the following months, Oracle had to rush out patch after patch in a bid to make good the shortcomings of the original release.
But customers were quickly overwhelmed: Oracle was releasing patches at a rate of five or more a week – faster than IT support teams could implement them. Most organisations affected simply did not have the staff to keep up with such a blizzard of software updates.
It took the best part of a year and more than 5,000 patches, according to analysts at Forrester Research who kept count, before Oracle got on top of the bugs that had blighted the launch of 11i. But the challenge of managing software patches – particularly security patches – continues to haunt many major organisations.
This was reflected in the October 2002 release by the US Federal Bureau of Investigation (FBI) of its ‘hit list’ of the most dangerous software security holes. Sixteen out of the top 20 were flaws for which patches had been written months earlier – in some cases, years earlier.
Yet still a vast number of operating systems, applications and web servers remain unpatched and easily exploited by malicious hackers.
Indeed, both the Code Red and Nimda worms that wreaked havoc with Microsoft’s Internet Information Services (IIS) web server in the summer of 2001 took advantage of flaws for which patches had been available for months.
“We brought out the patch about a month before Code Red hit. We then sat down and worked with those customers that were hit. About four or five months later, 50% of those customers were hit with Nimda, which exploited the same vulnerabilities,” says Stuart Okin, chief security officer of Microsoft UK. Despite Microsoft’s warnings, those organisations had still not got round to implementing the patches.
So why do so many organisations fail to implement patches – even ones that have been labelled as ‘critical’ to security?
Part of the problem is the frequently lackadaisical approach of software vendors themselves. This has made many users wary of implementing patches to systems that they feel are already running smoothly.
In some respects, major software vendors have already taken steps to improve software quality. Microsoft executives, for example, claim the company has tried to stamp out the sloppy programming practices that, in the past, were used as ‘quick and dirty’ fixes by application developers – and which frequently broke when Microsoft issued new patches.
“Years ago, there were some undocumented bugs in APIs [application programming interfaces] that left buffers ‘dirty’. We fixed that particular problem, but then discovered that some applications had been using that as a primitive way of passing data between them, so the applications stopped functioning,” says Okin.
In addition to showering customers with patches – something both Microsoft and Oracle, among others, have been accused of – the patches themselves are often poorly documented and may not even work as advertised. Even worse, there is a risk that the patch will ‘break’ applications it was intended to fix.
“I can attest from the Oracle Applications world that patches are known to not fix the problem, to break other things, to have problems themselves or to have no effect whatsoever,” says Susan Dorn, CEO of patch management software specialist Ringmaster Software.
Furthermore, patching is regarded by IT staff as a chore, a dead end in terms of career advancement that can involve a lot of work at awkward hours for little reward. In addition, those responsible for implementing software patches are only too aware of the high risk of being blamed if the patch brings down mission-critical systems when it is rolled out.
Nevertheless, it is a vitally important task. The first step in the patching process involves extensive testing to find out how the patch might affect the set-up of the operating system or application. “If you apply a bad patch to a production system, it may bring it down. To apply a patch successfully in a controlled environment requires testing, significant testing,” warns Dorn.
Indeed, testing is the most time-consuming element of patch management. If a patch has been well-documented, the user will at least have the benefit of knowing where in the application to look for potential problems.
After all, not only are enterprise applications so vast that users need to know in some detail what elements a patch will change, but the application will almost certainly have been customised to some degree, further complicating the process.
“Does the patch change something that they have customised? What were those changes? Did the patch change a report? Did it change a screen? Did it change database objects? And if it changes database objects, what else is affected?” asks Dorn of Ringmaster Software.
But if the software vendor has been less than thorough in its patch documentation – or equally likely, the organisation has not properly documented changes it has made to its applications – then the testing process will be that much more complicated and time consuming.
Inadequate documentation was one of many complaints users levelled at Oracle during the 11i debacle.
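The impact check Dorn describes can be pictured as a simple comparison between the objects a patch claims to change and the objects an organisation has customised. A minimal sketch, assuming both lists have been documented (the object names here are purely illustrative):

```python
# Hypothetical sketch: flag collisions between a patch's change list
# and locally customised objects. All names and data are illustrative.

def patch_conflicts(patch_changes, customised_objects):
    """Return the objects that the patch modifies AND the site has customised."""
    return sorted(set(patch_changes) & set(customised_objects))

# What the patch documentation says it changes...
patch_changes = ["AP_INVOICES table", "invoice_report.rdf", "entry_screen.fmb"]
# ...and what the site's own change log records as customised.
customised = ["invoice_report.rdf", "gl_balances view"]

print(patch_conflicts(patch_changes, customised))  # ['invoice_report.rdf']
```

Anything the check flags is where testing effort should be concentrated first – which is exactly why the process collapses when either the vendor’s change list or the site’s customisation log is missing.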
Fortunately, patch management at the desktop PC level is a simpler task. Although some risks still remain, it is one area that can be more easily automated through the use of software distribution tools from vendors such as St Bernard Software, Tivoli and Novadigm.
Yet even at the PC level, problems frequently arise. For example, when Microsoft issued patches for its Windows XP operating system in early 2002, the updates disrupted network and video card drivers.
For users taking advantage of Windows XP’s auto-update feature, which communicates with the Microsoft web site for updates on a regular basis and downloads any such patches in the background, the first they would have known about the defective patch would have been when their computers failed to boot up. Many may not even have known what had happened.
By contrast, software distribution vendors can offer validation processes, in which patches are pre-tested against customers’ particular PC configurations before roll-out.
“We look at the Microsoft web site every hour, on the hour, and we parse the hot fixes through a quality assurance process,” says David Nicholds, a technical executive at St Bernard Software. “We take each hot fix and we look at the dependencies, the DLLs [software code libraries shared between applications], registry, the relations and so on.”
Once the software has been validated, the patch is put into the database for distribution to subscribers. Users can also query the database to find out what patches they have implemented and make sure that they are implemented in the correct order, adds Nicholds.
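Applying patches “in the correct order”, as Nicholds puts it, is at heart a dependency-ordering problem. A minimal sketch using Python’s standard-library topological sorter – the patch identifiers and dependencies are invented for illustration:

```python
# Hypothetical sketch: order a set of approved patches so that each one
# is applied only after every patch it depends on. Patch IDs are invented.
from graphlib import TopologicalSorter  # Python 3.9+

# Dependency map: patch -> set of patches that must be applied first.
deps = {
    "KB-103": {"KB-101"},
    "KB-102": {"KB-101"},
    "KB-104": {"KB-102", "KB-103"},
    "KB-101": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # KB-101 always comes first; KB-104 always comes last
```

A real distribution tool would also record which patches each subscriber has already applied, so that only the missing tail of the sequence is pushed out.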
Such an automated approach can be employed to a certain degree in server farms, where hundreds, maybe thousands of commodity servers are all running the same group of applications in a cluster, but users still need to be careful before clicking the ‘start’ button, says Andy Crosby, European product marketing manager at applications performance management software specialist Mercury Interactive.
This is because, while many of the elements will be uniform, subtle differences in server configurations may cause problems. For example, the servers will have been bought at different times and are likely to have different microprocessors, BIOS chips and amounts of memory.
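Before pushing a patch across a cluster, those subtle differences can be surfaced by comparing each machine’s inventory against a baseline build. A hypothetical sketch, assuming such inventories exist (the field names and values are illustrative):

```python
# Hypothetical sketch: detect configuration drift between a baseline
# server build and one machine in the cluster. Inventory data is invented.

def drift(baseline, server):
    """Return fields where the server differs from the baseline,
    mapped to (baseline value, server value)."""
    return {k: (baseline.get(k), server.get(k))
            for k in baseline.keys() | server.keys()
            if baseline.get(k) != server.get(k)}

baseline = {"cpu": "P4-2.0", "bios": "v1.2", "ram_mb": 1024}
server42 = {"cpu": "P4-2.0", "bios": "v1.0", "ram_mb": 512}

print(drift(baseline, server42))  # flags the bios and ram_mb differences
```

Servers that show no drift can safely share one test pass; the outliers are the ones worth testing individually before the roll-out button is pressed.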
Another part of the problem is that many organisations lack robust management procedures for handling patches. As a result, patching is done on an ad hoc, rather than systematic, basis and, in many organisations, nobody is in charge of evaluating the importance of new patches – even security patches – let alone implementing them.
Okin of Microsoft recommends that organisations centralise the task of PC and server management, including patch management, and try to keep software configurations as consistent as possible. That ought to help cut down on the amount of testing required prior to implementation and enable a more automated roll-out of patches after they have been tested and approved.
“You basically need a set of defined policies and a minimum set of staff doing changes,” says Okin. In addition, because those employees might be geographically dispersed, regular communication between them is a must and they need to play an active part in the patch process, contributing to ways in which it can be done better, he adds.
Internally at Microsoft, the implementation of patches is left to end users, who are notified of new patches on a regular basis and expected to download and apply them themselves. Less technology-savvy companies will want to automate the PC patching process, he suggests.
In many companies, even keeping track of which patches have been applied to which systems, and when, is chaotic. To keep track of the patching process, many better-run organisations simply log the changes in spreadsheets.
However, at 3am when a patch is applied, filling in the spreadsheet can be a chore – or is skipped altogether, says Dorn. On top of that, the spreadsheet can easily be neglected or lost, making it harder for an IT manager to know to what extent their software has been patched.
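Even without specialist tools, that spreadsheet can be replaced by a shared, append-only log that is filled in by the patching script itself. A minimal sketch, assuming records are kept as CSV (the file name and field names are invented for illustration):

```python
# Hypothetical sketch: append each patch application to a shared CSV log
# instead of a hand-edited spreadsheet. File and field names are invented.
import csv
import os
from datetime import datetime, timezone

LOG_FILE = "patch_log.csv"
FIELDS = ["timestamp", "host", "patch_id", "applied_by", "result"]

def log_patch(host, patch_id, applied_by, result, path=LOG_FILE):
    """Append one patch record, writing a header row if the log is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "host": host,
            "patch_id": patch_id,
            "applied_by": applied_by,
            "result": result,
        })

log_patch("web01", "KB-101", "jsmith", "success")
```

Because the record is written by the same script that applies the patch, the 3am chore disappears – and the log can later be queried to answer exactly the question the spreadsheet was meant to answer.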
Microsoft offers a free tool called the Microsoft Baseline Security Analyzer (MBSA) to enable users to test their systems in order to see where they are on the patching schedule. However, MBSA is limited to scanning only recent Microsoft operating systems, web servers and databases, limiting its usefulness for many organisations.
But until software developers learn the art of building perfect software, organisations will need to take a closer look at their patching processes – far too many are falling prey to Code Red, Nimda and potentially more serious problems they do not yet know about.