Information Age (IA): Microsoft security – isn’t that a bit of an oxymoron?
Scott Charney (SC): In the old days it might have been, but this wasn’t unique to Microsoft. For someone who has been in the security space for 12 years, most of which was spent at the US Department of Justice prosecuting cyber crimes, I can tell you that I went to vendors, large enterprises, as well as government agencies in the early 1990s and said, ‘We need more security’, and they said, ‘The markets aren’t demanding it, our customers don’t want it’, or they would say, ‘The IT revolution is all about functionality and driving down costs’.
But since 2001, things have changed because of the attacks on the World Trade Centre and the 'Nimda' and 'Code Red' worms. Now the markets are demanding security, and Microsoft has adopted its Trustworthy Computing Initiative. We're focussing on security, and so that reputation is going to change.
IA: Other operating systems, such as Linux and Unix, have traditionally had more security built into them. How do Microsoft’s software products compare to these?
SC: There is a perception that Linux and other operating systems are more secure. In fact, if you look at the security bulletins put out by vendors, or go to independent organisations like CERT [the Computer Emergency Response Team] and look at their critical advisories, you see that Windows is as secure as, or more secure than, other operating systems.
The fact of the matter is that we have huge market share, and as a result of that, most hackers target Windows. So we have a much larger responsibility, but it’s actually a fallacy that other operating systems are free from defects.
IA: Surely Microsoft operating systems were, up until the launch of Windows Server 2003, shipped with everything turned on by default rather than the other way round?
SC: That’s absolutely true – the goal was to give the user the best ‘out-of-the-box’ experience. Ultimately, what we learnt was that most people don’t use every feature in an operating system, and the features they weren’t using, they weren’t managing or monitoring either. So, starting with IIS version 6 in beta and with Windows Server 2003, we started locking things down by default.
IA: 2003 has seen a number of viruses and worms attack Windows. To what extent is Microsoft responsible?
SC: The responsibility for computer security is definitely a shared one. And in fact, statistics show that the majority of exploits are caused by misconfiguration of systems rather than by vulnerabilities in the code. Having said that, vendors have the biggest share of the responsibility in two respects: first, we have to build secure code – put vulnerable code out there and you’re putting your customers at risk. But secondly, we have to achieve what I call security usability, because security is too hard to manage today.
IA: Can you explain the Trustworthy Computing Initiative and how you plan to change the way that software is developed at Microsoft?
SC: What the Trustworthy Computing Initiative is really about is changing the way that Microsoft makes its code. We now think about everything we are doing in terms of what we call SD3+C – secure by design, secure by default, secure in deployment, and communications. To do that well, we are changing the way we develop code. For example, with Windows 2003, we’ve given 8,500 developers training on writing secure code – because historically they’d been trained to write functional and efficient code [not necessarily secure code]. And then we carried out a security push, where their code gets reviewed: threat modelling is done, penetration testing is done and, increasingly, we’re using automated tools to find bugs.
IA: In spite of the Trustworthy Computing Initiative, Windows 2003 was still hit with the Blaster worm. Why was that?
SC: Blaster was disappointing, but not surprising. Operating systems are very complex – none have zero bugs. Windows 2003 has proven to be more robust than other versions of the operating system. And the proof of this is that we’ve issued a bunch of bulletins that were critical for other versions of Windows, but not for 2003, or not even applicable to 2003, because of Secure By Design or Secure By Default. Having said that, we didn’t get every bug out of Windows 2003. Over time I wouldn’t be surprised if we found other things as well, but we still have to have a very aggressive response process for patching, which I’m confident we do.
IA: How badly was Windows 2003 affected by Blaster?
SC: We know that patch uptake, historically, has been too low. So Blaster did have a huge effect. And of course, part of the problem was with consumers, many of whom are on narrowband, and are less familiar with the need to patch, and didn’t turn on the automatic patching where it was available.
One of the reasons people don’t deploy patches as often as they should is patch quality, and we’re re-designing our testing process to make sure patches are higher quality. Additionally, we’re re-designing the patching process to make it simpler and easier for administrators.
Today, Microsoft has various installer technologies. Ultimately we’ll be down to two – one for the operating system and one for all applications. There will be a consistent interface for all patches. Patches will register with the operating system in a way that lets you quickly see whether the system is patched. Patches will come with an uninstaller, so that if one happens to break a custom application in your environment, you can gracefully back out of the patch. As we do our job, people will be able to patch more quickly.
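The patch model Charney describes – patches registering with the operating system so their presence can be checked quickly, and carrying an uninstaller so they can be backed out – can be sketched in miniature. The `PatchRegistry` class and its method names below are purely illustrative assumptions, not any real Windows interface:

```python
# Illustrative sketch (hypothetical API, not a real Windows interface):
# patches register on install, can be queried quickly, and can be
# rolled back if they break a custom application.

class PatchRegistry:
    def __init__(self):
        self._installed = {}  # patch id -> description

    def install(self, patch_id, description):
        """Register a patch with the 'operating system'."""
        self._installed[patch_id] = description

    def is_patched(self, patch_id):
        """Quickly see whether the system has a given patch."""
        return patch_id in self._installed

    def uninstall(self, patch_id):
        """Gracefully back out of a patch that broke something."""
        self._installed.pop(patch_id, None)

registry = PatchRegistry()
registry.install("MS03-026", "Fix for the RPC flaw exploited by Blaster")
print(registry.is_patched("MS03-026"))  # True
registry.uninstall("MS03-026")
print(registry.is_patched("MS03-026"))  # False
```

The point of the uninstaller in this sketch is the same one Charney makes: an administrator who can cheaply reverse a bad patch is far more willing to apply patches quickly.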
IA: Will the move to managed code environments (.Net and J2EE) help improve the level of security of applications built on these environments?
SC: Absolutely. One of the concerns is that even as we build more secure platforms, we need independent software developers to build secure applications on top of our platform. Moving forward, we’ll have things like the Next-Generation Secure Computing Base, which burns a public/private key pair into a chip set at the time of manufacture and seals it. That will give us strong process isolation and compartmentalised memory, and will allow users to control which executables run on their machine – they can choose to run only code signed by people they trust. They’ll still be able to run anything they want, but people concerned about viruses and worms will have greater control.
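The "run only code from people you trust" model Charney outlines can be illustrated with a deliberately simplified sketch. Here a hash-based allow-list stands in for real public-key signature verification, and all names are assumptions for illustration only:

```python
# Simplified stand-in for signature-based execution control:
# a hash allow-list rather than real public/private-key signatures.
import hashlib

TRUSTED_HASHES = set()

def trust(code: bytes):
    """Mark a piece of code as coming from a trusted source."""
    TRUSTED_HASHES.add(hashlib.sha256(code).hexdigest())

def run_if_trusted(code: bytes):
    """Execute code only if its hash is on the allow-list."""
    digest = hashlib.sha256(code).hexdigest()
    if digest not in TRUSTED_HASHES:
        raise PermissionError("untrusted executable blocked")
    exec(code.decode())

trusted = b"print('hello from trusted code')"
trust(trusted)
run_if_trusted(trusted)           # runs normally
# run_if_trusted(b"print('x')")   # would raise PermissionError
```

A real scheme of the kind described would verify a cryptographic signature against a key sealed in hardware, but the policy decision is the same: unrecognised code is blocked unless the user opts to run it anyway.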
IA: Wouldn’t it save Microsoft a lot of money if its code were more freely available for inspection?
SC: We do have programmes that make our code available for inspection. We have a shared source programme in which we give our code to large enterprise customers and academic institutions, and we have the government security programme.
The real issue is that simply sharing your code doesn’t make it more secure. The number of people who know how to do security really well is still quite limited, and giving out your source code and letting people look at it doesn’t mean they are schooled in security and know how to find security bugs. But we do share our code – under non-disclosure agreements, of course, because our intellectual property is our key asset.