Why your business can’t afford not to patch

Software patching across an IT estate is a bit like taking a car for its annual service: you know you should do it, but it can often lead to unpleasant surprises, exposing unexpected problems that need to be fixed. Not least, the complexity and interdependencies of software mean that fixing one problem may well introduce another, with a knock-on effect across systems and the potential for downtime or loss of service.

Yet with cyber-attacks posing a continuous threat to organisations, the work IT departments do to maintain systems security is increasingly business critical, and it requires constant vigilance across a plethora of network devices and applications to prevent a data breach. The breadth and scale of this battle was highlighted by a recent survey of CIOs, which identified a wide range of potential network weaknesses and common threats.

The CIOs surveyed named the top three common information system vulnerabilities as application security (55%), security awareness (51%) and, perhaps most surprisingly, out-of-date security patches (50%).


Businesses would be forgiven for thinking that everyone in their IT team would understand the importance of immediately updating their operating systems, firmware, and security software with the latest patches. However, when conducting penetration tests, we regularly encounter network components where the latest patches have not been implemented.

Most commonly, the reason is either a problem with automated patch management systems or a business decision that it could not afford the risk of the service disruption patching can sometimes cause.

Yet what organisations overlook is that failing to patch systems quickly leaves the business exposed to known vulnerabilities, and the cost of having one exploited could prove far higher than the disruption to service caused by the patch itself.

GHOSTs in the machine

Three recent examples demonstrate the false economy of this approach. The Heartbleed and Shellshock vulnerabilities made international news headlines due to their severity, ease of exploitation, and the risk they posed to sensitive information. Heartbleed was a vulnerability within OpenSSL, a widely used cryptographic library found within numerous systems and applications.

Attacks against systems vulnerable to Heartbleed allowed the disclosure of a small amount of data held in the system's memory, which was enough to potentially retrieve usernames and passwords, or other sensitive data.
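As a quick illustration of how simple a first check can be, the minimal Python sketch below (an illustrative check, not taken from the original advisories) reads the version of OpenSSL the interpreter is linked against and flags the 1.0.1 to 1.0.1f builds that Heartbleed affected. It only covers the library Python itself uses, and distributions that backport fixes may still report an old version string, so treat the result as indicative rather than definitive.

```python
# Minimal sketch: report the OpenSSL build this Python interpreter is linked
# against and flag the releases affected by Heartbleed (OpenSSL 1.0.1 up to
# and including 1.0.1f; 1.0.1g carried the fix).
import ssl

print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 1.0.1f 6 Jan 2014"

major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
# In this tuple the 'patch' field maps release letters to numbers: a=1 ... f=6, g=7.
if (major, minor, fix) == (1, 0, 1) and patch < 7:
    print("This OpenSSL build falls in the Heartbleed-affected range")
else:
    print("This OpenSSL build is outside the known Heartbleed range")
```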

News of Heartbleed was followed by the disclosure of a severe vulnerability in the Unix shell ‘Bash’. This vulnerability, known as Shellshock, had been present in the Bash shell since 1989 and, if exploited, could allow arbitrary code execution.
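The widely circulated test for the original Shellshock flaw (CVE-2014-6271) simply plants a crafted function definition in the environment and checks whether Bash runs the command appended after it. The sketch below wraps that test in Python; it assumes a local bash binary on the PATH and only covers the first of the related CVEs.

```python
# Minimal sketch of the widely published Shellshock (CVE-2014-6271) probe:
# plant a crafted function definition in the environment and see whether
# bash executes the command appended after it when importing the variable.
import subprocess

env = {"PROBE": "() { :; }; echo SHELLSHOCK-VULNERABLE"}
result = subprocess.run(
    ["bash", "-c", "echo probe-finished"],
    env=env,
    capture_output=True,
    text=True,
)

if "SHELLSHOCK-VULNERABLE" in result.stdout:
    print("bash executed code smuggled in via an environment variable")
else:
    print("bash ignored the injected command (patched for CVE-2014-6271)")
```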

The assault on open source didn't stop there: earlier this year a research team discovered GHOST, a critical vulnerability in the GNU C Library, which is common to many Linux implementations. Among other things, the library is used to convert internet host names into network addresses.

If exploited, GHOST potentially allows an attacker to take remote control of an affected system. And while the risk of Shellshock or GHOST leading to a breach was significantly lower than that of Heartbleed, which affected even the largest corporations, the possible consequences were severe enough to send shockwaves across the industry.
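A reasonable first-pass check here is simply to ask glibc which version it is running; the sketch below does this from Python via ctypes, assuming a Linux host. Versions from 2.2 up to 2.17 were publicly reported as affected, and because distributions often backport fixes without changing the version string, a match should prompt a check of vendor advisories rather than be treated as proof of exposure.

```python
# Minimal sketch, Linux-only: ask glibc for its own version and compare it
# with the range publicly reported for GHOST (glibc 2.2 up to 2.17; 2.18
# already contained the fix). Many distributions backport fixes without
# changing this string, so treat a match as a prompt to check advisories.
import ctypes

libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
version = libc.gnu_get_libc_version().decode()

major, minor = (int(part) for part in version.split(".")[:2])
print(f"glibc reports version {version}")
if (major, minor) < (2, 18):
    print("Within the range reported for GHOST - verify against vendor advisories")
else:
    print("2.18 or later, which already includes the fix")
```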

It’s not just open source

Commercial software is also prone to vulnerabilities: Microsoft recently disclosed MS15-034, a critical vulnerability, through its Microsoft Security Bulletins, along with a patch. This vulnerability was also widespread, affecting any Windows server running Internet Information Services (IIS) and any service that interacted with the HTTP API. Within a matter of hours, denial-of-service exploits were surfacing across the internet, and within two days remote payload execution exploits were on sale on the dark web.
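The fingerprint that circulated at the time relied on an oversized Range header: a 416 response was widely reported as the signature of an unpatched HTTP.sys. The Python sketch below reproduces that probe against a placeholder address; only run it against servers you are authorised to test, and treat the result as indicative rather than conclusive.

```python
# Minimal sketch of the MS15-034 probe that circulated publicly: send an
# oversized Range header and inspect the status code. A 416 response was
# widely reported as the sign of an unpatched server.
# TARGET_HOST is a placeholder (TEST-NET) - only probe systems you are
# authorised to test.
import http.client

TARGET_HOST = "192.0.2.10"

conn = http.client.HTTPConnection(TARGET_HOST, 80, timeout=10)
conn.request("GET", "/", headers={
    "Host": TARGET_HOST,
    "Range": "bytes=0-18446744073709551615",
})
response = conn.getresponse()
print(response.status, response.reason)

if response.status == 416:
    print("Matches the published MS15-034 fingerprint - likely unpatched")
else:
    print("No match for the published fingerprint")
conn.close()
```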

Patching best practice

Failing to patch software leaves organisations exposed to a range of risks that can easily be avoided. Accepted industry best practice is to keep firmware, operating systems, services, and applications up to date with the latest security patches.

Patches should be applied regularly on an agreed schedule, and soon after any newly identified critical vulnerabilities are disclosed. A good patch management system should also keep non-operating system applications and services updated, including third-party software.
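What that looks like in practice varies by platform, but even a simple inventory of pending updates is a useful starting point. The sketch below, assuming a Debian or Ubuntu host, pulls the list of upgradable packages so it can feed a patch schedule or ticketing system; other platforms would need their own equivalent.

```python
# Minimal sketch, assuming a Debian/Ubuntu host: list packages with pending
# updates so they can feed a patch schedule or ticketing system. Other
# platforms need their own equivalent (yum/dnf, Windows Update, etc.).
import subprocess

result = subprocess.run(
    ["apt", "list", "--upgradable"],
    capture_output=True,
    text=True,
    check=True,
)

pending = [line.split("/")[0]
           for line in result.stdout.splitlines()
           if "upgradable" in line]

print(f"{len(pending)} packages have updates waiting")
for name in pending:
    print(f"  - {name}")
```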


There is little to lose and everything to gain from effective patch management. Attackers and researchers are constantly working to expose exploitable vulnerabilities in software, and administrators cannot afford to take anything for granted.

Weaknesses such as GHOST and MS15-034 are not the first wide-reaching vulnerabilities to affect unpatched software, and they will almost certainly not be the last. IT managers and system administrators should adopt best practice in prioritising systems patching, so that when the next vulnerability scare arises they spend as little time as possible exposed to the risk of attack.

Sourced from Toby Scott-Jackson, senior security consultant, SureCloud
