Is the UK self-driving itself towards a security nightmare?
The security implications behind the UK’s ambition to be a world leader in driverless cars
With self-drive pods being tested in Milton Keynes and Coventry and driverless cars ruled as okay for testing on UK roads earlier this year, it’s clear the British government is going full throttle in its quest to position the UK as the global leader in driverless cars.
Yet, as this technology edges ever closer to reality for Brits, there are still many questions that must be answered before the driverless experience can be securely implemented.
Whilst the government commission preoccupies itself with reviewing the Highway Code and debating the ethical questions around responsibility in the event of a collision, there are still important technical concerns that must be fully addressed before driverless cars can get the green light in Britain.
Cyber attacks and software vulnerabilities are on the rise; the number of incidents has risen 66% year on year since 2009. And the engineers and IT specialists developing these cars must tackle the weighty decision of whether they’re ready to put people’s lives in this technology’s hands.
The complex network of different technologies is what makes securing driverless cars so difficult. The massive number of sensors, processors and cameras are all potential attack vectors.
Whilst many of these technologies already exist in new cars, they don’t currently pose a significant threat to the car’s operation as a whole, because multiple units control separate systems. A driverless car’s central point of control, however, will introduce a grave risk of compromise.
The risk doesn’t arise from the fact that the car is driverless. The risk is true of all connected cars, and lies in the software that runs them and the network connections they will have to external data sources, such as cloud-based applications or to other cars.
Just as PCs were once fairly safe before they were networked (except for any viruses transferred via physical floppy disks), connecting to a global network will introduce similar cyber security risks to cars.
The biggest dangers posed to cars are from software vulnerabilities in the network connections that cars are increasingly making – and this threat isn’t unique to driverless cars. Many connected cars already use software to control the door locks, brakes and engine. And the risk comes from attackers controlling those software-driven functions via vulnerable mobile apps or web interfaces.
We’ve already witnessed how a software vulnerability can be used to compromise elements of the software being used in connected cars. In January researchers from the Allgemeiner Deutscher Automobil-Club (ADAC), a German motoring association, discovered a vulnerability in BMW’s Connected Drive system, which allowed researchers to imitate the BMW servers and send remote unlocking instructions to the vehicles.
Whilst the vulnerability in this software has now largely been patched, the incident highlights the significant and innumerable issues that car manufacturers will face in the coming years.
Thankfully, in this case, only the locking system was affected and no cars have reportedly been stolen using this method. But these vulnerabilities have the potential to cause disastrous results when targeting the operating system within driverless cars.
For example, driverless cars are being built with the capability to exchange data with cloud-based applications for GPS mapping, or with other cars to share real-time information about traffic patterns. If attackers were to inject malicious data into these communication channels, they may be able to manipulate the control software in the cars. The result would be a “man in the middle” attack, much like the classic coffee-shop Wi-Fi attack.
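One common defence against this kind of tampering is to authenticate every message before trusting it. The sketch below illustrates the idea with a symmetric HMAC check; the key name, message format and `verify_traffic_update` function are hypothetical, and a real vehicle deployment would use per-vehicle keys and asymmetric signatures rather than one shared secret.

```python
import hashlib
import hmac

# Hypothetical key shared between the car and the traffic service;
# illustrative only -- not how a production system would manage keys.
SHARED_KEY = b"example-shared-secret"

def verify_traffic_update(payload: bytes, received_mac: str) -> bool:
    """Reject any traffic update whose MAC doesn't verify, so tampering
    by a man-in-the-middle on the channel is detected."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, received_mac)
```

With a check like this in place, an attacker who modifies the advised speed in transit produces a message whose MAC no longer verifies, and the car simply discards it.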
And the issue is bigger than hackers maliciously targeting individual cars. An infected car could be used to send malicious data on to other cars: an attacker might compromise the mapping or traffic data source (much as a watering-hole attack compromises a website today) to send malware or malicious data to any client connecting to that source.
It is essential that all software be hardened in the face of malicious data, much like web browser makers have built security models and sandboxing technology that allow a browser to communicate with a potentially hostile website while keeping the user’s system safe.
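In practice, hardening starts with treating every inbound field as hostile and validating it before it can influence the control software. The fragment below is a minimal sketch of that idea; the message shape and the `parse_speed_advisory` function are invented for illustration.

```python
def parse_speed_advisory(msg: dict) -> float:
    """Treat every field arriving from the network as hostile: check the
    type and range of the advised speed before it can reach the software
    that actuates the car."""
    speed = msg.get("advised_speed_kph")
    if isinstance(speed, bool) or not isinstance(speed, (int, float)):
        raise ValueError("rejected: advised_speed_kph is not a number")
    if not 0 <= speed <= 130:
        raise ValueError("rejected: advised_speed_kph out of plausible range")
    return float(speed)
```

A malformed or absurd value – a string, a negative number, a 900 km/h “advisory” – is rejected at the boundary instead of being passed to whatever code adjusts the throttle.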
There also needs to be clear compartmentalisation and segmentation between the various networks and systems in the vehicle, such as driving, safety, and information/entertainment systems. It is also essential that the software running inside the car can be quickly and securely updated when the inevitable vulnerability becomes publicly known.
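For those updates to be safe to install, the car must be able to verify that what it downloaded is what the manufacturer published. The sketch below checks a downloaded blob against a digest delivered over an authenticated channel; the `is_trusted_update` function is hypothetical, and a simplified stand-in for full code signing with asymmetric keys.

```python
import hashlib
import hmac

def is_trusted_update(blob: bytes, expected_sha256: str) -> bool:
    """Accept a downloaded update only if its SHA-256 digest matches the
    one published over an authenticated channel -- a simplified stand-in
    for real code signing."""
    actual = hashlib.sha256(blob).hexdigest()
    return hmac.compare_digest(actual, expected_sha256)
```

An attacker who tampers with the update in transit changes its digest, so the car refuses to install it and keeps running the last known-good version.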
With the innumerable grave and dangerous situations that could arise from a compromised car, manufacturers have a responsibility to work closely with the computer security industry to ensure that all software is properly tested and vulnerabilities exposed. Physical and cyber safety has never been so entwined on such a massive scale.
The British government has undertaken a significant task in pursuing this ambition to be a world leader in driverless cars. But political figureheads must look beyond the attractive reputation of being a technological trailblazer in this field, and ensure they don’t compromise their responsibility to their citizens in this pursuit by approving devices which aren’t secure, and therefore aren’t ready for consumption.
The government has a duty to ensure that all cars are not only safe to drive, but that all connected components have been tested to ensure that malicious data cannot infiltrate the car’s network. Application security testing must play an essential role in securing all software and devices, if we are to ensure that driverless cars don’t become a hacker’s remote-controlled toy.
Sourced from Chris Wysopal, co-founder, CTO and CISO, Veracode