News Insights: New ENISA report finds that autonomous vehicles are “highly vulnerable to a wide range of attacks”

A new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks using beams of light, overwhelming object-detection systems, malicious activity against back-end systems, and adversarial machine learning attacks delivered through training data or staged in the physical world.

News Insights:

Ilya Khivrich, Chief Scientist at Vdoo:

“Resilient and safety-critical systems nowadays must be designed with a potential attacker’s perspective in mind. This problem is especially complicated for systems reliant on machine learning (ML) algorithms, which are trained to behave properly under normal circumstances, and may respond in unexpected ways to engineered manipulation or spoofing of data they receive from the sensors. This is a challenging gap to bridge, and we believe that new tooling will be required to cope with these issues.”
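
As an illustration of the kind of engineered manipulation Khivrich describes, the sketch below applies a fast gradient sign method (FGSM) perturbation to a generic image classifier: a change too small for a human to notice can still flip the model’s prediction. Everything in it is a placeholder assumption (an untrained torchvision ResNet-18, a random frame, a made-up label); it is not drawn from the ENISA report or any real vehicle stack.

```python
# Illustrative only: a minimal FGSM-style perturbation against a generic
# image classifier. The model, input frame, and label are placeholders,
# not components of any real autonomous-driving system.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a perception model
model.eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that increases the
    classifier's loss, so the prediction may change while the pixel-level
    difference stays imperceptible to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

frame = torch.rand(1, 3, 224, 224)   # hypothetical camera frame
label = torch.tensor([0])            # hypothetical "correct" class
adversarial = fgsm_perturb(frame, label)
print(model(adversarial).argmax(dim=1))
```

The perturbation is computed from the model’s own gradients, which is the gap Khivrich points to: a system validated only under normal operating conditions offers little resistance to inputs crafted with knowledge of how it fails.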

Paul Bischoff, privacy advocate and research lead at Comparitech:

“How easy it is to hack cars depends on the vehicle, what computerized systems are in place, and whether those systems are connected to the internet. A car’s entertainment system is often hooked up to the internet and needs to communicate with any number of sources, so it has a wider attack surface. An autonomous vehicle might need to download system updates over the internet, but it only ever needs to connect to one destination: the update server. That makes securing it a bit easier. Then there are systems that are completely sequestered from the internet and would require physical access to hack.

“As with many internet-of-things devices, manufacturers often build features first and treat security as an afterthought. I think automobile security will continue to improve as time goes on.

“The ENISA report specifically discusses how AI-driven autonomous driving systems can be tricked into not recognizing, or misrecognizing, traffic, road conditions, or signs. Autonomous vehicles use painted lines on roads to stay in lane, for example. An attacker could paint false lines on a road or vandalize traffic signs to interfere with the AI. We might see some malicious actors try to hack vehicles and disrupt automated driving systems, and those actions could cause traffic accidents.

“However, we should also consider what incentive a hacker would have to hack a car. There’s no clear or direct financial motive, so I don’t think such attacks will become widespread.”
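
The sign-tampering scenario Bischoff mentions is usually studied as an adversarial patch attack, in which only a small region of the input is altered, much like a sticker on a sign. The toy sketch below optimizes such a patch against the same placeholder setup as the earlier example (untrained classifier, random stand-in image, hypothetical target class); it is a sketch of the technique, not a working exploit against any real perception system.

```python
# Toy "adversarial patch" sketch: only a small square of the image is
# optimized, loosely mirroring a sticker placed on a traffic sign.
# Everything here is a placeholder (untrained model, random image,
# made-up class index).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)
model.eval()

def train_patch(image, target_class, patch_size=32, steps=50, lr=0.1):
    """Optimize a patch_size x patch_size square (pasted into the top-left
    corner of the image) so the classifier is pushed toward `target_class`."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        patched = image.clone()
        patched[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
        loss = F.cross_entropy(model(patched), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)

sign_image = torch.rand(1, 3, 224, 224)            # stand-in for a photo of a sign
sticker = train_patch(sign_image, target_class=5)  # hypothetical target class
```

Notably, this style of attack needs no network access to the vehicle at all, only control over what its cameras see, which distinguishes it from the internet-facing attack surfaces Bischoff describes earlier.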

Chris Hauk, consumer privacy champion at Pixel Privacy:

“Any electronic device with an internet connection or a USB port is a hacking target. I have read reports that a modern luxury vehicle could contain up to 100 million lines of software code; a fully autonomous self-driving vehicle is likely to contain many more. Much of this code is likely unoptimized and may not have been stringently tested for security holes. This makes a self-driving vehicle an attractive target for the bad actors of the world, and the popularity and hype surrounding self-driving vehicles only add to their appeal for hackers searching for another income stream or an opportunity for mischief.

“Hackers could target autonomous cars with malware that could ‘park’ the car, leaving it inactive until a ransom is paid. Theoretically, a hacker could take remote control of a vehicle, stranding the passengers in the middle of a highway or intersection and causing gridlock. Or, like a chapter from a bad science fiction murder mystery, they could remotely drive the vehicle off a cliff.”