McAfee researchers recently tricked a Tesla into speeding while the car's intelligent cruise control feature was engaged. This news signals, yet again, that completely safe, fully autonomous cars have still not arrived, and it suggests that they face new types of vulnerabilities.
Over the course of 18 months, the researchers, whose report was published today, explored how they could get a Tesla to misread a speed limit by messing with the vehicle's ability to see. To make that happen, the researchers placed visual distractions, like stickers and tape, that could trick the car's camera system into misreading a 35-mile-per-hour speed limit sign.
While the researchers successfully spoofed the camera's reading in several different ways, they found that just a 2-inch piece of black electrical tape across the middle of the "3" on a 35 MPH speed limit sign could cause the system to read it as an 85 MPH sign. In a live test with a 2016 Model S 70 using an EyeQ3 camera from Mobileye, they found that, when Tesla's Traffic-Aware Cruise Control (TACC) was activated, the vehicle's system would attempt to determine the current speed limit with help from the camera.
That's when those visual distractions, that small piece of black tape in one case, could cause the car to misread the speed limit and accelerate toward 85 MPH. (The researchers note that they applied the brakes before the car reached that speed and that no one was hurt during testing.)
"This system is completely proprietary (i.e. Black Box), we are unable to specify exactly why the order of operations is essential," Steve Povolny, head of McAfee Advanced Threat Research, told Recode in an email. He also cautioned that the real-world implications of this research are "simplistic to recreate but very unlikely to cause real harm given a driver is behind the wheel at all times and will likely intervene." Povolny added that cybercriminals have yet to publicly attempt to hack self-driving cars, although plenty of people are worried about the possibility.
Still, the research demonstrates how self-driving cars, or cars with some autonomous abilities, can fall short. And it's not the first time researchers have tricked a car like this. Just last April, similar stickers were used to get a Tesla to switch lanes improperly.
Tesla didn't respond to a request for comment, but a spokesperson from Mobileye argued that the stickers and tape used by McAfee could confuse the human eye, too, and therefore didn't qualify as an adversarial attack.
"Traffic sign fonts are determined by regulators, and so advanced driver assistance systems (ADAS) are primarily focused on other more challenging use cases, and this system in particular was designed to support human drivers, not autonomous driving," said the spokesperson. "Autonomous vehicle technology will not rely on sensing alone, but will also be supported by various other technologies and data, such as crowdsourced mapping, to ensure the reliability of the information received from the camera sensor and offer more robust redundancies and safety."
The researchers also said that they studied a 2020 Tesla vehicle with a newer version of the Mobileye camera and did not observe the same problem, though they noted that testing was very limited. The study says that only Teslas produced from 2014 to 2016 and equipped with the EyeQ3 model camera showed the vulnerability. The researchers also noted that neither Tesla nor Mobileye had expressed any current plans to address the vulnerability in their existing hardware.
But this vulnerability isn't just about Tesla. It's about the challenges raised by self-driving car technology and the growing industry that aims to make roads safer for all of us, but that also requires strict testing and regulation. After all, time has shown that teaching a computer to drive is not as easy as teaching a human.
As Future Perfect's Kelsey Piper has explained:
Following a list of rules of the road isn't enough to drive as well as a human does, because we do things like make eye contact with others to confirm who has the right of way, react to weather conditions, and otherwise make judgment calls that are difficult to encode in hard-and-fast rules.
Such a judgment call might be spotting a weird-looking speed limit sign, or noticing if the car suddenly accelerated to more than double the speed limit. As Povolny told Recode, the flaw analyzed by McAfee could be just one of many issues a self-driving car encounters in both the digital and physical worlds, ranging from "classic software flaws, to networking issues, configuration bugs, hardware vulnerabilities, [and] machine learning weaknesses."
So that signals a long road ahead for self-driving cars. After all, the Teslas involved in the McAfee study still require a human to be in the car and alert, though as several Autopilot accidents have shown, plenty of Tesla drivers overestimate the technology. Let's hope that when fully autonomous vehicles are finally on the highways, they won't be so easily distracted.
