On Tuesday 11 January, 19-year-old David Colombo, a self-described “information technology security specialist and hacker,” wrote on Twitter that he had found flaws in a piece of third-party software used by a relatively small number of Tesla owners, flaws that allowed hackers to remotely control some of the vehicles’ functions.
So, I now have full remote control of over 20 Tesla’s in 10 countries and there seems to be no way to find the owners and report it to them…
— David Colombo (@david_colombo_) January 10, 2022
According to Colombo, the flaws gave him the ability to unlock doors and windows, start the cars without keys and even disable their security systems. He also claimed that he could see if a driver is present in the car, turn on the vehicles’ stereo sound systems and flash their headlights.
In other words, hacking into the third-party software in question gave him the chance to control pretty much whatever he wanted in the affected Tesla cars. Considering that a Massachusetts Institute of Technology (MIT) study had already confirmed that Tesla’s autopilot is unsafe back in September 2021, this potential addition to the infamous cars’ list of dangers came as yet another blow to Elon Musk’s company.
In an interview with Bloomberg, Colombo provided screenshots and other documentation of his research that identified the maker of the software and gave more details on the vulnerabilities it presents. However, he asked that the publication not reveal specifics, because the affected company had not yet released a fix at the time of writing. On Twitter, Colombo added that he could access more than 25 Teslas in at least 13 countries, and that he decided to share the information on the social media platform after failing to contact most of the owners directly.
Nevertheless I now can remotely run commands on 25+ Tesla‘s in 13 countries without the owners knowledge.
— David Colombo (@david_colombo_) January 11, 2022
Regarding what I‘m able to do with these Tesla‘s now.
This includes disabling Sentry Mode, opening the doors/windows and even starting Keyless Driving.
[2/X]
‘So what’s wrong exactly?’ some of you might be wondering. According to what Colombo told Bloomberg, “the problem involves an insecure way the software stores sensitive information that’s needed to link the cars to the program.” In the wrong hands, he explained, that information could be stolen and repurposed by hackers to send malicious commands to the cars. He even showed Bloomberg screenshots of a private conversation he had on Twitter with one of the affected owners, who allowed him to remotely honk his car’s horn.
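To make the risk concrete, here is a minimal sketch of why that kind of leaked credential matters: whoever gets hold of the stored token can authenticate as the owner and fire off vehicle commands from anywhere. The base URL, token, vehicle ID and command name below are illustrative assumptions, not Colombo’s actual method or the affected software’s real interface.

```python
# Illustrative sketch only: shows why a leaked, plaintext-stored API token is dangerous.
# The base URL, vehicle ID and command name are made up for this example; they are
# not the affected software's or Tesla's documented interface.
import requests

API_BASE = "https://owner-api.example.com/api/1"             # hypothetical API
LEAKED_TOKEN = "token-read-from-an-unprotected-config-file"  # the stored secret
VEHICLE_ID = "1234567890"


def send_command(command: str) -> bool:
    """Send a vehicle command using nothing but the stolen bearer token."""
    response = requests.post(
        f"{API_BASE}/vehicles/{VEHICLE_ID}/command/{command}",
        headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
        timeout=10,
    )
    return response.ok


if __name__ == "__main__":
    # Anyone holding the token can do this remotely, which is the core of the risk.
    print("honk_horn accepted:", send_command("honk_horn"))
```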
Since then, Colombo has been in touch with members of Tesla’s security team as well as with the maker of the third-party software. Tesla has a “bug bounty” programme where cybersecurity researchers can report vulnerabilities in the company’s products and, if validated, receive payment.
Addition as of 11. Jan 22:33 (CET)
— David Colombo (@david_colombo_) January 11, 2022
Tesla‘s Security Team just confirmed to me they’re investigating and will get back to me with updates as soon as they have them.
[8/8]
This latest discovery goes to show some of the remaining risks of moving to the so-called ‘Internet of Things’, where everything is connected online—thus becoming potentially vulnerable to hacking threats. “Just don’t connect critical stuff to the internet,” Colombo advised. “It’s very simple. And if you have to, then make sure it is set up securely.”
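As a minimal sketch of what ‘set it up securely’ can mean in practice, assuming the third-party tool is self-hosted, the API token can at least be kept out of plaintext, world-readable config files and loaded from the environment or a secrets manager at runtime; the variable name here is hypothetical.

```python
# Minimal sketch of one common hardening step: keep the API token out of
# world-readable config files and load it from the environment (or a secrets
# manager) at runtime. The variable name CAR_API_TOKEN is hypothetical.
import os


def load_api_token() -> str:
    token = os.environ.get("CAR_API_TOKEN")
    if not token:
        raise RuntimeError("CAR_API_TOKEN is not set; refusing to start without it.")
    return token
```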
Towards the end of August 2021, the National Highway Traffic Safety Administration (NHTSA) launched an investigation into Tesla’s Autopilot system after it was involved in 11 accidents, resulting in 17 injuries and one death. A month later, a new study conducted by MIT confirmed just how unsafe Elon Musk’s infamous autopilot feature actually is.
Titled ‘A model for naturalistic glance behavior around Tesla Autopilot disengagements’, the study backs up the idea that the electric vehicle company’s “Full Self-Driving” (FSD) system is in fact (surprise, surprise) not as safe as it claims. After following Tesla Model S and X owners during their daily routines for periods of a year or more throughout the greater Boston area, MIT researchers found that, more often than not, drivers become inattentive when using partially automated driving systems. Note here that I went from calling the autopilot a Full Self-Driving system, which is the term Tesla uses to describe it and therefore implies it is fully autonomous, to then qualifying it as a partially automated driving system, also known as an advanced driver assist system (ADAS), which is what it truly is.
“Visual behavior patterns change before and after [Autopilot] disengagement,” the study reads. “Before disengagement, drivers looked less on road and focused more on non-driving related areas compared to after the transition to manual driving. The higher proportion of off-road glances before disengagement to manual driving were not compensated by longer glances ahead.” To be completely fair, it does make sense that drivers would feel less inclined to be attentive when they think their car’s autopilot is fully in control. Only thing is, it isn’t.
Meanwhile, by the end of this week, Tesla will roll out the newest version of its autopilot beta software, version 10.0.1, on public roads, completely ignoring the ongoing federal investigation into the safety of its system. Billionaire tings, go figure.
Musk has also clarified that not everyone who has paid for the FSD software will be able to access the beta version, which promises more automated driving functions. First things first, Tesla will use telemetry data to capture personal driving metrics over a 7-day period in order to ensure drivers remain attentive enough. “The data might also be used to implement a new safety rating page that tracks the owner’s vehicle, which is linked to their insurance,” added TechCrunch.
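Tesla has not published how those telemetry metrics are combined, so purely as an illustration of the idea, here is a toy sketch that aggregates a week of hypothetical driving metrics into a single score; the metric names and weights are invented for this example and are not Tesla’s formula.

```python
# Toy illustration of aggregating a 7-day window of driving telemetry into one
# score. The metric names and weights are invented for this example; Tesla has
# not published the formula behind its safety rating.
from dataclasses import dataclass


@dataclass
class DayMetrics:
    hard_braking_events: int
    forced_autopilot_disengagements: int
    miles_driven: float


def weekly_score(days: list[DayMetrics]) -> float:
    """Return a 0-100 score where fewer incidents per mile means a higher score."""
    miles = sum(d.miles_driven for d in days) or 1.0
    incidents = sum(
        d.hard_braking_events + 2 * d.forced_autopilot_disengagements for d in days
    )
    return max(0.0, 100.0 - 100.0 * incidents / miles)


week = [DayMetrics(hard_braking_events=1, forced_autopilot_disengagements=0,
                   miles_driven=35.0) for _ in range(7)]
print(round(weekly_score(week), 1))  # 97.1 for one hard-braking event per day
```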
In other words, Musk is aware of the risk the current autopilot system represents, and he’s working hard on improving it, or at least on making sure he’s not the one to blame if more Tesla-related accidents happen. How do you say your autopilot is not an autopilot without clearly saying it, and therefore risking hurting your brand? You release a newer version of it that can easily blame drivers for their carelessness, duh.
“The researchers found this type of behavior may be the result of misunderstanding what the [autopilot] feature can do and what its limitations are, which is reinforced when it performs well. Drivers whose tasks are automated for them may naturally become bored after attempting to sustain visual and physical alertness, which researchers say only creates further inattentiveness,” continued TechCrunch.
My opinion on Musk and Tesla aside, the point of the MIT study is not to shame Tesla, but rather to advocate for driver attention management systems that can give drivers feedback in real-time or adapt automation functionality to suit a driver’s level of attention. Currently, Tesla’s autopilot system doesn’t monitor driver attention via eye or head-tracking—two things that researchers deem necessary.
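As a rough sketch of what such an attention management system could look like, the snippet below tracks the share of off-road glances over a short rolling window and flags the driver when it stays too high. The window length and threshold are invented for illustration, and this is a simplification rather than the MIT glance model itself.

```python
# Hypothetical sketch of an attention management loop: track the share of
# off-road glances in a short rolling window and escalate feedback when it
# stays too high. Window length and threshold are invented for illustration.
from collections import deque

WINDOW_SIZE = 30          # last 30 glance samples (roughly the last 30 seconds)
OFF_ROAD_THRESHOLD = 0.4  # warn once more than 40% of recent glances are off-road


class AttentionMonitor:
    def __init__(self) -> None:
        self.samples: deque[bool] = deque(maxlen=WINDOW_SIZE)

    def record_glance(self, on_road: bool) -> str:
        self.samples.append(on_road)
        off_road_ratio = 1.0 - sum(self.samples) / len(self.samples)
        # A production system would escalate: visual cue, then a chime, then
        # limiting or handing back the automation.
        return "alert" if off_road_ratio > OFF_ROAD_THRESHOLD else "ok"


monitor = AttentionMonitor()
for glance in [True, True, False, False, False, True, False, False]:
    print(monitor.record_glance(on_road=glance))
```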
The technology in question—which is a model for glance behaviour—already exists, with automobile manufacturers like Mercedes-Benz and Ford allegedly already working on implementing it. Will Tesla follow suit or will Musk’s ‘only child’ energy rub off on the company?