A month ago, towards the end of August 2021, the National Highway Traffic Safety Administration (NHTSA) launched an investigation into Tesla’s Autopilot system after the feature was linked to 11 accidents, resulting in 17 injuries and one death. Now a new study, conducted by the Massachusetts Institute of Technology (MIT), has confirmed just how unsafe Elon Musk’s infamous autopilot feature actually is.
Titled “A model for naturalistic glance behavior around Tesla Autopilot disengagements”, the study backs up the idea that the electric vehicle company’s “Full Self-Driving” (FSD) system is in fact—surprise, surprise—not as safe as it claims. After following Tesla Model S and X owners during their daily routines for periods of a year or more throughout the greater Boston area, MIT researchers found that, more often than not, drivers become inattentive when using partially automated driving systems. Note here that I went from calling the autopilot a Full Self-Driving system—which is the term Tesla uses to describe it and therefore implies it is fully autonomous—to then describing it as a partially automated driving system, also known as an advanced driver assistance system (ADAS), which is what it truly is.
“Visual behavior patterns change before and after [Autopilot] disengagement,” the study reads. “Before disengagement, drivers looked less on road and focused more on non-driving related areas compared to after the transition to manual driving. The higher proportion of off-road glances before disengagement to manual driving were not compensated by longer glances ahead.” To be completely fair, it does make sense that drivers would feel less inclined to be attentive when they think their car’s autopilot is fully in control. Only thing is, it isn’t.
Meanwhile, by the end of this week, Tesla will roll out the newest version of its FSD beta software, version 10.0.1 in this case, on public roads—completely ignoring the ongoing federal investigation into the safety of its system. Billionaire tings, go figure.
Musk has also clarified that not everyone who has paid for the FSD software will be able to access the beta version, which promises more automated driving functions. First things first, Tesla will use telemetry data to capture personal driving metrics over a seven-day period in order to ensure drivers remain attentive enough. “The data might also be used to implement a new safety rating page that tracks the owner’s vehicle, which is linked to their insurance,” added TechCrunch.
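Neither Musk’s announcement nor the TechCrunch report spells out how those seven days of telemetry get turned into a verdict on attentiveness, so the Python below is a purely conceptual sketch: hypothetical event names and penalty weights rolled up over a trailing window, not Tesla’s actual scoring formula.

```python
from datetime import datetime, timedelta

# Hypothetical penalty weights -- illustrative stand-ins, not Tesla's
# documented telemetry fields or scoring formula.
PENALTIES = {
    "hard_braking": 1.0,
    "forward_collision_warning": 2.0,
    "forced_disengagement": 5.0,
}

def rolling_safety_score(events, now, window_days=7):
    """Aggregate penalized driving events over a trailing window into a 0-100 score."""
    cutoff = now - timedelta(days=window_days)
    penalty = sum(PENALTIES.get(name, 0.0) for ts, name in events if ts >= cutoff)
    return max(0.0, 100.0 - penalty)  # more penalty points, lower score

# Two recorded events inside the trailing seven-day window.
events = [
    (datetime(2021, 9, 20, 8, 15), "hard_braking"),
    (datetime(2021, 9, 22, 17, 40), "forced_disengagement"),
]
print(rolling_safety_score(events, now=datetime(2021, 9, 24)))  # 94.0
```

Whatever the real formula looks like, the design choice is telling: a trailing-window score like this pins the recent driving record squarely on the driver.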
In other words, Musk is aware of the risk the current autopilot system represents, and he’s working hard on improving it, or at least on making sure he’s not going to be the one to blame if more Tesla-related accidents happen. How do you say your autopilot is not an autopilot without clearly saying it—and therefore risking hurting your brand? You release a newer version of it that can easily blame drivers for their carelessness, duh.
“The researchers found this type of behavior may be the result of misunderstanding what the [autopilot] feature can do and what its limitations are, which is reinforced when it performs well. Drivers whose tasks are automated for them may naturally become bored after attempting to sustain visual and physical alertness, which researchers say only creates further inattentiveness,” continued TechCrunch.
My opinion on Musk and Tesla aside, the point of the MIT study is not to shame Tesla, but rather to advocate for driver attention management systems that can give drivers feedback in real time or adapt automation functionality to suit a driver’s level of attention. Currently, Tesla’s autopilot system doesn’t monitor driver attention via eye or head-tracking—two things the researchers deem necessary.
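To make that recommendation concrete, here is a minimal sketch of what glance-based attention management could look like, assuming a hypothetical eye/head-tracking feed and made-up thresholds; neither the MIT paper nor any carmaker prescribes these exact values.

```python
# Hypothetical thresholds: a production driver-monitoring system would tune
# these against validated eye/head-tracking data.
WARN_AFTER_S = 2.0   # seconds of continuous off-road glance before a warning
LIMIT_AFTER_S = 5.0  # seconds before automation is scaled back

def manage_attention(glance_stream):
    """Escalate feedback based on how long the driver looks away from the road.

    glance_stream yields (timestamp_seconds, on_road) samples from a
    hypothetical eye/head-tracking camera.
    """
    off_road_since = None
    for ts, on_road in glance_stream:
        if on_road:
            off_road_since = None         # driver looked back: reset the clock
            continue
        if off_road_since is None:
            off_road_since = ts
        away = ts - off_road_since
        if away >= LIMIT_AFTER_S:
            yield ts, "limit_automation"  # e.g. slow down, request a takeover
        elif away >= WARN_AFTER_S:
            yield ts, "warn_driver"       # e.g. chime, dashboard alert

# Simulated 1 Hz samples: the driver glances away from the road between t=2s and t=8s.
samples = [(t, t < 2 or t > 8) for t in range(12)]
for event in manage_attention(samples):
    print(event)
```

The escalation from warning the driver to limiting automation is the study’s “adapt automation functionality” idea expressed in its simplest form: the system’s behavior tracks the driver’s measured attention rather than assuming it.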
The technology in question—driver monitoring informed by a model of glance behavior—already exists, with automobile manufacturers like Mercedes-Benz and Ford reportedly already working on implementing it. Will Tesla follow suit, or will Musk’s ‘only child’ energy rub off on the company?