April 12, 2019 – Ahead of its Autonomy Investor Day, Elon Musk sat down with Lex Fridman, an autonomous vehicle researcher from MIT, to chat about Autopilot and the future of Full Self-Driving. It’s a fascinating glimpse of where Autopilot is headed. The full video, highlights and article are below.
Also, be sure to check out the Autonomy Day video where Tesla goes into full detail about current and future Autopilot and Full Self-Driving plans, including its robotaxi service!
- Musk believes autonomous cars are worth 5 to 10 times as much as ones that aren’t.
- Elon estimates that Tesla’s fleet captures 99% of all consumer-driven autonomous miles, far more data than any other manufacturer.
- Musk said because all new Teslas now ship with Full Self-Driving hardware, they may appreciate in value with software updates.
- By next year, Elon believes that having a human driver in addition to Autopilot may be less safe than Autopilot by itself.
- Driver Monitoring Systems will soon become obsolete because Autopilot technology is advancing so rapidly, according to Elon Musk.
Full interview (summary below):
To begin, Lex says that his interview is meant to be completely unbiased and that he does not have any financial interest in Tesla whatsoever. He was asked by Tesla whether he wanted to interview Elon, and he accepted. He notes upfront that while he has great respect for Elon Musk and the work Tesla is doing, there are some points where he disagrees with Elon, in particular whether Driver Monitoring Systems (DMS) are needed and/or beneficial. Lex believes they are useful, whereas Elon believes Autopilot safety is advancing so quickly that they won’t be needed soon (more below).
Why Did Tesla Build Autopilot?
Elon observed there were two big trends in the auto industry: electrification and autonomy. It quickly became clear to Elon that any car that didn’t have autonomy in the future would be about as “useful as a horse” today. If Tesla didn’t participate in the autonomy revolution, they would have an inferior product and be left behind. To Elon, an autonomous car is ‘arguably’ worth 5 to 10 times more than one that isn’t.
When asked why Tesla decided to create the Autopilot display that is shown to drivers on the screen (where the vehicle is shown in the middle along with recognized lanes, cars and other objects), Elon said that it provides a ‘health check’ on how the system perceives the environment around it. It gives the user confidence or warns them if the environment isn’t correctly perceived.
How Does Tesla Choose Where to Focus?
Elon was asked how the team determines how to split efforts between data collection, algorithm updates and hardware.
For data collection, Elon said that Tesla has an enormous advantage over any other consumer car manufacturer. Tesla automatically gathers vast amounts of data from the over 400,000 Tesla vehicles on the road that have the full sensor suite installed (meaning 360-degree cameras, radar and ultrasonics). By comparison, Elon believes there are probably no more than 5,000 competitive consumer vehicles on the road with a comparable sensor suite. That means that, as of now, Tesla has access to 99% of the data (in terms of consumer vehicle driving miles).
On the hardware side, they now have the Full Self-Driving computer (aka Hardware 3), which took over three years to develop. It has over 10X the processing power of the NVIDIA system it is now replacing in Tesla vehicles. Elon said they have not yet found the limits of the FSD Computer and that it can ingest full frame-rate, full-resolution images from all the cameras, something impossible with the NVIDIA system (which required downsampling the images). In addition, they’re currently only testing on one of the two Systems on a Chip (SoCs); there are two for redundancy purposes.
Learning by Failures and Edge Cases
Understanding how and when Autopilot fails is critical in helping train the neural network so it can learn how to improve. As has been well documented, one of the ways that Autopilot learns is when a driver manually takes over while a vehicle is on Autopilot. Tesla examines these cases to determine whether the takeover was for convenience or because there was an issue. If there’s an issue, Tesla feeds that into the system as a negative example.
One interesting use-case Elon mentioned is how vehicles on Autopilot should handle complex intersections. In particular, the vehicle must calculate the optimal path through the intersection. Elon referred to this as the “optimal spline” (in mathematics, spline interpolation is a technique for fitting a smooth path through a set of points). They do this by looking at how vehicles pass through a particular complex intersection. The paths where the vehicle passes through without intervention provide positive reinforcement, while those where the driver intervenes provide negative reinforcement. This is called “imitation learning” in AI / machine learning and is used for Tesla’s “path planning” system.
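The labeling scheme described above can be sketched in a few lines. This is a hypothetical simplification for illustration only, not Tesla’s actual pipeline: each observed drive through an intersection becomes a positive training example if the human never intervened, and a negative one if they did.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    # Sequence of (x, y) waypoints a car actually drove through the intersection.
    waypoints: list
    # True if the human driver took over mid-maneuver.
    driver_intervened: bool

def label_for_imitation(trajectories):
    """Split observed drives into imitation-learning training examples.

    Uninterrupted passes reinforce the planner's path choice (positives);
    drives where the human intervened are fed back as paths to avoid
    (negatives). A hypothetical sketch of the idea, not a real pipeline.
    """
    positives = [t.waypoints for t in trajectories if not t.driver_intervened]
    negatives = [t.waypoints for t in trajectories if t.driver_intervened]
    return positives, negatives
```

In a real system the positive paths would then be fit with a smooth curve (the “optimal spline”) that the planner learns to imitate.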
Elon also believes that, in the future, as the autonomous Autopilot technology becomes good enough, he would treat nearly all human interventions as issues.
Biggest Leaps so Far
When asked what Elon considers to be the biggest leaps forward with Autopilot so far, he believes there are three main areas: Navigate on Autopilot, Traffic Light Detection and Hardware 3 (aka Full Self-Driving Computer).
Navigate on Autopilot is unlike anything else available to consumers today: besides navigating and changing lanes from freeway on-ramp to off-ramp, it can also overtake slow cars, if desired.
Elon is also very excited by traffic light detection. Currently, traffic light detection only warns drivers on Autopilot when they approach a red light (see the traffic light detection video example); however, Elon is using a development version that will actually stop the vehicle at a red light and resume when it turns green.
On the hardware, Elon points out that anyone who buys a Tesla vehicle today (as they now are shipping with the Full Self-Driving computer) is buying a car that improves over time as the software is continually updated and the self-driving system improves, adding new features. In his view, you’re buying a car today that is capable of full self-driving in the future. As he puts it, you’re essentially buying something that appreciates in value over time.
When asked about the software and where it needs to go, Elon said that while highway driving is now very good, the next big focus is city driving and the more complex summon scenarios.
Semi-Autonomous vs. Full Autonomy
The interviewer asked Elon how long drivers will still need to be actively attentive while Autopilot is active. Elon said he believes that in six months, the Autopilot system will be capable enough that a driver no longer needs to keep their hands on the wheel. However, regulatory constraints will prevent Tesla from rolling out the hands-free capability to consumers until Tesla can prove to regulators that Autopilot is safer than a human driver. Proving this will take time, since Tesla must collect enough miles for the incidents-per-mile rate to be statistically significant for regulators. He believes it needs to be at least 200% safer than a human driver.
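A rough sense of why proving this takes so many miles can be sketched with a back-of-the-envelope Poisson calculation. This is purely illustrative and not Tesla’s or any regulator’s actual methodology; the baseline incident rate and the 95% confidence threshold are assumptions for the example.

```python
def miles_to_demonstrate(base_rate_per_mile: float,
                         improvement: float = 2.0,
                         z: float = 1.96) -> float:
    """Rough Poisson estimate of miles needed before an observed incident
    rate (assumed to truly be base_rate / improvement) becomes
    distinguishable from the human baseline at ~95% confidence.

    With true rate r/k over M miles, the expected count is M*r/k with
    standard deviation sqrt(M*r/k). Requiring the upper z-sigma bound of
    the observed rate to fall below the baseline r:
        M*r/k + z*sqrt(M*r/k) < M*r
    solves to M > z**2 * k / (r * (k - 1)**2).
    """
    r, k = base_rate_per_mile, improvement
    return z**2 * k / (r * (k - 1) ** 2)
```

Plugging in a fatal-accident rate on the order of 1.1 per 100 million miles (roughly consistent with the ~40,000 annual US deaths mentioned below) and a 2x improvement gives a figure in the hundreds of millions of miles, which is why a large fleet matters for making the case.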
He also felt that Tesla gets a disproportionate amount of scrutiny vs. other auto manufacturers. Elon points out that there are over 40,000 automotive deaths per year in the US, but if Tesla is involved in even four of those, they produce outsized headlines that regulators over-react to.
Driver Monitoring Systems
Next, the conversation shifted to Driver Monitoring Systems (DMS). These are the systems that determine whether a driver is paying attention to the road, by, for example, checking that the driver has their hands on the wheel (like most systems currently do) or, like the more sophisticated systems from GM or BMW, using cameras to check where the driver is looking.
Elon believes that Autopilot is improving so much and so quickly that Driver Monitoring Systems and related technology will soon become a moot point. He said, “If something is many times safer than a person, then [by] adding a person, the effect on safety is limited, and, in fact, it could be negative.” He believes that by next year, having a person intervene will actually decrease safety.
The interviewer is fond of camera-based monitoring systems and asked whether those are effective. Again, Elon pointed out that such systems won’t be needed if the system drives better than a human.
In the future, Elon thinks we’ll look back and it will seem crazy that we allowed humans to freely drive two-ton machines around open streets with almost no active safety guardrails.
On Hacking Autopilot
Recently, some hackers were able to trick Autopilot using simple techniques, like adding markers to a road. The question was posed whether it’s possible to defend against Autopilot hackers in the future. Elon responded by saying that it’s very difficult to fundamentally trick a neural network, since you would need to reverse engineer the matrix math it uses, which is almost impossible to do.
Simple tricks like the ones the hackers recently demonstrated can quickly and easily be given to the neural network as negative examples, so it no longer falls for those hacks and will ignore them in the future.
Other AI Topics
The interviewer wrapped up by asking Elon a couple general AI questions.
The first question was how soon he thinks general AI (vs. the “narrow” AI used for specific applications like self-driving) could become a threat. Elon believes we are only missing a few key ideas for general intelligence to take off, that it’s coming quickly, and that we’ll need to figure out what to do sooner rather than later.
The second question was whether he thought we will be able to create an AI system that loves and is loved back, like in the movie Her. Musk thinks AI will be very good at convincing you to fall in love with it. From a physics perspective, he said, “If it loves you in a way that you can’t tell whether it’s real or not, then it is real.”