Environmental sensing technology makes self-driving cars possible. Without the ability to sense the immediate area around a car, computer systems would have no way of knowing what speed to set, where to turn, when to change lanes, or when to apply the brakes. The sensing system must detect every possible movement within its range to make split-second, life-saving decisions.
Today, self-driving cars sense the environment using one of three primary mechanisms. Cameras are one of the most popular means since they sense the environment visually, the same way humans do. Radar is also often used in conjunction with cameras as a secondary method to detect large objects. LiDAR, a newer technology in automobiles, uses an array of laser pulses to measure distance. Some companies use a combination of all three sensors. Historically, most autonomous car companies have relied heavily on LiDAR since, until recently, neural networks weren’t powerful enough to handle multiple camera inputs. Tesla is one of the most notable companies that has placed a big bet on cameras, integrating eight of them into each vehicle, along with a powerful neural network computer.
Here we take a look at LiDAR versus cameras for self-driving cars to understand the pros and cons of each for this particular application.
What Is LiDAR?
LiDAR, which stands for Light Detection and Ranging, is similar to traditional radar in that it uses pulses of electromagnetic radiation to detect nearby objects. Where traditional radar uses radio waves, LiDAR uses lasers to see the surrounding environment. A LiDAR unit bounces laser light off of objects at millions of pulses per second and measures how long each pulse takes to return, which tells the car exactly how far away each object is. This system must be fine-tuned and accurate to make instant decisions with faster reaction times than humans.
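To make that measurement concrete, here is a minimal sketch in Python (the pulse timings are made up for illustration, not taken from any real sensor) of the time-of-flight calculation a LiDAR unit performs for every returning pulse: the round-trip travel time multiplied by the speed of light, then halved, gives the range to the object.

```python
# Minimal time-of-flight sketch: range from the round-trip time of a single laser pulse.
# The pulse timings below are illustrative, not taken from any real sensor.

SPEED_OF_LIGHT_M_S = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object; the light covers the path twice (out and back)."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# A pulse that returns after roughly 334 nanoseconds corresponds to an object ~50 m away.
for t in (66.7e-9, 333.6e-9, 667.1e-9):
    print(f"round trip {t * 1e9:6.1f} ns -> range ~{range_from_round_trip(t):6.1f} m")
```

A real unit repeats this calculation millions of times per second across many beams, which is how it builds up a 3D picture of the surroundings.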

LiDAR’s ultra-fast response times give the car’s computer enough time to react to changing conditions on the road. Waymo, a company under the umbrella of Google’s parent company Alphabet, has a proprietary LiDAR system that it makes in-house. This technology was also at the center of the trade-secret lawsuit Waymo filed against Uber in the race to develop the safest and most reliable self-driving cars. The winner of that race will have an advantage over competitors as these vehicles go mainstream in 2025 and beyond.
Velodyne is by far the biggest supplier of the traditional, costly spinning LiDAR units you see on top of cars, but there are dozens of startups trying to make simpler and cheaper LiDAR units. For example, Luminar is a startup that uses a special metal alloy and fires its lasers at a 1550-nanometer wavelength (versus the traditional 905 nm), allowing for longer range and better detection. Ouster, another startup, is trying to use solid-state surface-emitting lasers built into chips, similar to what you might find in a computer mouse, but at much higher power. As more and more LiDAR units are deployed, prices will slowly come down.
Advantages of LiDAR
One of the primary advantages of LiDAR is accuracy and precision. Waymo is so protective of its LiDAR system largely because of that accuracy. The Drive reported that Waymo’s LiDAR is so advanced it can tell which direction pedestrians are facing and predict their movements. Chrysler Pacificas outfitted with Waymo’s LiDAR can see the hand signals bicyclists use and predict which direction the cyclists will turn.
Another advantage of LiDAR is that it gives self-driving cars a three-dimensional image to work with. LiDAR is extremely accurate compared to cameras because the lasers aren’t fooled by shadows, bright sunlight or the oncoming headlights of other cars.
Finally, LiDAR saves computing power. LiDAR can immediately tell the distance to an object and its direction, whereas a camera-based system must first ingest the images and then analyze them to determine the distance and speed of objects, requiring far more computational power.
Very few car manufacturers actually deploy LiDAR in consumer vehicles. Audi has deployed front-facing LiDAR in some vehicles, like the A8 and A6. Volvo has said it will incorporate LiDAR in the 2022 Volvo XC90. Otherwise, LiDAR is still primarily the domain of ride-hailing service vehicles.
Potential Downsides of LiDAR
Historically, cost was one of the major disadvantages of LiDAR. Google’s system originally ran upwards of $75,000. Today, startups have brought the cost of LiDAR units down to below $1,000 in the case of Luminar, and Velodyne even introduced a more limited LiDAR called the Velabit for $100 at CES 2020, but it remains to be seen how many of these will be needed per vehicle. As such, LiDAR can still add a significant cost to consumer vehicles and will likely remain in the realm of autonomous fleet vehicles for a bit longer.
Interference and jamming are other potential issues with LiDAR as these systems roll out more broadly. If a large number of vehicles are all generating laser pulses (photons) at the same time, they could interfere with one another and potentially “blind” the vehicles. Manufacturers will need to develop methods to prevent this interference.
It’s also not clear what so much constant laser activity would do to other electronic and biological systems in the environment. For example, Velodyne claims that LiDAR operating at 1550 nm, as its competitor Luminar uses, could actually damage the human cornea under some circumstances. At CES 2019, one attendee reported that their camera sensor was damaged by a LiDAR unit. There are still many open questions around the environmental safety of LiDAR at scale.
LiDAR also has a limitation in that many systems cannot yet see well through fog, snow and rain. Ford, another leader in the race to develop viable self-driving cars, developed an algorithm that helps its LiDAR system differentiate between individual raindrops and snowflakes. Otherwise, autonomous vehicles would interpret a mass of falling snowflakes as a wall in the middle of the road. Ford proved the concept during testing in Michigan, but its program still has a lot to learn.
Additionally, LiDAR doesn’t provide information that cameras can typically see – like the words on a sign, or the color of a stoplight. Cameras are better suited for this type of information.
Finally, LiDAR systems can be more visually obvious when built into a vehicle, whereas camera systems, as implemented on current Tesla vehicles, are almost invisible. However, LiDAR systems are getting smaller each year.
Cars with LiDAR
Audi dipped its toes into the water early and placed a front-facing LiDAR (the Valeo SCALA unit) into vehicles such as the A8 in such a way that it’s fairly well hidden. However, it’s only a front-facing LiDAR at this point.

Volvo has announced it will also offer Luminar’s LiDAR on its 2024 Volvo EX90 electric vehicle as part of its new “Ride Pilot” (formerly “Highway Pilot”) system, similar to GM’s Super Cruise, which allows semi-autonomous driving on certain designated highways under ideal conditions.

Tesla Not Using LiDAR
Tesla CEO Elon Musk is not a fan of LiDAR in vehicles and was very blunt at Tesla’s Autonomy Day for investors, where he said, “Anyone relying on LiDAR is doomed.” In the video below, Tesla explains why it believes LiDAR is not needed, why cameras can handle depth sensing just as LiDAR does, and why cameras are much better suited to a visual environment like driving:
In addition, he noted that it would be better to use a precision radar at a wavelength of roughly 4 mm:
Despite these disadvantages, most self-driving car companies favor LiDAR over cameras, primarily because they have spent years investing in it and its output is simpler to process than a camera feed. Companies using LiDAR in commercial fleets and robotaxi services can also offset the higher upfront cost of LiDAR and recoup the money downstream by not having a human driver.
Why Cameras?
Cameras in autonomous driving work much the way human vision works and use the same technology found in most digital cameras today.
As Tesla CEO Elon Musk puts it, “The whole road system is meant to be navigated with passive optical, or cameras, and so once you solve cameras or vision, then autonomy is solved. If you don’t solve vision, it’s not solved.” Instead of inventing an entirely new system of seeing, Musk wants his company to create superhuman eyes that have much better acuity than biological eyes.
To solve the problem, Tesla ensures there are enough cameras around the car (eight in all) to see in all directions, so its cars can learn to see better with their onboard camera technology and react more quickly than human drivers.

Musk’s heavy bet on camera technology sounds far-fetched to many, but as he demonstrated during Tesla’s first Autonomy Day, he believes the camera setup is sufficient for self-driving. In fact, he stated that Tesla has re-evaluated its sensor suite many times and felt there was no need to adjust it, although in 2021 Tesla removed radar, betting even further on cameras alone. So far, it’s paying off in the latest version of Autopilot, which has seen significant gains in autonomous features but is still far from consumer-ready. That could be why Tesla is bringing radar back with Hardware 4, rolling out in 2023.
Additionally, Mobileye, which originally supplied the Autopilot technology for Tesla’s AP1 cars (see AP1 vs AP2), is pushing forward with a camera-based system of its own. It has 12 cameras: three in front, two corner cameras facing forward, two corner cameras facing backward, a rear camera, and four additional cameras to help with parking views.
Here’s a Mobileye camera-based demonstration from CES in January 2020:
Finally, an ex-Uber and Waymo engineer claims to be the first person to have driven a fully autonomous car coast to coast using only six cameras around the car, with no LiDAR.
Overall, it appears that cameras are fully capable of handling environmental sensing, at least as well as humans do.
Why Cameras Are So Popular
First, cameras are much less expensive than LiDAR systems, which brings down the cost of self-driving cars, especially for end consumers. They are also easier to incorporate (modern Teslas have eight cameras around the car) since video cameras are already widely available on the market. Tesla can simply buy an off-the-shelf camera and improve it rather than inventing an entirely new technology.
Another advantage is that cameras aren’t blinded by weather conditions such as fog, rain and snow in the way LiDAR can be. Ford’s workaround for poor conditions relies on software enhancements to its LiDAR, but Tesla’s cameras don’t share LiDAR’s physical limitations. In theory, whatever a normal human can navigate, a camera-based system can as well.
Cameras also see the world the same way humans do and can, in theory, read street signs, interpret colors and so on, which LiDAR cannot.
Finally, cameras can easily be incorporated into the design of a car and hidden within its structures, making them more appealing for consumer vehicles.

Cameras Used in Flying Drones
Cameras have also proven impressive in other areas, such as drones. For example, Skydio produces the Skydio 2 drone, which can follow subjects automatically and maneuver around obstacles using only six cameras around the drone plus an NVIDIA Tegra TX2 for computational AI (similar to Tesla’s Full Self-Driving Computer, but less powerful).

The system in the Skydio 2 drone is an impressive demonstration of what can be done in 3D space using only cameras plus AI, visual odometry, and SLAM (Simultaneous Localization and Mapping).
Here’s an example of the data compiled by the camera system, which looks very similar to LiDAR:

Camera Limitations
Cameras are subject to the same issues humans face when lighting conditions change in a way that obscures the subject. Think of situations where strong shadows or bright lights, whether from the sun or oncoming cars, may cause confusion.
Here’s an example of a low-light situation as demonstrated by Luminar at CES 2022 in Las Vegas. A small child pedestrian, occluded by another car, walks out in front of a vehicle, which must identify the pedestrian and react quickly.

Luminar tested this scenario with the reference vehicle shown above (a modified Lexus) and then compared this to other vehicles (such as a Tesla Model Y) under challenging low-light conditions. The Tesla vehicle, in this example, was unable to detect and avoid the dummy pedestrian child:
In addition, cameras are relatively “dumb” sensors in that they only provide raw image data to the system, unlike LiDAR, which provides the exact distance and location of an object. That means camera systems must rely on powerful machine learning (neural network or deep learning) computers to process those images and determine exactly what is where, similar to how the human brain processes the stereo vision from our eyes to determine distance and location.
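As a rough illustration of the geometry those networks have to learn implicitly, here is a minimal sketch in Python of classic stereo triangulation, using hypothetical camera parameters (not Tesla’s): depth falls out of the disparity, i.e. how far a point shifts between two horizontally offset cameras. In practice, modern camera systems typically estimate depth with neural networks trained on learned visual cues rather than applying this formula directly.

```python
# Minimal stereo-triangulation sketch: depth = focal_length * baseline / disparity.
# Camera parameters and disparities are hypothetical, not from any real vehicle.

FOCAL_LENGTH_PX = 1000.0  # assumed focal length, in pixels
BASELINE_M = 0.3          # assumed spacing between the two cameras, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Depth (meters) of a point whose image shifts by `disparity_px` between the cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# Smaller disparity means the point is farther away.
for disparity in (30.0, 10.0, 3.0):
    print(f"disparity {disparity:4.1f} px -> depth ~{depth_from_disparity(disparity):5.1f} m")
```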
Until very recently, neural network and machine learning systems just weren’t powerful enough to handle the large amounts of camera data and process everything in time to make driving decisions. However, neural networks have become much more sophisticated and can now potentially handle real-world inputs better than LiDAR.
Tesla has come up with a forward-looking solution to handle the computing power problem in their cars. They’ve incorporated the computing system in such a way that it is easily swappable for future hardware. In fact, in early 2019 Tesla upgraded from their previous NVIDIA solution to their own silicon called the Full Self Driving Computer (or Hardware 3), significantly bumping up the processing capabilities of their cars.
Sensor Fusion
One viable solution to the LiDAR versus camera debate for self-driving cars is to combine the technologies.
Mobileye, while primarily focused on visual perception, also uses inputs from LiDAR and radar to sense the environment and can use them as a secondary, redundant system to the visual perception system. TuSimple, an autonomous truck company, is going the route of combining radar, LiDAR and cameras.
One thing to keep in mind with sensor fusion is that every data point is another input the computing systems need to learn from and interpret. Whether it’s LiDAR, cameras or a fusion of the two, all of these high-tech systems are only as good as the software written to interpret the data with millisecond precision. Humans have to write and test precise code and algorithms for any of this physical hardware to work properly.
Considering how many things a human must pay attention to on the road, getting millions of bits of data flowing into a car’s computer system properly is no small task. So when fusing multiple sensors, the system must learn to work with all of them in harmony, as the sketch below illustrates.
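To give a flavor of what working “in harmony” means at the code level, here is a minimal sketch in Python of one of the simplest possible fusion steps: combining a LiDAR range measurement with a camera-derived distance estimate for the same object by weighting each inversely to its assumed noise. The numbers are made up, and production systems use far more elaborate machinery (tracking filters, association logic, handling of outright disagreement between sensors).

```python
# Minimal fusion sketch: blend two noisy distance estimates of the same object by
# weighting each inversely to its assumed variance. All numbers are illustrative.

def fuse_distances(lidar_m: float, lidar_var: float,
                   camera_m: float, camera_var: float) -> tuple[float, float]:
    """Inverse-variance weighted average of two independent distance estimates."""
    w_lidar = 1.0 / lidar_var
    w_camera = 1.0 / camera_var
    fused = (w_lidar * lidar_m + w_camera * camera_m) / (w_lidar + w_camera)
    fused_var = 1.0 / (w_lidar + w_camera)
    return fused, fused_var

# LiDAR reports 42.1 m with low noise; the camera pipeline estimates 44.0 m with more noise.
distance, variance = fuse_distances(42.1, 0.05, 44.0, 0.8)
print(f"fused distance ~{distance:.1f} m (variance {variance:.3f})")
```

Note how the fused estimate leans toward whichever sensor is assumed to be less noisy; encoding that kind of judgment correctly, for every object in every frame, is exactly the extra work that sensor fusion adds.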
In the video below, Andrej Karpathy explains why sensor fusion is tricky and why Tesla chose to go all-in on vision alone:
The Future
Given the progress of camera-based vision technology of late, it’s likely that we’ll see a proliferation of camera systems in consumer vehicles, especially given the lower costs and recent advances in AI.
That said, LiDAR will likely remain a staple in ride-sharing vehicles, since those companies have invested heavily in it over the years and it provides additional inputs to the perception system, reducing some of the computational complexity. In addition, since ride-sharing vehicles can recoup the hardware costs by replacing the driver, LiDAR’s cost is less of an issue than with consumer vehicles. As LiDAR becomes cheaper thanks to heavy investment from startups, it will likely appear in more consumer vehicles alongside cameras and radar, provided the interference issues can be solved, though initially only in a limited set of cars such as the Volvo EX90 and certain Audi models.
One thing is certain: sensors used by self-driving vehicles will continue to evolve and move past human perception (i.e., vision), as the incorporation of radar already shows. Even Tesla, with Hardware 4, is bringing a more precise radar unit back into the mix. While cameras can, in theory and in practice, power self-driving cars, other sensors (whether LiDAR at scale remains to be seen) will continue to be added to the perception stack to improve environmental sensing in the future.