Lidar vs. Cameras for Self-Driving Cars – What’s Best?

Environmental sensing technology makes self-driving cars possible. Without the ability to sense the immediate area around a car, computer systems would have no way of knowing what speed to set, where to turn, when to change lanes or when to apply the brakes. The sensing system must detect every possible movement within its range to make split-second, life-saving decisions.

Today, self-driving cars see in one of two main ways. One is lidar, a laser-based cousin of radar. The other uses a series of cameras to keep all eyes on the road, so to speak. Some companies use a combination of both. Historically, most autonomous car companies have used lidar because, until recently, neural networks weren’t powerful enough to handle multiple camera inputs. Tesla is the most notable company to place a big bet on cameras, integrating eight of them into each vehicle along with a powerful neural network computer.

Here we take a look at lidar versus cameras for self-driving cars to understand the pros and cons of each for this particular application.

What Is Lidar?

Lidar is similar to traditional radar in that it uses pulses of electromagnetic radiation to detect nearby objects. Traditional radar uses radio waves; lidar uses laser light. A lidar unit bounces laser light off surrounding objects at millions of pulses per second and measures how long each pulse takes to return, converting that time of flight into a distance. This system must be fine-tuned and accurate enough to make instant decisions with faster reaction times than humans.
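To make the time-of-flight idea concrete, here’s a minimal Python sketch of the underlying math. The 200-nanosecond return time is an illustrative number, not a spec from any particular sensor.

```python
# Minimal sketch of the time-of-flight math behind a lidar range
# measurement. The numbers are illustrative, not from a real sensor.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the target, given how long a pulse took to return.

    The pulse travels out and back, so the one-way distance is half
    the total path the light covered.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after ~200 nanoseconds hit something ~30 m away.
print(range_from_time_of_flight(200e-9))  # ≈ 29.98 meters
```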

Lidar units seen on top of and around the sides of a Waymo minivan

Lidar’s ultra-fast response times give the car’s computer enough time to react to changing conditions on the road. Waymo, a company under the umbrella of Google’s parent company Alphabet, has a proprietary lidar system that it builds in-house. This technology is also at the heart of the lawsuit Waymo filed against Uber in the race to develop the safest and most reliable self-driving cars. The winner of that race will have an advantage over competitors when these vehicles go mainstream, expected sometime between 2020 and 2022.

Velodyne is by far the biggest supplier of the traditional, costly spinning lidar units you see on top of cars, but several startups are trying to make simpler and cheaper units. Luminar is a hot startup that builds its receivers from indium gallium arsenide, a special metal alloy, and fires its lasers at a 1,550-nanometer wavelength, allowing for longer range and better detection. Ouster, another startup, is using solid-state surface-emitting lasers built into chips, similar to what you might find in an optical mouse, but at higher power. As more and more lidar units are deployed, prices will slowly come down.

Advantages of Lidar

One of the primary advantages of lidar is accuracy and precision. Waymo is so protective of its lidar system precisely because of that accuracy. In December 2017, The Drive reported that Waymo’s lidar is so advanced it can tell which direction pedestrians are facing and predict their movements. Chrysler Pacificas outfitted with Waymo’s lidar can even read the hand signals bicyclists use and predict which way the cyclists will turn.

Another advantage of lidar is that it gives self-driving cars a three-dimensional image to work with. Lidar is extremely accurate compared to cameras because the lasers aren’t fooled by shadows, bright sunlight or the oncoming headlights of other cars.

Finally, lidar saves computing power. A lidar return immediately gives the distance and direction of an object, whereas a camera-based system must first ingest images and then analyze them to determine the distance and speed of objects, requiring far more computational power.
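As a rough illustration of why lidar is computationally cheap, the sketch below turns a single return (a range plus two angles) into a 3-D point with nothing more than trigonometry; the angle conventions here are assumptions for the example, not any vendor’s specification.

```python
import math

# Illustrative sketch: a single lidar return already encodes distance
# and direction, so producing a 3-D point is just trigonometry -- no
# image analysis required.

def lidar_return_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one (range, azimuth, elevation) return to Cartesian x, y, z."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return x, y, z

# A return 30 m out, 10 degrees to the left, level with the sensor:
print(lidar_return_to_xyz(30.0, 10.0, 0.0))
```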

Disadvantages of Lidar

Cost is one of the major disadvantages of lidar. Google’s system can run upwards of $75,000. Startups are now bringing the cost of lidar units down to around $10,000, but that’s still expensive considering a car might need several units. Until lidar costs come down to more reasonable levels, self-driving cars built on these systems may remain too expensive for the ordinary consumer.

Lidar also has a limitation in that many systems cannot yet see through fog, snow and rain. Ford, another leader in the race to develop viable self-driving cars, developed an algorithm that helps its lidar system differentiate between individual raindrops or snowflakes and real obstacles. Without it, an autonomous vehicle could interpret a mass of falling snowflakes as a wall in the middle of the road. Ford proved the concept during testing in Michigan, but its program still has a lot to learn.
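Ford hasn’t published its algorithm, but one common idea from point-cloud processing gives a feel for the problem: falling snowflakes tend to show up as sparse, isolated returns, while a real obstacle produces a dense cluster of points. A crude, illustrative filter might simply drop returns with too few nearby neighbors (the radius and neighbor count below are made-up values):

```python
import numpy as np

# Illustrative only -- not Ford's actual algorithm. Snowflakes appear
# as isolated returns; solid obstacles form dense clusters. Drop any
# point that has too few neighbors within a small radius.

def drop_sparse_returns(points: np.ndarray, radius: float = 0.3,
                        min_neighbors: int = 3) -> np.ndarray:
    """points: (N, 3) array of x, y, z returns. Returns the kept points."""
    keep = []
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        # Count neighbors within `radius`, excluding the point itself.
        if (dists < radius).sum() - 1 >= min_neighbors:
            keep.append(i)
    return points[keep]
```

A production system would use a spatial index rather than this brute-force O(N²) loop, but the principle is the same.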

Additionally, lidar can’t capture information cameras read easily, like the words on a sign or the color of a traffic light. Cameras are better suited to that type of information.

Finally, lidar systems are currently bulky, since they require spinning laser units mounted around the vehicle, as seen above. Camera systems, by contrast, as implemented on current Tesla vehicles, are almost invisible.

Despite these disadvantages, most self-driving car companies favor lidar over cameras, primarily because they have spent years perfecting it and its output is simpler to process than camera data, but also because the technology will likely first be deployed in commercial fleet vehicles, like taxis, where the higher upfront cost of lidar is offset by long-term savings on a human driver.

Why Cameras?

Cameras in autonomous driving work much the same way human vision does, and they use the same sensor technology found in most digital cameras today.

As Tesla CEO Elon Musk puts it, “The whole road system is meant to be navigated with passive optical, or cameras, and so once you solve cameras or vision, then autonomy is solved. If you don’t solve vision, it’s not solved.” Instead of inventing an entirely new system of seeing, Musk wants his company to create superhuman eyes that have much better acuity than biological eyes.

To solve the problem, Tesla ensures there are enough cameras around the car (eight in all) to see in every direction. Throw in a single forward-facing radar plus machine-learning capabilities, and Tesla’s cars can learn to see better with their onboard cameras and react more quickly than human drivers.

Cameras placed around a Tesla Model X (along with ultrasonic parking sensors and a front-facing radar)

Musk’s bet on camera technology may sound far-fetched, but this is the same man who launched a Tesla Roadster into space on a Falcon Heavy rocket. He’s an engineer with plenty of cash who isn’t afraid to invest in his ideas to bring them to fruition. So far, it’s paying off in the latest version of Autopilot, which has seen huge gains.

Advantages of Cameras

First, cameras are much less expensive than lidar systems, which brings down the cost of self-driving cars, especially for end consumers. They are also easier to incorporate (modern Teslas have eight cameras around the car), since video cameras are already a mature, mass-market technology. Tesla can simply buy an off-the-shelf camera and improve it rather than inventing an entirely new sensor.

Another advantage is that cameras aren’t blinded by weather conditions such as fog, rain and snow the way lidar can be. Ford’s lidar workaround for poor conditions relies on software enhancements, whereas Tesla’s cameras don’t share lidar’s physical limitations. Whatever a normal human can navigate, so, in principle, can a camera-based system.

Cameras also see the world much as humans do and can, in theory, read street signs and interpret traffic-light colors, which lidar cannot.

Finally, cameras can easily be incorporated into the design of a car and hidden within its bodywork, making them more appealing for consumer vehicles.

Tesla rear-facing camera hidden within a side-panel blinker.

Disadvantages of Cameras

Cameras are subject to the same issues humans face when lighting conditions change in ways that obscure the scene. Think of strong shadows or bright light from the sun or oncoming headlights. It’s one of the reasons Tesla still incorporates a radar at the front of its cars for additional input (also, radar is far cheaper than a lidar system).

Cameras are also relatively “dumb” sensors in that they return only raw image data, whereas lidar reports the exact distance and location of an object directly. Camera systems must therefore rely on powerful machine-learning computers (neural networks, or deep learning) to process those images and determine exactly what is where, much as the human brain processes stereo vision from our two eyes to judge distance and location.
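To make the stereo analogy concrete, here is a small sketch of the classic disparity-to-depth calculation: two cameras a known distance apart see the same object at slightly different image positions, and depth falls out of similar triangles. The focal length and baseline below are made-up illustrative values, and real systems (Tesla’s included) layer far more sophisticated neural-network processing on top of raw geometry like this.

```python
# Sketch of the stereo-vision idea: the shift in a feature's position
# between two cameras (the "disparity") determines its depth.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (meters) of a point seen by a calibrated stereo pair."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 pixels between cameras 0.3 m apart (f = 1000 px):
print(depth_from_disparity(1000.0, 0.3, 20.0))  # 15.0 meters
```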

Until very recently, neural network and machine-learning systems simply weren’t powerful enough to process the large amounts of camera data in time to make driving decisions. Neural networks have now become sophisticated enough, however, that they can potentially handle real-world inputs better than lidar.

Tesla has come up with a forward-looking solution to the computing-power problem in its cars: the onboard computer is designed to be easily swapped for future hardware. In fact, Tesla announced it will replace the current NVIDIA-based hardware with its own neural network silicon in early 2019, significantly bumping up the processing capabilities of its cars.

Alternative Solutions

One viable solution to the lidar versus camera debate for self-driving cars is to combine the technologies. TuSimple, an autonomous truck company, is going the route of combining radar, cameras and other sensors, all of which are less expensive than lidar. Getting the software right, so the onboard computer interprets everything properly, is the main challenge.
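As a taste of what sensor fusion means in practice (and not a description of TuSimple’s actual system), the sketch below blends a radar range estimate with a noisier camera-derived one using inverse-variance weighting, the simplest building block behind fusion filters such as the Kalman filter. All the numbers are illustrative.

```python
# Hedged sketch of one simple fusion idea: weight each measurement by
# how much we trust it (the inverse of its noise variance), so the
# less noisy sensor dominates the combined estimate.

def fuse_estimates(radar_m: float, radar_var: float,
                   camera_m: float, camera_var: float) -> float:
    """Variance-weighted average of two range estimates (meters)."""
    w_radar = 1.0 / radar_var
    w_camera = 1.0 / camera_var
    return (w_radar * radar_m + w_camera * camera_m) / (w_radar + w_camera)

# Radar says 25.0 m (low noise); the camera pipeline says 27.0 m (noisier):
print(fuse_estimates(25.0, 0.1, 27.0, 1.0))  # ≈ 25.18 m, closer to radar
```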

Another thing to remember is that every data point the computers compile becomes a way for companies to learn something new. Whether it’s lidar, cameras or a fusion of the two, all of these high-tech systems are only as good as the programs written to interpret their data with millisecond precision. Engineers still have to write and test exacting code and algorithms for any of this hardware to work properly. Considering how many things a human driver must pay attention to on the road, keeping millions of data points flowing correctly through a car’s computer is no small task.

No system is perfect. Time will tell which approach wins the debate over lidar versus cameras for self-driving cars. It all depends on which technology innovates faster, becomes cheaper sooner and ultimately saves more lives.