Self-driving cars aim to revolutionize the auto industry by saving lives, reducing accidents and cutting carbon emissions. These high-tech vehicles have a lot of computing power behind them, including artificial intelligence (AI) that governs how self-driving cars learn and assess what’s going on in their immediate surroundings.
AI still has a long way to go before it’s ready to make cars completely autonomous, at level 5 autonomy, where a driver doesn’t have to intervene to drive at all. Discover some of the difficult challenges for AI in self-driving cars that companies have to solve before this technology goes mainstream.
Self-driving vehicles need to make their presence known to other cars and pedestrians in order to avoid collisions. Add quiet hybrids or electric vehicles to the mix, and self-driving cars can be eerily silent. That can give drivers and pedestrians a false sense of security that there are no cars in the area. AI can make self-driving cars more conspicuous by using headlights, the horn or certain movements to alert others to the car’s presence. Knowing when to employ these signals is something AI must learn to do. It may be pointless to honk at a car or pedestrian 100 feet away when pulling out into traffic, and flashing headlights during the day may not be the best way to warn another car.
Predicting Human Behavior In and Out of Cars
Self-driving cars recognize that a pedestrian crossing or walking in a street is a hazard. However, what AI doesn’t yet understand is what to do when the car sees a person running on a sidewalk 50 feet away. If that person is a jogger, the car should anticipate that the jogger will stop at the crosswalk and wait for vehicle traffic to pass. Otherwise, self-driving cars would all stop several feet from the jogger even though the runner halts before reaching the street.
Predicting whether a pedestrian plans to step into a street is one challenge facing AI today. One solution, according to Anca Dragan of UC Berkeley’s electrical engineering and computer sciences department, is to program cars to think like people. Instead of treating people as inanimate obstacles, self-driving vehicles must recognize that people make decisions. Cars must learn that when someone looks toward a street, that person isn’t necessarily getting ready to enter the roadway. Perhaps the person heard a noise and turned toward it, or heard a friend call out from across the street.
The same issue arises when dealing with human drivers. When a self-driving vehicle approaches an exit on a highway, it should recognize that some cars may move into the right lane and slow down ahead of exiting. Even though traffic around it is posted at freeway speeds, the vehicle should reliably predict which cars will use the exit ramp and decelerate accordingly.
Recognizing Road Signs
Self-driving cars typically use GPS data to follow the speed limit rather than interpreting speed limit signs. The same is true when coming to a stop, avoiding going the wrong way and staying to the right side of the median. Recognizing road signs is a crucial problem for AI to solve when it interprets camera images of the road, especially when there is some kind of emergency the car is not yet aware of. A warning triangle put out by a stalled truck should indicate to a self-driving vehicle that there may be trouble ahead. A sign that says “Utility Work Ahead 500 Feet” should also alter the car’s behavior.
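To make the idea concrete, here is a minimal sketch of how recognized sign text might be mapped to a behavior change. Everything here is hypothetical: the function name, the thresholds and the rule set are invented for illustration, and real systems work from classifier outputs rather than clean text strings.

```python
import re

def plan_for_sign(sign_text, current_speed_mph):
    """Map recognized sign text to a driving adjustment (illustrative rules only)."""
    text = sign_text.lower()
    # A posted speed limit should override any GPS-derived value.
    match = re.search(r"speed limit (\d+)", text)
    if match:
        return ("set_speed", int(match.group(1)))
    # A work-zone warning: slow down before reaching the zone.
    match = re.search(r"work ahead (\d+) feet", text)
    if match:
        return ("slow_to", min(current_speed_mph, 25))
    # A warning triangle suggests a stalled vehicle or other trouble ahead.
    if "warning triangle" in text or "stalled" in text:
        return ("prepare_lane_change", None)
    return ("no_change", None)

print(plan_for_sign("Speed Limit 30", 45))               # ('set_speed', 30)
print(plan_for_sign("Utility Work Ahead 500 Feet", 45))  # ('slow_to', 25)
```

The hard part, of course, is not this lookup but reliably producing the sign text from pixels in the first place.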
Cars must make split-second decisions about obstacles in the road, including avoiding debris. Sensors and lidar systems are fairly good at detecting debris in the road, but knowing what to do with these hazards is another matter. AI needs to come up with instant decisions about what option is best for the car when encountering debris, such as swerving, stopping or driving over it.
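The swerve-stop-or-drive-over choice can be sketched as a few physical checks. This is a toy decision rule, not anyone’s production logic: the deceleration value, the drive-over height limit and the option names are all assumptions made for illustration.

```python
def debris_response(speed_mps, distance_m, obstacle_height_m, adjacent_lane_clear,
                    max_decel_mps2=6.0, driveover_limit_m=0.10):
    """Pick a maneuver for road debris using hypothetical thresholds."""
    # Braking distance from v^2 = 2 * a * d.
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
    if stopping_distance < distance_m:
        return "stop"            # enough room to brake safely
    if adjacent_lane_clear:
        return "swerve"          # can't stop in time, but a lane is open
    if obstacle_height_m <= driveover_limit_m:
        return "drive_over"      # low debris: roll over it
    return "brake_hard"          # no good option; minimize impact speed

print(debris_response(25.0, 40.0, 0.3, False))  # brake_hard
print(debris_response(15.0, 40.0, 0.3, False))  # stop
```

Even this toy version shows why the problem is hard: every branch depends on perception estimates (distance, height, lane occupancy) that carry their own uncertainty.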
Left-hand turns represent a thorny issue when it comes to AI. How does a self-driving car gauge the speed of oncoming traffic in a left-hand turn yielding situation? What happens when an emergency vehicle comes through the intersection while the car is yielding for a left turn? Sensors, cameras and lidar can detect all of these items, but the computer software must decide how best to handle multiple contingencies in one situation.
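The core of the left-turn judgment is gap acceptance: comparing how long the oncoming car takes to arrive with how long the turn takes. The sketch below uses invented numbers (a 4-second turn, a 2-second margin) purely to show the time-to-arrival comparison the software must make.

```python
def accept_left_turn_gap(oncoming_dist_m, oncoming_speed_mps,
                         turn_time_s=4.0, safety_margin_s=2.0,
                         emergency_vehicle_detected=False):
    """Decide whether to begin a yielding left turn (illustrative numbers only)."""
    if emergency_vehicle_detected:
        return False                      # always hold for emergency traffic
    if oncoming_speed_mps <= 0:
        return True                       # oncoming car is stopped
    time_to_arrival = oncoming_dist_m / oncoming_speed_mps
    return time_to_arrival > turn_time_s + safety_margin_s

print(accept_left_turn_gap(120.0, 15.0))  # 8.0 s gap > 6.0 s needed -> True
print(accept_left_turn_gap(60.0, 15.0))   # 4.0 s gap -> False
```

Real software must also handle the contingencies the paragraph mentions, such as an emergency vehicle appearing mid-turn, which is why the decision cannot be a single one-time check.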
Cybersecurity and data breaches are both concerns when it comes to hackable computer systems. Once a hacker gets into a car’s system, the criminal has to change just [one simple aspect](http://www.ttnews.com/articles/autonomous-vehicles-pose-new-challenges-future-cybersecurity) of the software’s code to affect the car’s safety. For example, a hacker could replace the value that represents a 30-mph speed limit sign with one that indicates a 55-mph limit. This could turn a self-driving car into a speeding hazard very quickly, or cause the car to slow down too much when it encounters traffic going the correct speed.
Seeing Through Darkness and Weather
One limitation of lidar and cameras is seeing through weather, such as rain, snow and fog. Seeing obstacles at night, especially dark ones, is another problem AI needs to work out. Sensors and lidar can detect these things, but the difficulty is that the computer program has to recognize what the obstacles are and what to do about them. This was evident in the unfortunate accident in Tempe, Arizona, where an Uber self-driving car struck and killed a pedestrian in March 2018. The victim was wearing dark clothing and crossing a street at night.
There may come a time when a self-driving vehicle must make a life-or-death decision, such as choosing between hitting a pedestrian, hitting another car, or driving off the road and possibly injuring its occupant. Picture a person jogging on a two-lane country road with oncoming traffic and a ditch off to the right. The jogger slips and falls on slick pavement, and the car cannot stop in time. The program assesses the likelihood of survival for a person in a car versus a pedestrian or bicyclist, and then decides how to proceed. Does the car swerve right into the ditch, swerve left into the oncoming car head-on, or brake hard with the pedestrian still in the road?
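One way to frame the comparison the paragraph describes is expected-harm minimization: score each maneuver by the estimated injury risk to everyone involved and pick the lowest total. The probabilities below are invented purely to illustrate the arithmetic; no real system publishes numbers like these, and whether total risk is even the right objective is itself an open ethical question.

```python
# Estimated injury probabilities per maneuver (fabricated for illustration).
options = {
    "swerve_right_into_ditch": {"occupant": 0.30, "pedestrian": 0.00, "other_car": 0.00},
    "swerve_left_into_traffic": {"occupant": 0.60, "pedestrian": 0.00, "other_car": 0.60},
    "brake_in_lane":            {"occupant": 0.05, "pedestrian": 0.70, "other_car": 0.00},
}

def least_harm(options):
    """Return the maneuver with the lowest total expected injury probability."""
    return min(options, key=lambda name: sum(options[name].values()))

print(least_harm(options))  # swerve_right_into_ditch (total 0.30)
```

With these made-up inputs the car takes the ditch, but change any estimate and the answer flips, which is exactly why this class of decision is so contested.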
AI systems are only as good as the humans who program them. AI relies on three major components to succeed: machine learning, data gathering and processing power. Machine learning entails software that learns to distinguish among the various items the computer observes. To do that, AI needs copious amounts of data to digest. A computer needs fast processing power to sift through data, such as road conditions, traffic, signs and weather, so it can make instantaneous decisions.
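The learn-from-examples idea can be shown with a deliberately tiny classifier. Real perception systems train deep networks on millions of sensor frames; this nearest-neighbor toy, with hand-made (height, speed) features and invented labels, only illustrates how labeled data lets software distinguish one kind of object from another.

```python
import math

# Toy "training data": (height_m, speed_mps) feature pairs with labels.
training = [
    ((1.7, 1.4), "pedestrian"),
    ((1.8, 0.0), "pedestrian"),
    ((1.5, 13.0), "car"),
    ((1.4, 20.0), "car"),
    ((1.6, 6.0), "cyclist"),
]

def classify(features):
    """Label a new observation by its nearest training example (1-NN)."""
    nearest = min(training, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((1.6, 0.5)))   # pedestrian
print(classify((1.5, 18.0)))  # car
```

The sketch also hints at why data volume and processing power matter: with five examples the boundaries are crude, and with millions of examples the search for the nearest ones becomes a serious computational problem.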
One solution is to have secure neural networks. A neural network relies on the collective data supplied by hundreds of self-driving vehicles. Each individual car taps into that information to learn how to deal with certain situations. Providing security for these neural nets is crucial to prevent any intrusions and false information.
Connected devices with transmitters also help when sensors have limited information. Waymo and other car companies set up a testing ground in 2015 in Ann Arbor, Michigan, to test city driving conditions, weather and connected devices that assist self-driving cars. For example, a connected device in a traffic light can alert a self-driving car when it plans to change signals. Smart cameras can relay information about the number of pedestrians at a crosswalk several blocks before a car arrives. Cameras can also detect people going in and out of buildings as a way to gauge the number of people on a sidewalk.
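A connected traffic light could communicate something like the message below. The message fields, names and timing logic are assumptions sketched for illustration; actual vehicle-to-infrastructure protocols are standardized and far richer than this.

```python
from dataclasses import dataclass

@dataclass
class SignalChangeAlert:
    """Hypothetical message from a connected traffic light."""
    intersection_id: str
    next_state: str              # e.g. "red" or "green"
    seconds_until_change: float

def plan_approach(alert, eta_s):
    """Decide whether to start slowing, given the car's ETA at the light."""
    if alert.next_state == "red" and eta_s >= alert.seconds_until_change:
        return "begin_slowing"   # the light will be red when we arrive
    return "proceed"

alert = SignalChangeAlert("5th-and-Main", "red", 6.0)
print(plan_approach(alert, 10.0))  # begin_slowing
print(plan_approach(alert, 3.0))   # proceed
```

The appeal of this kind of advance notice is that it covers exactly the cases where onboard sensors fall short, such as a signal or crosswalk that is still blocks away.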
The ultimate solution could be to hand driving back to a human under certain specialized conditions, such as heavy rain, snow, ice or rush hour. That may seem counterintuitive, but programmers and computer scientists may see limits to self-driving vehicles even as the technology moves toward full automation in the late 2020s or early 2030s.