For many people, self-driving cars still belong to science fiction movies and novels. In reality, the technology might be only years away. And while the mechanical hardware might already exist, the computer algorithms designed to make driving decisions might be a decade off.
The main problem: decisions with potentially fatal results.
The Problem Of Hypotheticals
While every decision can be broken down to its simplest root – an if/then statement – the more complex the hypothetical, the more difficult the resolution. For example: IF traffic is stopped in front of you, THEN decelerate and stop. IF a pedestrian has stepped into a crosswalk, THEN stop to avoid a collision.
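To make the idea concrete, the two rules above might look something like the following minimal sketch. The sensor inputs and action names here are hypothetical, invented purely for illustration; no real autonomous-driving system is this simple.

```python
# A minimal sketch of rule-based driving decisions, using hypothetical
# sensor inputs and action names. It only illustrates the if/then
# structure described above, not a real control system.

def decide(traffic_stopped_ahead: bool, pedestrian_in_crosswalk: bool) -> str:
    """Return a driving action for two simple, unambiguous situations."""
    if traffic_stopped_ahead:
        return "decelerate_and_stop"
    if pedestrian_in_crosswalk:
        return "stop_for_pedestrian"
    return "continue"

print(decide(traffic_stopped_ahead=False, pedestrian_in_crosswalk=True))
# -> stop_for_pedestrian
```

Each rule here is unambiguous: one condition, one safe response. The trouble begins when conditions conflict and no response is safe.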
However, what happens when the car is forced into a no-win situation? Every day, drivers encounter complex situations involving both on-road and off-road elements. The more skilled the driver, the better his or her chances of lessening the impact of an accident. The problem arises when a complex situation cannot be resolved simply.
Consider:
- You’re on a two-lane road.
- The oncoming traffic lane is full of cars.
- The school bus in front of you, filled with children, brakes suddenly to avoid hitting a dog in the road.
- At your speed, you don’t have time to stop.
- But you have time to swerve to your right, off the road.
- There is a group of children near the shoulder of the road operating a lemonade stand.
While the elements of this scenario might border on the ridiculous, it illustrates the amount of data that must be recognized instantly and used to calculate a response. What would a computer do?
What Is The Trolley Problem?
This philosophical no-win scenario has existed for decades, but it has re-emerged in discussions of how a self-driving car should react to the road.
The trolley problem is a hypothetical in which an individual witnesses a runaway trolley. The trolley is on a path to hit and kill several people, but, by pulling a lever, the witness can divert the trolley so that it kills only one person. This decision, weighing greatest benefit against least cost, is one that computer programmers must now identify and resolve.
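Reduced to code, such a least-cost rule might look like the sketch below. The harm scores are invented placeholders; deciding how to assign them in the first place is precisely the unresolved ethical question.

```python
# A minimal sketch of "greatest benefit versus least cost" decision logic.
# The harm values are hypothetical placeholders, for illustration only.

def choose_action(options: dict[str, float]) -> str:
    """Pick the option with the lowest estimated harm."""
    return min(options, key=options.get)

trolley_options = {
    "stay_on_track": 5.0,  # estimated harm: several people struck
    "pull_lever": 1.0,     # estimated harm: one person struck
}
print(choose_action(trolley_options))  # -> pull_lever
```

The arithmetic is trivial; the controversy lies entirely in the numbers. Is a child worth more than an adult? A passenger more than a pedestrian? The code cannot answer that.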
In addition:
- A mail truck is parked on the shoulder, but can only be passed by crossing the road’s center line. When is it “safe” to cross the center line into oncoming traffic?
- A group of pedestrians stands at a crosswalk. How do we determine whether that group will step into traffic?
- Should the car swerve to avoid hitting a squirrel? A rabbit? A dog? A deer? Where is the cutoff?
- Can the car differentiate between the darkness of a tunnel at night versus a wall?
- With a flashing yellow, semi-protected left-hand turn, how can the car calculate the odds of a safe turn if the oncoming traffic arrives around a corner or up a hill?
While automakers assure us that they are working to resolve these types of hypotheticals, they also note that a self-driving car’s safety is rooted in its ability to perceive potential problems. The countless sensors that self-driving cars will rely on exist for that reason. Your car might not know for certain whether to swerve to avoid a dog in the road, but it might actually “see” the dog 10 yards off the road, behind shrubs, before the dog even begins to run toward the road. Your car might then begin slowing in anticipation of a problem.
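That kind of anticipatory slowing might be sketched as follows. The risk model, the scaling factors, and the cap on deceleration are all hypothetical assumptions chosen for illustration, not anything an automaker has published.

```python
# A minimal sketch of anticipatory slowing: reduce target speed as the
# estimated risk from a detected off-road object rises. The risk model
# and all constants are hypothetical.

def target_speed(current_mph: float, object_distance_yards: float,
                 object_moving_toward_road: bool) -> float:
    """Scale speed down as a detected object nears or approaches the road."""
    risk = 1.0 / max(object_distance_yards, 1.0)  # closer object, higher risk
    if object_moving_toward_road:
        risk *= 2.0  # movement toward the road doubles the estimated risk
    # Cap the slowdown at half of current speed for this illustration.
    return current_mph * max(1.0 - risk, 0.5)

# A dog detected 10 yards off the road, starting to move toward it:
print(round(target_speed(40.0, 10.0, True), 1))  # -> 32.0
```

The point is not the formula but the principle: a car that perceives a hazard early may never face the no-win choice at all.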
Car accidents and auto defects will always exist. In the next decade, though, your argument with an at-fault driver might ultimately be human versus machine.