Sunday 26 February 2017

Driving Is Social. Autonomous Cars Aren't, Argues Computer Scientist

Autonomous cars are premised on the simple notion that driving is a fundamentally algorithmic activity. It can be understood as a formal system in which all agents act according to well-defined rules. If red: stop. If green: go. If obstacle and traveling fast: swerve. Else: stop. If gap: merge. Put a whole lot of these rules together and we wind up with an autonomous car capable of some incredibly complex behaviors that will suffice in most real-world driving scenarios.
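As a rough illustration of that rule-based picture, here is a minimal sketch in Python. The function, signal names, and speed threshold are invented for illustration only; they are not taken from any real vehicle's control logic.

```python
# A toy version of the "driving as formal rules" premise described above.
# Everything here (names, thresholds) is illustrative, not real control code.

def decide(light, obstacle_ahead, speed_mps, gap_available):
    """Pick a driving action from a short list of hard-coded rules."""
    if light == "red":
        return "stop"
    if obstacle_ahead:
        # Swerve only if travelling fast; otherwise just stop.
        return "swerve" if speed_mps > 15 else "stop"
    if light == "green":
        return "go"
    if gap_available:
        return "merge"
    return "hold"

# Example: green light, obstacle ahead, moving quickly -> "swerve"
print(decide(light="green", obstacle_ahead=True, speed_mps=20, gap_available=False))
```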

But that's not quite it. Driving is more than a system of formal rules. It's a social activity. Its mechanics involve subtle interactions between humans that reflect those of the not-driving world. We've had hundreds of thousands of years to learn how to predict each other's behaviors at levels that are not just formal but emotional and instinctual. This stuff is hard to articulate, let alone program.

This is the general point made by Barry Brown, a human-computer interaction researcher at Stockholm University, in the current issue of Computer: "Driving isn't just a mechanical operation but also a complex social activity. Until the day comes when all vehicles are fully autonomous, self-driving cars must be more than safe and efficient; they must also understand and interact naturally with human drivers. So long as most vehicles on the roadway continue to be operated by people, self-driving car designers must consider how their choices impact other drivers as well as their own vehicles' passengers."

Brown laments the lack of public data made available by self-driving car developers; we just don't have much of an inside perspective. So Brown built a wide-reaching outside perspective instead, drawing on public YouTube footage of driver-assisted and fully self-driving cars in the wild. In all, he collected 93 videos, mostly of Tesla's driver-assistance system, Autopilot, along with several of similar systems in Volvo and Honda vehicles. Nine were of Google self-driving cars. The formal method used by Brown and colleagues at Stockholm University is described in detail in a separate paper presented recently at the University of Nottingham's Mixed Reality Laboratory.

"We're in the midst of a global field test of autonomous driving technology, yet results from these tests are proprietary, with little publicly available data," Brown writes. "Occasionally, flaws in the technology are exposed by videos taken by in-car dashcams and passengers' mobile phones and uploaded to social media sites like YouTube, prompting media discussion and sometimes controversy."

Most of what Brown found shows well-behaved autonomous cars, but in several cases they got it very wrong. What these situations had in common was the car's inability to understand or display social cues. "While people can often discern other drivers' intentions as well as mood or character (aggressive, hesitant, selfish, unpredictable, and so on) based on changes (or the absence of changes) in their car's speed, direction, and so on, [Tesla] Autopilot lacks this ability," Brown says.

In one common example from the footage, the Tesla fails to recognize a polite gesture by another motorist and then proceeds to cut that motorist off. Another autonomous Tesla wouldn't care, but a human driver perceives the move as rude.

In another, longer example, a human driver misinterprets a gap left open between two self-driving cars as a polite invitation to merge into that space. In reality, the autonomous system is simply maintaining a safe following distance in a situation where human drivers generally don't leave gaps (stopped at a red light, for instance).

In a third example, a Google self-driving car attempts to "creep" into an intersection governed by a four-way stop. When two or more drivers reach a four-way stop at roughly the same time, creeping is a precautionary move meant to signal to the other cars that they should wait. But in this case the self-driving car doesn't creep enough, and the other driver never registers the Google car's intention.
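One way to make that failure concrete: the car's creep may simply fall below the distance at which a human driver reads it as a claim on the intersection. The threshold and distances below are invented for illustration; they aren't drawn from Brown's paper or from Google's system.

```python
# Purely illustrative numbers: assumed values, not measured behavior.
CAR_CREEP_M = 0.3               # how far the self-driving car edges forward
HUMAN_READS_INTENT_AT_M = 1.0   # rough creep distance a human reads as "I'm going"

def creep_registers_as_intent(creep_distance_m: float) -> bool:
    """Does the creep clear the (assumed) threshold at which another
    driver would interpret it as a signal to wait?"""
    return creep_distance_m >= HUMAN_READS_INTENT_AT_M

print(creep_registers_as_intent(CAR_CREEP_M))  # False: the signal is too subtle to register
```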

"As self-driving cars become better at the mechanics of driving, yet still not "behave" exactly like human drivers, they could arouse feelings of anger or frustration," Brown writes. "For example, always slowing down to avoid a collision—a statistically safe but not always the best response—could encourage drivers to cut off or zoom past an autonomous car, ironically increasing the risk of a collision."

The general conclusion is that we can't program autonomous cars to behave as though every other car on the road is also self-driving. Until that's true, we're stuck with human drivers, and we need to build self-driving systems that better account for that fact.



