I've been doing a little reading on the subject and have come across several interesting aspects of making driverless cars work. To make a self-driving car work, engineers and programmers have to consider and think through millions of scenarios that can occur on the roadway and program accordingly. One aspect in particular struck me as interesting enough to share:



The Trolley Problem
Philosophers have been thinking about ethics for thousands of years, and we can apply that experience to robot cars. One classical dilemma, proposed by philosophers Philippa Foot and Judith Jarvis Thomson, is called the Trolley Problem: Imagine a runaway trolley (train) is about to run over and kill five people standing on the tracks. Watching the scene from the outside, you stand next to a switch that can shunt the train to a sidetrack, on which only one person stands. Should you throw the switch, killing the one person on the sidetrack (who otherwise would live if you did nothing), in order to save five others in harm’s way?

A simple analysis would look only at the numbers: Of course it’s better that five persons should live than only one, everything else being equal. But a more thoughtful response would consider other factors too, including whether there’s a moral distinction between killing and letting die: It seems worse to do something that causes someone to die (the one person on the sidetrack) than to allow someone to die (the five persons on the main track) as a result of events you did not initiate or have responsibility for.

To hammer home the point that numbers alone don’t tell the whole story, consider a common variation of the problem: Imagine that you’re again watching a runaway train about to run over five people. But you could push or drop a very large gentleman onto the tracks, whose body would derail the train in the ensuing collision, thus saving the five people farther down the track. Would you still kill one person to save five?

If your conscience starts to bother you here, it may be that you recognize a moral distinction between intending someone’s death and merely foreseeing it. In the first scenario, you don’t intend for the lone person on the sidetrack to die; in fact, you hope that he escapes in time. But in the second scenario, you do intend for the large gentleman to die; you need him to be struck by the train in order for your plan to work. And intending death seems worse than just foreseeing it.



This dilemma isn’t just a theoretical problem. Driverless trains today operate in many cities worldwide, including London, Paris, Tokyo, San Francisco, Chicago, New York City, and dozens more. As situational awareness improves with more advanced sensors, networking, and other technologies, a robot train might someday need to make such a decision.

Autonomous cars may face similar no-win scenarios too, and we would hope their operating programs would choose the lesser evil. But it would be an unreasonable act of faith to think that programming issues will sort themselves out without a deliberate discussion about ethics, such as which choices are better or worse than others. Is it better to save an adult or child? What about saving two (or three or ten) adults versus one child? We don’t like thinking about these uncomfortable and difficult choices, but programmers may have to do exactly that. Again, ethics by numbers alone seems naïve and incomplete; rights, duties, conflicting values, and other factors often come into play.
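
To see how thin a basis counting alone is, here is a minimal sketch (in Python) of what a purely "numbers only" rule would amount to in code. The Outcome class, the fatality figures, and choose_by_numbers are all invented for illustration and are not drawn from any real system:

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """A hypothetical summary of one possible crash outcome (illustrative only)."""
    label: str
    expected_fatalities: float  # a statistical expectation, not a certainty


def choose_by_numbers(outcomes):
    """Naive 'ethics by numbers': pick the outcome with the fewest expected deaths.

    This deliberately ignores the distinctions discussed above, such as
    killing vs. letting die, or intending a death vs. merely foreseeing it.
    """
    return min(outcomes, key=lambda o: o.expected_fatalities)


# Trolley-style comparison: the count alone always favors throwing the switch.
print(choose_by_numbers([
    Outcome("do nothing (five on the main track)", 5.0),
    Outcome("throw the switch (one on the sidetrack)", 1.0),
]).label)
```

Changing the counts changes the answer, but nothing in a rule like this can register the difference between throwing the switch and pushing the large gentleman.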



If you complain here that robot cars would probably never be in the Trolley scenario—that the odds of having to make such a decision are minuscule and not worth discussing—then you’re missing the point. Programmers still will need to instruct an automated car on how to act for the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen scenarios. So programmers will need to confront this decision, even if we human drivers never have to in the real world. And it matters to the issue of responsibility and ethics whether an act was premeditated (as in the case of programming a robot car) or merely reflexive, without any deliberation (as may be the case with human drivers in sudden crashes).

Anyway, there are many examples of car accidents every day that involve difficult choices, and robot cars will encounter at least those. For instance, if an animal darts in front of our moving car, we need to decide: whether it would be prudent to brake; if so, how hard to brake; whether to continue straight or swerve to the left or right; and so on. These decisions are influenced by environmental conditions (e.g., a slippery road), obstacles on and off the road (e.g., other cars to the left and trees to the right), the size of an obstacle (e.g., hitting a cow diminishes your survivability, compared to hitting a raccoon), second-order effects (e.g., a crash with the car behind us if we brake too hard), lives at risk in and outside the car (e.g., a baby passenger might mean the robot car should give greater weight to protecting its occupants), and so on.
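
As a rough illustration of how those factors might be folded into a single choice, here is a minimal sketch of a cost-weighted maneuver comparison. The Maneuver class, the scoring formula, and every number in it are assumptions invented for this example; a real vehicle's decision software is far more involved and is not being described here:

```python
from dataclasses import dataclass


@dataclass
class Maneuver:
    """One candidate response; all fields and values are invented for illustration."""
    name: str
    collision_risk: float        # rough chance (0..1) of hitting something
    severity: float              # how bad that collision would likely be (0..1)
    loss_of_control_risk: float  # e.g., higher on a slippery road


def maneuver_cost(m: Maneuver, occupant_weight: float = 1.0) -> float:
    # Toy scoring: expected harm from a collision, scaled by how much we
    # weight occupant safety, plus the risk of losing control entirely.
    return m.collision_risk * m.severity * occupant_weight + m.loss_of_control_risk


# An animal darts in front of the car on a slippery road, with oncoming
# traffic to the left and trees to the right (all numbers made up).
options = [
    Maneuver("brake hard, stay straight", collision_risk=0.9, severity=0.2, loss_of_control_risk=0.3),
    Maneuver("swerve left into the oncoming lane", collision_risk=0.4, severity=0.9, loss_of_control_risk=0.5),
    Maneuver("swerve right toward the trees", collision_risk=0.5, severity=0.7, loss_of_control_risk=0.5),
]

best = min(options, key=maneuver_cost)
print(best.name)
```

The point is only that someone has to choose the weights, and that choice is exactly the kind of deliberate, premeditated judgment discussed above.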



Human drivers may be forgiven for making an instinctive but nonetheless bad split-second decision, such as swerving into oncoming traffic rather than the other way into a field. But programmers and designers of automated cars don’t have that luxury, since they do have the time to get it right and therefore bear more responsibility for bad outcomes.