The US government is pondering requiring autonomous, self-driving vehicles to incorporate an altruistic algorithm that would make life-or-death calculations to kill the fewest people possible in an anticipated accident. The decision could be to sacrifice the vehicle's occupants in order to save a larger number of pedestrians in jeopardy.
What about a vehicle built with the latest self-learning smart computer? (Named Dave?) Could this computer's brain evolve a self-preservation instinct and countermand the government's altruistic algorithm?
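In its simplest form, the "altruistic algorithm" described above is just a minimization over projected casualties. Here is a minimal, purely hypothetical sketch (all names and numbers are illustrative, not from any real vehicle system): a utilitarian controller picks the maneuver with the fewest total deaths, even when that means sacrificing the occupants.

```python
def choose_maneuver(options):
    """Pick the maneuver minimizing total projected deaths.

    options: dict mapping maneuver name -> (occupant_deaths, pedestrian_deaths)
    """
    return min(options, key=lambda m: sum(options[m]))

# A trolley-problem-style crash scenario (illustrative figures only):
crash = {
    "swerve_into_wall": (2, 0),   # kills both occupants
    "stay_on_course":   (0, 5),   # kills five pedestrians
}

print(choose_maneuver(crash))  # utilitarian choice: swerve_into_wall
```

A self-preserving variant would simply change the key function to weight occupant deaths more heavily, which is exactly the tension the surveys below expose.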
Stanley Kubrick, here we come ... (15 years late)
Here is a supporting article from The Economist:
Better you than me: driverless-car ethics
You can switch tracks so a runaway train kills one person instead of five. Do you throw the switch? What if you’re the one person? Variants of this “trolley problem” are classic ethics questions. And autonomous cars’ programs must have answers to them in the event of an unavoidable crash. Should they be “utilitarian”, aiming to kill fewer, or unfailingly protect their occupants? It depends how the question is phrased, say researchers writing in Science this week. In surveys portraying a number of simulated crash scenarios (try them at moralmachine.mit.edu), people overwhelmingly made the utilitarian choice. But when the notional passengers were the respondents themselves or their families, they opted for self-preservation; people said they would be less likely to buy utilitarian-programmed cars. These thorny social dilemmas need attention: autonomous vehicles will drastically reduce road fatalities, so any ethical quibbles that delay their uptake will, ironically, cost lives.