Trolley problems

The New Yorker on trolley problems in self-driving cars
Both Di Fabio and Shariff agree that the advent of autonomous vehicles will force us to make our underlying moral calculations explicit. In twenty to fifty years, the majority of cars on the road will likely be driverless. If billions of machines are all programmed to make the same judgement call, it may be a lot more dangerous to cross the street as, say, an overweight man than as a fit woman. And, if companies decide to tweak the software to prioritize their customers over pedestrians, it may be more dangerous to be beside the road than on it. In a future dominated by driverless cars, moral texture will erode away in favor of a rigid ethical framework. Let’s hope we’re on the right side of the algorithm.

Psychology's trolley problem might have a problem (Slate).
[In an experiment where people were asked whether they would push a button to give an electric shock to one mouse to prevent five mice being given the shock]: [In conversation, t]wo-thirds of Bostyn’s subjects said yes, they would indeed press the button in this scenario... [in practice, a]bout five-sixths of these subjects pressed the actual button, suggesting they were more inclined to make that choice in real life than their fellow subjects were in hypotheticals. Moreover, people’s responses to the 10 trolleyology dilemmas they were given at the start of the experiment—whether they imagined that they’d push the fat man off the bridge and all that—did not meaningfully predict their choices with live mice. Those who had seemed to be more focused on the greater good in the hypotheticals did seem to press the real-life button more quickly, though, and they described themselves as being more comfortable with their decision afterward.
The Trolley Problem Will Tell You Nothing Useful About Morality (Current Affairs).
The trolley problem is repulsive, because it encourages people to think about playing God and choosing which people to kill. .... It warps human moral sensibilities, by encouraging us to think about isolated moments of individual choice rather than the context in which those choices occur. ... And it encourages a kind of fatalism, where everything you do will inevitably be a disaster and moral questions seem hard rather than easy.

There are plenty of moral questions we don’t discuss nearly enough: Is there a moral obligation to help refugees? Is being rich in a time of poverty justifiable? Do you have an obligation to speak out about sexual harassment? What should you do if you know someone is being abused but they explicitly ask you not to say or do anything about it? Are there any justifiable reasons for the existence of borders? Does capitalism unfairly exploit workers? Should you lie to protect an undocumented person? ... One of the hardest moral quandaries is in determining what our priorities should be: in a world filled with a million injustices, do you just pick one at random to address? It’s only because we spend so little time thinking about which questions probably matter more than others that anyone can think trolley problems are a comparably effective use of time.
Because I work adjacent to self-driving car research I get asked about the trolley problem from time to time, and these were three interesting articles about it. (The New Yorker one has less of an agenda and is the best). The trolley problem is the hypothetical question: you see a streetcar out of control and careering down the street to where it will kill five people; if you pull a switch you'll divert it to where it kills one person; should you do it?

Thought 1: although the trolley problem is presented as a philosophical problem, it isn't really one. It's a question in psychology. It doesn't explore anything interesting about morality -- obviously the right thing to do is to make the choice that serves the greater good, if that's clear, and in the example above the right choice is to kill fewer people. What it does is put you face to face with your own squeamishness about directly taking action that kills someone, as opposed to avoiding action that results in the deaths of others. It's not about the right thing to do; it's about your relationship with your own culpability. It's interesting here that, per the Slate article, people's answers to the hypothetical question show a greater bias against culpability-creating action than people's behavior in real life, where, if the greater-good choice is clear, people will generally make it.

Thought 2: moving the trolley problem to the self-driving car space doesn't make it any more interesting philosophically. Human drivers face these decisions, but not very often. Self-driving cars will get into these situations even less often, and will just be programmed to mirror the decisions humans make.

(Thought 2a: There is an interesting effect of this -- self-driving cars will amount to an enormous transfer of liability from the driver to the car maker, and to the end of individual driving insurance. This means that the whole cost of insurance for the car over its lifetime needs to be bundled into the cost of the car itself, which will create enormous incentives to make sure self-driving cars are actually safer than people-driven cars, so that the cost stays competitive. Note, of course, that if self-driving cars lead to less car ownership, the increased insurance cost will be paid per trip rather than per car, and a fleet operator may pay claims out of operating expenses rather than carrying insurance. But the fundamental point is the same -- potential accidents are a cost that has to be paid for, and if the individual driver doesn't pay it directly, it still needs to be paid somewhere in the system.)
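To make the bundling point concrete, here's a back-of-the-envelope sketch. Every number in it is invented for illustration -- these are not real accident or insurance statistics -- but it shows how a lifetime expected-claims cost folds into either a sticker price or a per-trip fee:

```python
# Illustrative only: all numbers below are invented assumptions,
# not real insurance or accident statistics.

LIFETIME_MILES = 150_000     # assumed miles driven over the car's life
CLAIMS_PER_MILE = 0.000002   # assumed expected claims per mile driven
AVG_CLAIM_COST = 20_000      # assumed average payout per claim (dollars)

# Expected lifetime claims cost: this is what would get bundled
# into the price of the car if the maker carries the liability.
lifetime_claims_cost = LIFETIME_MILES * CLAIMS_PER_MILE * AVG_CLAIM_COST

# The same cost spread over trips instead, for a fleet operator
# paying claims out of operating expenses (assumed 10-mile trips).
AVG_TRIP_MILES = 10
trips_over_lifetime = LIFETIME_MILES / AVG_TRIP_MILES
per_trip_cost = lifetime_claims_cost / trips_over_lifetime

print(f"bundled into car price: ${lifetime_claims_cost:,.0f}")
print(f"equivalent per trip:    ${per_trip_cost:.2f}")
```

Either way the money is the same; the only question is which line item it appears on, which is the point above.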

The New Yorker article in particular has slightly changed my mind on this.

Thought 3: If you move the decision as to who to hit to an algorithm that's centrally developed and rolled out across the whole fleet, then any issues with that algorithm could result in significant changes to outcomes, compared to what happens currently. Basically, right now, reactions to trolley-problem-like situations are individual and have a large random component. Some people run over grannies, some people run over kids. But if the computer decides that everyone should run over grannies, the world does look different. So it's worth taking time to develop the algorithm carefully, and not just reflect exactly what the median driver would do. (Or it's worth having a large random element in the algorithm.)
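A toy sketch of that last point -- emphatically not how any real vehicle is programmed, and the path names and harm scores are made up. A single deterministic rule shipped fleet-wide always picks the same victim class, so any bias in it multiplies across every car; adding a random element among harm-equivalent options keeps outcomes spread out, the way they are with human drivers:

```python
import random

def choose_path_deterministic(options):
    """Every car in the fleet makes the identical choice:
    min() always returns the first of the tied options."""
    return min(options, key=lambda o: o["expected_harm"])

def choose_path_randomized(options):
    """Pick uniformly among options tied for lowest expected harm,
    so the fleet doesn't converge on one victim class."""
    lowest = min(o["expected_harm"] for o in options)
    tied = [o for o in options if o["expected_harm"] == lowest]
    return random.choice(tied)

# Two equally bad outcomes (hypothetical scores):
options = [
    {"path": "swerve_left", "expected_harm": 1.0},
    {"path": "swerve_right", "expected_harm": 1.0},
]
# The deterministic policy returns "swerve_left" for every car in the
# fleet; the randomized one splits roughly 50/50 across the fleet.
```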

Thought 4: The inverse of thought 3. It could be that there's already an undesirable bias in how people behave. The New Yorker article shows people in high-inequality countries happier to hit homeless people than businessmen, and people in Latin American countries happier to hit fat people than people out exercising. Is it right for algorithms to embed these biases? (We've already seen cases where "AI" algorithms built to make hiring decisions simply embedded existing human prejudices to bring in white males for interview, and Democratic Representative Alexandria Ocasio-Cortez was recently subject to some hilarious mansplaining for pointing this out.) In fact, German law, the only law in the world on the topic, states: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” (Hat tip again to the New Yorker article.)

Overall, then, I'm thinking that the trolley problem in self-driving cars is a more interesting engineering problem than I'd previously thought.

However, I do question the level of attention that the trolley problem gets in the literature and in journalism. It depends on a very specific set of conditions, where there are no good choices but you nevertheless get to make a choice. How often does that really happen? As self-driving cars get better and better, won't it happen even less in the future than it does now? Have you ever read about an accident where someone says "I swerved to avoid the baby ducks even though I knew I'd hit that fat person"? Surely the point of self-driving cars is to make accidents so rare that what you happen to be programmed to do in the trolley scenario simply doesn't matter. As the (somewhat polemical) Current Affairs article puts it, "It’s only because we spend so little time thinking about which questions probably matter more than others that anyone can think trolley problems are a comparably effective use of time."

Maybe that's the real lesson of the trolley problem. If you want people to talk about your ideas, don't worry about whether the ideas are significant. Just try to make them sexy.

(PS -- Hello again Livejournal! Yes, I am posting here because it's virtually the same as making a private note.)

Interesting Links for 24-10-2013

(Thanks to andrewducker and FeedThisToThat)

Interesting Links for 19-08-2013

  • On Being Off: The Case of Amanda Knox
    "Preston theorizes that the act of punishment lights up a pleasure zone in our brains, because group evolution relies on cooperation and therefore must include the ability to punish those who don’t cooperate or who are seen as outsiders." I'm interested in this in-group out-group dynamic, in particular between only slightly different groups: iPhone v Android, experimental v theoretical physics, Belfast Catholic v Belfast Protestant. This is a nice article that helps me think about ways that we form judgments about people we half-know.
(Thanks to andrewducker and FeedThisToThat)

Who do you love? 33: The Moonbase

A thing to love about The Moonbase: The scenes at the start of episode 1 where they put on space suits and jump gently around. This is reminiscent of the spacesuit business at the start of The Web Planet, which was as much about establishing the TARDIS crew's dynamic as it was about establishing the particular setting for this week.

This points to a way the Cybermen are used as recurring villains that differs significantly from the Daleks. When the Daleks show up it's a sign that things are going to go bonkers (with the exception of Power of the Daleks, which is the best-disciplined story of any kind to date). A Cyberman story is one of two types: either a slow-paced Hartnell throwback (this, The Wheel in Space) or a boldly confident statement that this is what the show is like now, one that doesn't break new ground but executes on old ideas better than you've seen before (Tomb, The Invasion). Maybe it's the effect of the different shapes. The Cybermen are the first classic person-in-a-rubber-suit monsters. For all the emotionless schtick they come with, the fact is that they have very expressive bodies in long and medium shot; also, because they aren't quite human, there's a tendency to think you can get away with putting them on harnesses or using dummies more than you could with stuntmen playing actual humans. They lend themselves to being gracefully contorted in the same way that the Daleks lend themselves to squawking slapstick. So we have the climax of this story, the spacewalk in The Wheel in Space, the mad Cyberman in the sewers in The Invasion, all of which would look and feel very different if they had Daleks instead.

As such the Cybermen are the perfect monster for the increasing professionalism of the show, which is now aiming for “how did they do that?” rather than “what the fuck just happened?”. The show is never going to be as exceptional in the black-and-white era as it was in Season 2, but episode to episode it's going to continue to be very pleasant to watch.

Back to the Moonbase: also to love, the pattern of black veins on the back of infected people's hands — I think the first time we've had a special effect of a body changing, a very visceral representation of possession. 


Who do you love? 32: The Underwater Menace

A thing to love about The Underwater Menace: It was clearly SO EXPENSIVE. The sets are huge. They actually flood them. There's an enormous cast. There are fish people. They didn't scrimp and save on the money for this one, that's for sure. As with The Ark you can perhaps ask if this was the right script to spend all this money on. But look at how huge the marketplace is!

Who do you love? 31: The Highlanders

A thing to love about The Highlanders: Polly's best scenes, as she merrily passes the Bechdel test and outsmarts the English in a much saucier way than we're used to. Also, “I should like a hat like that”, the best catchphrase until “Are you my mummy?”. Admittedly, the competition is “Affirmative, Master” and “Excellent”, so maybe this isn't saying much.

And it's farewell, or rather fare-THEE-well, to the historicals. Even at their worst, they tended to be good, offering writers a chance to relax a bit about the setting and really get their teeth into the characters and plot. Only one, The Reign of Terror, is actually bad; all of the others are packed with moments of delight and humanity. The Highlanders is probably the second worst. It's remarkably inconsistent in tone, not sure how to play the scene where the Doctor and Jamie's party are all one kick away from actually being hanged. However, once it settles down and decides it's a romp, it's plenty entertaining. This was perhaps the death of the historicals: as the writers got more self-conscious, the middles of the historicals continued to be easy to write, but it got harder and harder to get the Doctor and his friends into the story and to get them out — and not just in a psychic paper way, in a way that caused as little damage as possible to the known facts. Fortunately, David Whitaker has just shown that the base under siege can be used to tell great stories as well; hopefully they'll use this knowledge sparingly.