Monday, January 24, 2011

Gambling with goodness

I was recently pointed to these essays on reducing suffering. They’re interesting, and most aren’t long. One frightening theme I hadn’t come across before in this context relates to the possibility of scientists creating infinitely many universes in a lab. These would contain an infinite amount of suffering, so the expected reduction in suffering from making this procedure even slightly less likely would be infinite; if we can make it less likely, the idea goes, then that’s what we should do instead of faffing around with finite effects on the universe. I’m not going to talk about the view that an outcome containing infinitely many suffering-supporting universes would be infinitely bad. What I’m interested in is the role played by expected consequences here. (‘Expected’ is used in the technical sense: an action’s expected resultant goodness is the sum of the products of the goodness of each possible outcome and the probability of that outcome.)
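
For concreteness, here’s a minimal sketch of that definition in Python; the gamble and its numbers are invented purely for illustration.

```python
def expected_goodness(outcomes):
    """outcomes: (probability, goodness) pairs; the probabilities should sum to 1."""
    return sum(p * g for p, g in outcomes)

# An invented gamble: 50% chance of an outcome worth 10, 50% chance of one worth 0.
print(expected_goodness([(0.5, 10.0), (0.5, 0.0)]))  # 5.0
```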

Within consequentialism, there’s room for at least two natural views about how good an action is. One says it’s as good as its expected consequences, and another says it’s as good as its actual consequences. I’ve generally sided with the latter view, but I also thought the dispute had no practical implications, since if you’re trying to be as good as possible you’ll maximise expected goodness either way. I no longer think that’s right, and the practical implications may be quite serious.

On Deal or No Deal the banker’s offer is typically lower than the expected amount in the contestant’s box, which will be the total amount of money left in the unopened boxes divided by the number of unopened boxes. I’ve heard people say the contestants are therefore always irrational to take the offer, but that’s not true. The expected consequences may be better for them if they deal, because of diminishing returns: if someone gave me £10,000, that would probably improve my life less than twice as much as if they gave me £5,000. Benefit needn’t be proportional to prize money, so expected benefit needn’t be proportional to expected prize money; there’s a sketch of this below. But even if the expected benefits of dealing are less than those of not dealing, it might still be rationally permissible to deal, because that way you’re sure of getting something. If you played many times, the expectation-maximising strategy would be very unlikely to do worse than the dealing strategy, but if you’re only playing once there’s a significant chance you’re better off dealing. So why not deal? (I don’t know if there’s a rule against a season’s contestants agreeing to maximise their expected prize each time and share their winnings, but there should be.)
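
A sketch of the diminishing-returns point, with invented numbers: two boxes left, a banker’s offer below the expected prize, and a concave utility function standing in for benefit (the square root is just one illustrative choice, not a claim about anyone’s actual preferences).

```python
from math import sqrt

# Invented endgame: two unopened boxes containing £1 and £50,000, so the
# expected amount in the contestant's box is £25,000.50; the banker offers less.
boxes = [1, 50_000]
offer = 20_000

expected_prize = sum(boxes) / len(boxes)  # 25000.5

def utility(pounds):
    """Diminishing returns: each extra pound improves your life a bit less."""
    return sqrt(pounds)

expected_utility_no_deal = sum(utility(b) for b in boxes) / len(boxes)  # ~112.3
utility_of_dealing = utility(offer)                                     # ~141.4

# Refusing the deal maximises expected prize money,
# but dealing maximises expected benefit.
print(expected_prize, expected_utility_no_deal, utility_of_dealing)
```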

Returning to consequentialism: if you’ve got two probability/goodness distributions of much the same shape, you should probably go with maximising expectation. Since everyday choices are repeated many times, the expectation-maximising strategy will generally have a distribution shaped much like the play-it-safe strategy’s but with a higher expectation, so you should probably maximise expectation. In unusual situations with very differently shaped distributions, though, it’s not at all obvious, as the simulation below suggests.
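
A minimal simulation sketch of the repetition point, with invented payoffs: a safe option worth 1 per play against a risky option worth 0 or 2.5 with equal odds (expectation 1.25). Played once, the risky option beats the safe one only about half the time; played a hundred times, it almost always does.

```python
import random

random.seed(0)  # deterministic, for illustration

def risky_total(plays):
    """Total payoff from taking the risky option (0 or 2.5, equal odds) every play."""
    return sum(2.5 if random.random() < 0.5 else 0.0 for _ in range(plays))

trials = 10_000

# How often does the risky strategy beat the safe strategy (worth 1 per play)?
one_shot = sum(risky_total(1) > 1 for _ in range(trials)) / trials
long_run = sum(risky_total(100) > 100 for _ in range(trials)) / trials

print(one_shot)  # ~0.5  -- a coin flip in a single play
print(long_run)  # ~0.97 -- nearly certain over a hundred plays
```

Here are three thought experiments.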

Suppose you’re in a situation where you have two options. On option A two people will die. On option B there’s a 75% chance one person will die and a 25% chance four will. You’re trying to prevent deaths. What should you do? Option B has 0.25 fewer expected deaths, but maybe you should go with A, because B has a significant chance of turning out much worse.

Now suppose option A kills one person, and option B will probably kill nobody, but has a 1% chance of killing 110 people. B has more expected deaths, but maybe you should go with B anyway because it’s 99% certain to be fine and otherwise the poor victim of option A will certainly die.
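
The arithmetic behind both finite cases, as a quick sketch (the figures are just those given above):

```python
def expected_deaths(outcomes):
    """outcomes: (probability, deaths) pairs for one option."""
    return sum(p * d for p, d in outcomes)

# First case: A kills two for certain; B kills one at 75%, four at 25%.
print(expected_deaths([(1.0, 2)]))               # 2.0
print(expected_deaths([(0.75, 1), (0.25, 4)]))   # 1.75 -- B better in expectation

# Second case: A kills one for certain; B kills nobody at 99%, 110 people at 1%.
print(expected_deaths([(1.0, 1)]))               # 1.0
print(expected_deaths([(0.99, 0), (0.01, 110)])) # 1.1  -- A better in expectation
```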

Now suppose that we could make the infinite-suffering scenario 0.02% less likely by sending everyone to a gruelling re-education camp for seven years, between the ages of 16 and 23. Should we do that? Or should we take our chances?

If actions are as good as their expected consequences, then it isn’t up to us what we do if we’re trying to do good. If actions are as good as their actual consequences though, it is up to us. We can risk doing bad things, and if the gambles don’t come off then our actions will indeed be bad, but if they do then our actions will be better than if we’d maximised expectation. So will the world. Now maybe our treatment of risk isn’t up to us, and there’s only one moral attitude to take. But if actual-consequentialism is true, then it is up to us, and that matters.

3 comments:

  1. Thanks for pointing to my collection of essays, Michael.

    I agree with the expected-consequences view even in one-time cases. The actual-consequences view seems to me absurd. For instance, suppose you were presented with a gamble that had a 99% chance of causing a trillion people to be tortured for a billion years each and a 1% chance of saving someone from torture for 1 day. You take the risk. As it turns out, you save the guy from a day of torture. Do we want to say that your action was a 'good' one in this case? Since the purpose of judgments is to reinforce future behavior, and since we decidedly don't want people to take such gambles in the future, I say, no, it was a reckless, immoral decision. In other words, we're trying to reinforce decision procedures here, not good luck.

    By the way, Nick Bostrom's excellent "Infinite Ethics" raises the general question for consequentialists of how to make decisions when almost any action has expected infinite impacts.

    Best wishes with your blog!

  2. Thanks for your comment. I agree that the actual-consequences view conflicts with judgements about recklessness and moral luck, but I think it's probably more defensible than you give it credit for.

    I definitely think the lesson an actualist should draw from Parfit's mineshaft argument is to think not about the right/wrong distinction but about the better/worse spectrum, but that's not the issue here.

    It's coherent for an actualist to think that recklessness is itself a vice because reckless people tend to do worse things, but not every reckless act is bad. You can try to influence future behaviour by criticising recklessness. But I don't agree that the purpose of moral judgements is to reduce future suffering any more than that's the purpose of zoological judgements. Maybe we should call things wrong or reptiles iff doing so optimises the consequences, but that doesn't mean that's the criterion for something being wrong or a reptile.

    This said, the post wasn't actually defending actualism (though this comment is a bit). The main point was that the debate has practical implications, and I'd previously thought it hadn't. Thanks for pointing me to Nick's thing; I haven't read it yet but it looks interesting.

  3. The trouble with real actions, apart from trivial ones like box-opening, is that you never actually know the probability before you act. You only know that you're 'taking a risk', or 'taking a big risk'. When the risk pays off, you might be inclined to think that the probability of success was greater than you had thought. So in the case of the trillion-deaths guy, I would probably congratulate him on his nerve.
