
Sunday, May 21, 2017

Bad Deal Or No Deal

Consider these two statements:
  • Eating no bread is better than eating any mouldy bread.
  • Any bread at least as good to eat as no bread is not mouldy.


To avoid ambiguity, let’s put them into Mickey Mouse first order logic:
  • (x)[[Mouldy(x) & Bread(x)] → Better(eatingNoBread,eating(x))]
  • (x)[[Bread(x) & ¬Better(eatingNoBread,eating(x))] → ¬Mouldy(x)]


The two formulations are equivalent: they are both false iff there is some mouldy bread the eating of which is no worse than eating nothing. But there’s a difference of emphasis. The first formulation is something you might say if you were taking an uncompromising line on bread: if mouldy bread is all there is, you’d rather have nothing. The second formulation is something you might say if you were taking a compromising line on mouldiness: it can’t be mouldy or you wouldn’t recommend eating it at all. But a difference in emphasis is not a difference in commitment. Asserting either commits you to the same things.
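
To make the equivalence concrete, here is a quick brute-force check (just a sketch, with every element of the toy domain assumed to be bread): over every way of assigning “mouldy” and “better than eating nothing” to three loaves, the two formulations come out with the same truth value.

```python
from itertools import product

# A brute-force sketch over a toy domain of three loaves, all of them bread.
# Each loaf is a pair (mouldy, better), where `better` abbreviates
# Better(eatingNoBread, eating(x)).
def first_formulation(loaves):
    # (x)[Mouldy(x) -> Better(eatingNoBread, eating(x))]
    return all(better for (mouldy, better) in loaves if mouldy)

def second_formulation(loaves):
    # (x)[~Better(eatingNoBread, eating(x)) -> ~Mouldy(x)]
    return all(not mouldy for (mouldy, better) in loaves if not better)

# Every possible model over three loaves: the formulations never disagree.
models = product(product([True, False], repeat=2), repeat=3)
print(all(first_formulation(m) == second_formulation(m) for m in models))  # True
```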


Now, the government’s line on the Brexit negotiations is apparently that no deal is better than a bad deal. People take this as meaning that they’re taking an uncompromising line on deals, and they take it to embody an attitude of cavalier intransigence. But consider these two statements:
  • Reaching no deal is better than reaching any bad deal.
  • Any deal at least as good as no deal is not bad.


And in MMFOL:
  • (x)[[Bad(x) & Deal(x)] → Better(reachingNoDeal,reaching(x))]
  • (x)[[Deal(x) & ¬Better(reachingNoDeal,reaching(x))] → ¬Bad(x)]


The second formulation seems to embody an attitude of roundheaded compromise. Don’t criticize this underwhelming deal, they say, because it’s better than nothing. But the two formulations are formulations of the same commitment. In the mouldy bread case the speaker enjoys a certain amount of latitude because of whatever vagueness and subjectivity there is in the word “mouldy”. In the Brexit case the speaker enjoys latitude because of whatever vagueness and subjectivity there is in the word “bad”.


Now with the mouldy bread case, the speaker is at least committing themselves to something. That’s because some bread is determinately mouldy. Suppose the only food available is determinately mouldy bread, and you say that eating no bread is better than eating any mouldy bread. Half the party eats the bread, against your advice, and half the party goes hungry. One half has a better time and you are open to praise or criticism as a result.


Now consider the Brexit case. While we don’t have a whole party to divide up into people taking your advice and people not taking it, we can still compare the actual world with our dimly assigned probability distributions over the space of counterfactual situations. But the government can always evade criticism, whatever consensus history arrives at on the relative merits of no deal and the available deals. Obviously the government are also the people taking the decision, unless they lose the election, and so they could be open to criticism for taking one option if history judges that other available options would have turned out better. There’s no escaping that. But the particular claim that no deal is better than a bad deal is entirely hedged.


Take any deal you like. If we decide that it’s worse than no deal, the government says it’s bad and takes the credit for being right. If we decide that it’s better than no deal, the government can just say that it wasn’t a bad deal. Similarly, nobody needs to praise the government however things turn out either. If history judges that the available deal was worse than no deal, an opponent can say that of course there are some deals worse than no deal, but there are plenty of bad deals better than no deal too, and if we’d been able to get one of those then the government would have turned it down and been wrong to do so. The word ‘bad’ and the associated concept are flexible enough that nobody ever needs to admit they were wrong about whether no deal is better than a bad deal. The government shouldn’t be criticized for taking a bad line; they should be criticized for empty, commitment-free rhetoric.

Wednesday, May 8, 2013

Risky business


I don’t know how many of the millions of people who’ve bought Daniel Kahneman’s Thinking, Fast and Slow have read it, but I have and I thought it was very interesting. One of the things he talks about in chapters 25-26 is risk aversion. Lots of people won’t take a bet to either gain $200 or lose $100 on a coin-toss, and that seems to mean they’re risk averse. They stand to gain more than they stand to lose, and the chances are equal, but they won’t take that chance. Regular readers may remember risk aversion coming up once before when I was talking about Deal or No Deal.

Kahneman says that for a long time economists used to think that (or at least idealize that) people were risk averse when it came to money, but not when it came to utility. Your first million makes a bigger difference to you than your second, and maybe it even makes a bigger difference than your second and third put together. In view of that, maybe your last $100 makes more of a difference than your next $200. If that’s right, you’re not rejecting the bet by being risk averse; you’ve just got a proper appreciation of the diminishing marginal utility of money.

The problem with this line of thought is that while it can rationalize refusals of bets which seem sensible instances of monetary risk aversion, it can only do so by attributing to people utility functions which also rationalize insane-seeming pieces of (monetary) risk aversion. Matthew Rabin showed this in a technical paper, and he and Richard Thaler wrote an entertaining paper about it which references Monty Python’s dead parrot sketch. The idea is that if diminishing marginal value of money is all that is going on, then someone can’t rationally reject one fairly unattractive bet without rejecting another very attractive bet. Their first example is that if someone will always turn down a 50-50 shot at gaining $11 or losing $10, then there’s no amount of money they could stand to win which would induce them to take a 50% risk of losing $100. They have several other examples, including ones which remove the ‘always’ caveat, only demanding that they would still turn down the first bet even if they were quite a bit richer than they are now. The basic idea is that the utility of money has to tail off surprisingly quickly to rationalize rejecting the small bet, and if it tails off too quickly you’ll have to make odd decisions when the stakes are high. They’ve thought of objections, and the reasoning is hard (for me) to argue with.
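
To get a feel for why, here is a small sketch using one particular concave utility function (constant absolute risk aversion, chosen purely for illustration; it is not part of Rabin’s argument, which assumes only diminishing marginal utility). A coefficient just big enough to reject the small gamble at every wealth level already rejects a 50% risk of losing $100 no matter how large the prize.

```python
import math

# A toy sketch, not Rabin's proof: one concrete concave utility function
# (constant absolute risk aversion, an assumed form chosen purely for
# illustration) that turns down the small gamble at every wealth level,
# and as a consequence turns down a 50% risk of losing $100 whatever the prize.
A = 0.01  # hypothetical risk-aversion coefficient

def utility(wealth):
    return -math.exp(-A * wealth)

def accepts(wealth, gain, loss):
    """True if the 50-50 gamble beats standing pat in expected utility."""
    expected = 0.5 * utility(wealth + gain) + 0.5 * utility(wealth - loss)
    return expected > utility(wealth)

# Turns down gain $11 / lose $10 at any wealth level you care to check...
print([accepts(w, 11, 10) for w in (100, 1_000, 50_000)])        # [False, False, False]

# ...and therefore turns down lose $100 / gain G however large G gets.
print([accepts(1_000, g, 100) for g in (1_000, 10**6, 10**9)])   # [False, False, False]
```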

Now, what Thaler and Rabin reckon is going on is loss aversion. The reason you won’t take the $100-$200 bet is that you recoil in horror at the thought of losing $100. There’s plenty of behavioural economics research (I’m told) showing that people can’t stand losing even if they’re pretty chilled about not gaining, and that’s why Thaler, Rabin and Kahneman think that’s what’s going on. Thaler and Rabin say it’s not just loss aversion either, it’s myopic loss aversion. The reason it’s myopic is that you’d take a bunch of $100-$200 bets if you were offered them at the same time, because overall you’d probably win big and almost certainly wouldn’t lose. But if that’s your strategy then you should take the bets when they arise, and in the long run you’ll probably end up on top.
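
Here is some rough arithmetic of my own (it is not in the book) on how safe the non-myopic strategy is: take the win-$200/lose-$100 coin flip over and over and the chance of finishing behind overall shrinks very quickly.

```python
from math import comb

# Back-of-envelope sketch: after n independent 50-50 bets that win $200 or
# lose $100 each, total winnings are 300*wins - 100*n, which is negative
# exactly when wins < n/3.
def prob_down_overall(n):
    return sum(comb(n, k) for k in range(n + 1) if 300 * k - 100 * n < 0) / 2 ** n

for n in (1, 10, 50, 100):
    print(n, prob_down_overall(n))
# The chance of finishing behind is 50% for a single bet, roughly one in six
# after ten bets, and negligible by the time you've taken a hundred.
```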

I agree that people are myopic, and they don’t always see individual decisions as part of a longterm strategy where losses today get offset by the same strategy’s gains tomorrow. I think Thaler and Rabin have missed something when they invoke loss aversion, though. This is because you can set up the “if you reject this bet then you’ve got to reject this attractive bet” argument without doing anything with losses. Suppose I offer people a choice of either $10 or a 50-50 shot at $21. Sure, some people will gamble, but aren’t lots of people going to take the $10? If they haven’t already, some behavioural economists should do that experiment, because if people reject the bet then Rabin’s theorem will kick in just the same as before and lead to crazy consequences. The difference is that this time you can’t explain the difference as recoiling in horror at the prospect of losing $10, because the gamble doesn’t involve losing any money. It just involves not winning some money, and people are relatively OK with that. (Notice that choosing not to gamble also involves not winning some money.) If you object that the non-gamblers want to make sure they get something, then change the set-up (if your budget stretches that far) to either $20 guaranteed or a 50-50 gamble for $10 or $31. It still works, and I bet plenty of people will still take the $20.
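
One way to see why Rabin’s theorem carries over (my own reframing, not from the papers): in terms of final wealth, the winnings-only choice is exactly the same lottery as pocketing the sure $10 and then deciding whether to take a 50-50 lose-$10/gain-$11 bet.

```python
from fractions import Fraction

# A small sketch of the reframing: the "winnings only" choice and the
# loss-framed choice produce identical final-wealth lotteries.
w = 1_000  # hypothetical starting wealth

# Choice as offered: a sure $10 versus a 50-50 shot at $21 (or nothing).
sure_ten   = {w + 10: Fraction(1)}
shot_at_21 = {w: Fraction(1, 2), w + 21: Fraction(1, 2)}

# Reframed: take the $10, then decline or accept a 50-50 lose $10 / gain $11 bet.
decline    = {w + 10: Fraction(1)}
accept     = {w + 10 - 10: Fraction(1, 2), w + 10 + 11: Fraction(1, 2)}

print(sure_ten == decline, shot_at_21 == accept)  # True True
```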

Now, what I think is going on is myopic risk aversion. I don’t see that there’s much wrong with risk aversion in itself. If you could choose either a life containing a million hedons or a 50-50 shot at either a thousand or two million, I’d understand if you took the million. Only a real daredevil would gamble. And when John Rawls is putting whole-life choices before people in the Original Position, he won’t assume they’re anything less than maximally risk averse. Maybe Rawls has gone too far the other way, but I’d definitely want to see a pretty good argument before believing that the cavalier attitude of the expected-something maximizer is rationally obligatory.

Now, mostly when we make decisions they’re small enough and numerous enough that a fairly cavalier strategy has a very low risk of working out badly overall. Applying original-position thinking to the minor bets offered by the behavioural economists in the pub is confused. It feels like you’ve got a 50% chance of getting the bad outcome, but seen in the context of a more general gambling habit the chances of the bad outcomes are actually very small even with the cavalier strategy, and since its potential payoffs are much higher, you’d have to be very risk averse overall to turn down the gamble. You’re very unlikely to be that risk averse all things considered, although perhaps Rawls was right that it’s cheeky to make assumptions.

So that’s what I think’s going on. Loss aversion is real, but it can’t do the work Thaler and Rabin want, either in straightforward form or myopic form. I think the real culprit is myopic risk aversion. Overall risk aversion is rationally permissible, but myopia isn’t and can result in individual decisions looking more risky than they really are. Unless the stakes are really high, like on Deal or No Deal.

Monday, January 24, 2011

Gambling with goodness

I was recently pointed to these essays on reducing suffering. They’re interesting, and most aren’t long. One frightening theme I hadn’t come across before in this context relates to the possibility of scientists creating infinitely many universes in a lab. These would contain an infinite amount of suffering, so the idea is that the expected reduction in suffering from making this procedure any less likely is infinite; if we can make it less likely, then that’s what we should do instead of faffing around with finite effects on the universe. I’m not going to talk about the view that an outcome containing infinitely many suffering-supporting universes would be infinitely bad. What I’m interested in is the role played by expected consequences here. (‘Expected’ is used in the technical sense: an action’s expected resultant goodness is the sum of the products of the goodness of each possible outcome and the probability of that outcome.)

Within consequentialism, there’s room for at least two natural views about how good an action is. One says it’s as good as its expected consequences, and another says it’s as good as its actual consequences. I’ve generally sided with the latter view, but I also thought the dispute had no practical implications since if you’re trying to be as good as possible then you’ll maximise expected goodness either way. I no longer think that’s right, and the practical implications may be quite serious.

On Deal or No Deal the banker’s offer is typically lower than the expected amount in the contestant’s box, which will be the amount of money left in the unopened boxes divided by the number of unopened boxes. I’ve heard people say the contestants are therefore always irrational to take the offer, but that’s not true. The expected consequences may be better for them if they deal, because of diminishing returns: if someone gave me £10,000 that would probably improve my life less than twice as much as if they gave me £5,000. Benefit needn’t be proportional to prize money so expected benefit needn’t be proportional to expected prize money. But even if the expected benefits of dealing are less than those of not dealing, it might still be rationally permissible to deal because that way you’re sure of getting something. If you played many times the expectation-maximising strategy would be very unlikely to do worse than the dealing strategy, but if you’re only playing once there’s a significant chance you’re better off dealing. So why not deal? (I don’t know if there’s a rule against a season’s contestants agreeing to maximise their expected prize each time and share their winnings, but there should be.)
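
To put toy numbers on that, here is a sketch with invented amounts, using the square root of the prize as one example of a benefit function with diminishing returns: the offer is less than half the expected prize and still wins on expected benefit.

```python
from math import sqrt

# A toy sketch with invented numbers: the offer sits well below the expected
# prize, yet wins on expected benefit once money has diminishing returns.
# Square root stands in for "benefit" purely as an example of a concave function.
remaining_boxes = [0.01, 100, 3_000, 35_000, 250_000]  # hypothetical amounts left in play
offer = 25_000                                          # hypothetical banker's offer

expected_prize = sum(remaining_boxes) / len(remaining_boxes)
expected_benefit_no_deal = sum(sqrt(x) for x in remaining_boxes) / len(remaining_boxes)
expected_benefit_deal = sqrt(offer)

print(round(expected_prize))             # 57620 -- far more than the offer
print(round(expected_benefit_no_deal))   # 150
print(round(expected_benefit_deal))      # 158 -- dealing wins on expected benefit
```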

Returning to consequentialism: if you’ve got two probability/goodness distributions of much the same shape you should probably go with maximising expectation. Since everyday choices are repeated many times, the expectation-maximising strategy will generally have a distribution shaped much like the play-it-safe strategy’s, only with a higher expectation, so you should probably maximise expectation. In unusual situations with very differently shaped distributions, though, it’s not at all obvious. Here are three thought experiments.

Suppose you’re in a situation where you have two options. On option A two people will die. On option B there’s a 75% chance one person will die and a 25% chance four will. You’re trying to prevent deaths. What should you do? Option B has 0.25 fewer expected deaths, but maybe you should go with A, because B has a significant chance of turning out much worse.

Now suppose option A kills one person, and option B will probably kill nobody, but has a 1% chance of killing 110 people. B has more expected deaths, but maybe you should go with B anyway because it’s 99% certain to be fine and otherwise the poor victim of option A will certainly die.
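
For the record, here is the arithmetic behind those two cases, with the probabilities and death counts exactly as given above.

```python
# Expected deaths for option B in each thought experiment.
expected_deaths_B_case1 = 0.75 * 1 + 0.25 * 4    # 1.75, versus a certain 2 on option A
expected_deaths_B_case2 = 0.99 * 0 + 0.01 * 110  # 1.1, versus a certain 1 on option A
print(expected_deaths_B_case1, expected_deaths_B_case2)
```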

Now suppose that we could make the infinite-suffering scenario 0.02% less likely by sending everyone to seven years at a gruelling re-education camp between the ages of 16 and 23. Should we do that? Or should we take our chances?

If actions are as good as their expected consequences, then it isn’t up to us what we do if we’re trying to do good. If actions are as good as their actual consequences though, it is up to us. We can risk doing bad things, and if the gambles don’t come off then our actions will indeed be bad, but if they do then our actions will be better than if we’d maximised expectation. So will the world. Now maybe our treatment of risk isn’t up to us, and there’s only one moral attitude to take. But if actual-consequentialism is true, then it is up to us, and that matters.