Friday, June 15, 2018

Harsanyi vs The World

Utilitarians and Egalitarians

One of the things impartial consequentialists get to argue with each other over is how much equality matters. If John has one util (utils are units of utility or wellbeing) and Harriet has four, is that better or worse than if they both have two? Or is it just that the first setup is better than the second for Harriet and the second is better than the first for John, and that's all there is to it? I'm inclined towards this last view, but impartial consequentialists - utilitarians and the like - tend to want to say that setups can be better or worse overall, because they want to go on to say that the overall goodness of the setup an action brings about, or will probably bring about, or something like that, determines whether the action was right or wrong. You can't very well say that your action was right for John and wrong for Harriet, and that's all there is to it. Goodness and badness may plausibly be relative to individuals, but it's less plausible to say that about rightness and wrongness.

So, let's suppose you've decided you want to be an impartial consequentialist. You know how good a setup is for each person, and you want to work out how good it is overall. What are your options?

  • You can add up everyone's utility. That makes you an aggregating utilitarian.
  • You can take the mean of everyone's utility. That makes you an averaging utilitarian.
  • You can say that sometimes a setup with less total/average utility is better because it's more equal. That makes you an egalitarian.
  • You can say that sometimes a setup with less total/average utility is better because it's less equal. That makes you an elitist, or something - it's not a very popular view and we'll ignore it in what follows.

For a fixed population, the average will be higher whenever the aggregate is higher and vice versa, so it doesn't matter which kind of utilitarian you are. The population is of course sometimes affected by our actions, but I won't have anything to say about that today. I'm thinking about the difference between being a utilitarian and being an egalitarian. Mostly I'd thought that both were pretty live options, and while there were considerations that might push you one way or the other, ultimately you would have to decide for yourself what kind of impartial consequentialist you were going to be, assuming you were going to be an impartial consequentialist at all. But a few days ago someone on Twitter (@genericfilter - thanks) drew my attention to a paper by John Harsanyi which threatens to settle the question for us, in favour of the utilitarian. That was disquieting for me, since my sympathies are on the other side. But Harsanyi's got a theorem. You can't argue with a theorem, can you? Well, of course you can argue with a theorem: you just argue with its premises. So I'm going to see how an egalitarian might go about arguing with Harsanyi's theorem.

Harsanyi's Theorem

I'm actually reliably informed that Harsanyi has two theorems that utilitarians use to support their case, but I'm only talking about one. It's his theorem that says, on a few assumptions that are plausible for people who think von Neumann-Morgenstern decision theory is the boss of them, that the only function of individual utilities that can give you the overall utility is a weighted sum (or average; we're taking the population to be fixed here). Since we're already impartial consequentialists, that leaves us with a straight sum (or average). So the egalitarians lose their debate with the utilitarians, unless they reject one or more of the assumptions.

Now, I think I understood the paper OK. Well enough to talk about it here, but not well enough to explain the details of the theorem to you. Hopefully this doesn't matter. Harsanyi's work may have been new to me but it isn't all that obscure, and you may already be familiar with it. In case you're not, I'll also try to tell you what's relevant to the argument when it comes up, but if you want a proper explanation of the details you'll really have to go elsewhere for them.

Harsanyi proves five theorems. The first three seem kind of like scene-setting, with the real action coming in theorems IV and V. Here they are (Harsanyi 1955: 313-4):

  • Theorem IV: W [the social welfare function] is a homogeneous function of the first order of U₁, U₂, …, Uₙ [where these are the utilities of each of the n individuals in the setup].
  • Theorem V: W is a weighted sum of the individual utilities, of the form W = ∑ᵢ aᵢ·Uᵢ, where aᵢ stands for the value that W takes when Uᵢ = 1 and Uⱼ = 0 for all j ≠ i.

Some clarifications before we go on. A homogeneous function of the first order means that if you multiply all the Uᵢ by a constant k, that multiplies W by k as well. The utility functions Uᵢ and the social welfare function W aren't just functions on categorical states of the world; they're functions on probability distributions over such states. He shows in theorem III that you can treat W as a function of the individual utility functions, and you can also treat it as a function on probability distributions over distributions of utility.
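
Here's a quick sketch of what that homogeneity condition amounts to, with everything in it a toy choice of mine rather than anything in Harsanyi: a plain weighted-sum W passes the test, while an egalitarian-flavoured W that docks the total for variance fails it.

```python
from statistics import pvariance

def w_weighted_sum(utilities):
    # The form theorem V licenses: W = a1*U1 + ... + an*Un (equal weights here).
    return sum(utilities)

def w_variance_penalised(utilities):
    # A toy egalitarian W (my invention, not Harsanyi's): total utility
    # minus the variance of the distribution, so equal setups score better.
    return sum(utilities) - pvariance(utilities)

us = [1.0, 4.0]   # John on one util, Harriet on four
k = 3.0

# First-order homogeneity: W(k*U1, ..., k*Un) should equal k*W(U1, ..., Un).
for w in (w_weighted_sum, w_variance_penalised):
    print(w.__name__, w([k * u for u in us]) == k * w(us))
    # prints True for the weighted sum, False for the variance-penalised W
```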

Theorem V basically says that the utilitarians are right. Harsanyi doesn't seem to have assumed anything as radical as that, and yet that's what he's proved. It's as if he's pulled a rabbit out of a hat. So the question is: where does the rabbit go into the hat? As I say, theorems I-III seemed like scene-setting, so let's start by looking at theorem IV.

Diminishing Returns

You might think, as I did, that egalitarians should be pretty unhappy with theorem IV. To get an idea of why, think about diminishing returns. If you give someone with no money a million pounds, that will make a big difference. Give them another million, and it makes less of a difference. The third million makes less difference again. Now, the utilitarians aren't saying that John having two million and Harriet having nothing is just as good as both having one million. You build that into the utility function by saying that money has diminishing returns of utility. But here's the thing: the returns of giving John or Harriet money should diminish faster from the point of view of the egalitarian social planner than they do from the point of view of the person getting the money. That's because boosting an individual's utility becomes less important to the planner as the person gets better off.

You can get a feel for why by thinking about different people's agendas. John's agenda remains John's agenda, however much of it he achieves. He achieves the first item, and moves down the list to the second. That's just what it is for it to be John's agenda. But the planner's agenda is different. If John's agenda is totally unmet, the planner might prioritize the first thing on John's list. But if John is getting on well with his agenda, the planner can ignore him a bit and start prioritizing the agendas of people who aren't doing so well. John's fine now. And however you take into account the diminishing returns of John's agenda for John, they should diminish faster for the planner.
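
Here's the agenda point with toy numbers. Both concave functions are illustrative choices of mine, nothing from Harsanyi: the person's returns on money diminish, and the planner's returns on that person's utility diminish again on top.

```python
import math

def personal_utility(money_millions):
    # Diminishing returns of money for the person; sqrt is a toy choice.
    return math.sqrt(money_millions)

def planner_value(utility):
    # The egalitarian thought: the planner applies a *further* concave
    # transform to the person's utility (again a toy choice).
    return math.sqrt(utility)

for lo, hi in [(0, 1), (1, 2), (2, 3)]:
    gain_for_john = personal_utility(hi) - personal_utility(lo)
    gain_for_planner = (planner_value(personal_utility(hi))
                        - planner_value(personal_utility(lo)))
    print(f"million #{hi}: John gains {gain_for_john:.3f}, "
          f"planner gains {gain_for_planner:.3f}")
# John's gains run 1.000, 0.414, 0.318; the planner's run 1.000, 0.189,
# 0.127 - a steeper drop, which is the faster diminishing in question.
```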

For me, this idea had a lot of instinctive pull, and I expect it would for other egalitarians. And this idea is the very thing theorem IV contradicts. Theorem IV says that boosting John's utility from 0 to 1 has just as much impact on W as boosting it from 10 to 11, when everyone else's utility is 0. You have to do a little more to generalize it to hold when other people's utilities aren't 0, which is what theorem V does, but this consequence of theorem IV is bad enough. The rabbit is already in the hat at this point. Or so it seems.

It turns out that denying theorem IV, or at least accepting the egalitarian principle that conflicts with it, gives a result which is even worse, or at least to me it seems even worse. It's well established that diminishing returns can manifest themselves as risk aversion. (Though it's possible not all risk aversion can be explained by diminishing returns.) A second million is worth less to you than a first, and this explains why you wouldn't risk the first for a 50% chance of winning a second. Maybe you're already risk-averse enough that you wouldn't do that anyway, but the point is that diminishing returns on a thing X generate risk-averse behaviour with respect to X. So if Harriet's utility has diminishing returns for the planner, this means that the planner will be more risk-averse with Harriet's utility than she is, other things being equal. That result is very odd. What it means is that in two situations where the only difference is whether Harriet takes a certain gamble or not, one will be better for Harriet but the other will be better overall, even though they're both equally good for everyone else. This is a weird result indeed, and Harsanyi makes sure his assumptions rule it out. It seems that it should be possible to be an egalitarian without accepting this weird result.
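
Here's the oddity in miniature. The concave planner function and the particular numbers are mine, picked so that the verdicts come apart: Harriet, maximizing expected utility, takes a 50-50 shot at one or four utils over a guaranteed 2.3, but the planner prefers that she didn't.

```python
import math

def planner_value(u):
    # Concave planner valuation of an individual's utility; sqrt again.
    return math.sqrt(u)

gamble = [1.0, 4.0]   # a 50-50 shot at one or four utils for Harriet
sure_thing = 2.3      # a guaranteed 2.3 utils, chosen to split the verdicts

# Harriet maximises expected utility, so she compares utils directly:
print(sum(gamble) / 2 > sure_thing)   # True: 2.5 > 2.3, she takes the gamble

# The planner values her utility concavely, so the same gamble looks worse:
planner_gamble = sum(planner_value(u) for u in gamble) / 2   # 1.5
print(planner_gamble > planner_value(sure_thing))   # False: 1.5 < ~1.517
```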

The Repugnant Conclusion

So if we're granting theorem IV, we could look for the rabbit going in somewhere in the proof of theorem V. I couldn't really see anything much to work with there, though, so I thought I'd try a different tack. To get an idea of the new strategy, think about Derek Parfit's (1984) repugnant conclusion argument against aggregate utilitarianism (and against lots of other positions). The idea of that argument is to show that aggregate utilitarians are committed to saying that for any setup with lots of people all of whom have great lives, there is a better setup in which everyone's life is only just worth living. (The second setup manages to be better because there are so many more people in it.)

Now, you could try setting up the objection by just taking some postulates that aggregate utilitarians are committed to and then formally proving the result. That's more or less how Harsanyi proceeds with his argument. But what Parfit does has a different flavour. He takes his opponent on a kind of forced march through a series of setups, and the opponent has to agree that each is no worse than the last, and this gets you from the nice world to the repugnant world. Here's a version of how it can go. Start with a world A with lots of happy people. Change it to world B by adding in some people whose lives aren't good but are still worth living. That can't make the world worse, because it's just as good for the initial people and the new people's lives are still better than nothing. Then take a third world C which has the same total utility as B but it's equally distributed. That shouldn't make the world worse either. C is like A but with more people who are less happy. Then you repeat the process until you get the repugnant world R. (If your opponent says R is just as good as A, you construct R' by making everyone a tiny bit better off. R' is still repugnant, and if it's not repugnant enough for you then just continue the process again from R' until you find a world that is.)
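
Here's a crude numerical version of the march (my numbers, and a much simplified rendering of Parfit): keep adding lives barely worth living and then equalizing, and the total never drops while the average slides towards one.

```python
# World A: ten people with great lives. Each pass adds as many people
# again at level 1.0 (barely worth living), which raises total utility,
# and then equalises, which keeps the total fixed. The 1.1 cutoff for
# "only just worth living" is arbitrary.
population, level = 10, 100.0

while level > 1.1:
    total = population * level + population * 1.0   # step to world B
    population *= 2
    level = total / population                      # step to world C
    print(f"{population} people at level {level:.2f}")
# Ten passes take us from 10 people at 100 to 10240 people at ~1.10.
```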

What I'm thinking is that if Harsanyi's proofs are valid, which they are, then you should be able to get a similar forced-march argument that embodies the proof, but where you move from a world where lots of people are happy to a repugnant world where one person has all the utility and everyone else has none. This argument should be more amenable to philosophical analysis, and once we've worked out what's wrong with it, we should be able to return to Harsanyi's proof and say where the rabbit goes into the hat. I'm not Parfit, and I haven't come up with anything as good as the repugnant conclusion argument. But I have come up with something.

Harsanyi Simplified

What we want to show, without loss of generality, is that one util for John and four for Harriet is better than two utils for each. One and four are placeholders here; what's important is that each of them would go for a 50-50 shot between one and four rather than a guaranteed two. Here's the forced march (there's a numerical sketch of it after the list):

  • Independent 50-50 shots between one and four for each is better than a guaranteed two for each.
  • A 50-50 shot between one for John and four for Harriet or four for John and one for Harriet is just as good as independent 50-50 shots between one and four for each.
  • One for John and four for Harriet is as good as four for John and one for Harriet.
  • Since these two outcomes are equally good, and a 50-50 shot between them is better than two for each, both are better than two for each.
  • So one for John and four for Harriet is better than two for each.
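
Here's that march with the bookkeeping done by machine. Representing lotteries as probability-outcome pairs is just my way of setting it out, not Harsanyi's:

```python
# Lotteries over (john, harriet) utility pairs: lists of (probability, outcome).
sure_two_each  = [(1.0, (2, 2))]
independent    = [(0.25, (1, 1)), (0.25, (1, 4)), (0.25, (4, 1)), (0.25, (4, 4))]
anticorrelated = [(0.5, (1, 4)), (0.5, (4, 1))]

def expected_utility(lottery, person):
    # person 0 is John, person 1 is Harriet
    return sum(p * outcome[person] for p, outcome in lottery)

# Step 1: under the independent gambles each person expects 2.5 utils,
# against a guaranteed 2 - so each would take the gamble.
print([expected_utility(independent, i) for i in (0, 1)])     # [2.5, 2.5]

# Step 2: the anticorrelated lottery gives each person exactly the same
# 50-50 prospect between 1 and 4, so neither has grounds to distinguish them.
print([expected_utility(anticorrelated, i) for i in (0, 1)])  # [2.5, 2.5]

# Steps 3-5 are pure dominance reasoning: (1, 4) and (4, 1) are equally
# good by impartiality, and a 50-50 mix of them beats (2, 2), so each
# must beat (2, 2) on its own.
```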

I don't think this tracks his theorem exactly, but as I understand the important moves, if the egalitarian can explain what's wrong with this argument, then they can explain what's wrong with Harsanyi's. And if they can't, they can't. A couple of things are worth pointing out, before we think about what might be wrong with it.

First, in a way it's more general than Harsanyi's. You don't have to assume that people should be expected utility maximizers. The point of the argument is that whatever risk profiles self-interested people should be most willing to accept for themselves, the corresponding distributions are the best ones for everyone overall.

Second, and very relatedly, it more or less shows that the best setup (at least for a fixed population) is the one that someone self-interested would choose behind the veil of ignorance to be in. Harsanyi was a fan of that idea, I'm told, but here it drops out of the reasoning behind his theorem; you don't need it as a premise to prove the theorem.

So, where should the egalitarian resist the argument? To me, given what we saw earlier in the section on diminishing returns, it seems they would have to resist the idea that if a 50-50 shot at outcomes X or Y is better than outcome Z, then at least one of X and Y must be better than Z. (And if X and Y are equally good, both must be better than Z.) This idea is very intuitive, at least considered in the abstract, and if it's wrong then it seems a lot of decision theory won't really be applicable to impartial moral decision-making. It's not just expectation-maximizing that will have to go either, because the move in question is pretty much just dominance reasoning. (If Z is better than X and better than Y, then it's better than a 50-50 shot between X and Y.) When you give up dominance reasoning, I'm not sure how much of decision theory is left.
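
To see concretely how an egalitarian might end up denying that principle, here's one construction of my own: an 'ex-ante' W that first takes each person's expected utility and then applies an egalitarian function to the expectations, with min standing in as an extreme such function. The 50-50 mixture of X and Y comes out better than Z even though X and Y are each worse than Z.

```python
X, Y, Z = (1, 4), (4, 1), (2, 2)   # (john, harriet) utility pairs

def w_ex_ante(lottery):
    # lottery: list of (probability, outcome) pairs. Take each person's
    # *expected* utility first, then apply the egalitarian function (min,
    # an extreme Rawlsian choice, used just to make the numbers vivid).
    e_john    = sum(p * o[0] for p, o in lottery)
    e_harriet = sum(p * o[1] for p, o in lottery)
    return min(e_john, e_harriet)

print(w_ex_ante([(0.5, X), (0.5, Y)]))   # 2.5: the mixture beats Z...
print(w_ex_ante([(1.0, X)]))             # 1.0: ...but X alone is worse than Z,
print(w_ex_ante([(1.0, Y)]))             # 1.0: ...and so is Y.
print(w_ex_ante([(1.0, Z)]))             # 2.0
```

The cost of this move is exactly the one flagged above: evaluating lotteries non-linearly in the probabilities is what breaks the dominance step, and with it much of standard decision theory.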

Longtime readers may remember that I once said I preferred versions of consequentialism that assessed actions by their actual consequences rather than by their expected consequences. Harsanyi does set things up in a way that's much more friendly to expected-consequences versions, but I don't hold out much hope for resisting Harsanyi's argument along these lines. One problem I had with expected consequences was that it's hard to find a specific probability distribution to use which doesn't undermine the motivation for using expected consequences in the first place. (People will be unsure of the probabilities and of their own credences, just as they're unsure of the consequences of their actions.) But Harsanyi doesn't exploit that at all. Another problem I had, and the one I talked more about in the post, was that expectation consequentialism gives a norm about the proper attitude towards risk, and I don't see that there is a plausible source for such a norm. It's just up to us to take the risks and put up with the consequences. Harsanyi does exploit expectation-maximizing to get the strong utilitarian result, but you just need simple dominance reasoning to get the weaker result that inequalities are justified in a setup if people would choose that setup behind a veil of ignorance (with an equal chance of occupying each position).

All of this, even the weaker result, seems to me to be huge if true. I've long been a fellow traveller of impartial consequentialism, and the main reason I keep using the term 'impartial consequentialism' rather than 'utilitarianism' is that I've got egalitarian sympathies. Readers of my recent posts on Hipparchia will have already noticed me losing my enthusiasm for the project. This Harsanyi stuff may force me to give up on it altogether.

References

  • Arrhenius, G., Ryberg, J. and Tännsjö, T. 2017: 'The Repugnant Conclusion', The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/spr2017/entries/repugnant-conclusion/
  • Harsanyi, J. 1955: 'Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility', Journal of Political Economy 63(4): 309-321
  • Parfit, D. 1984: Reasons and Persons (Oxford University Press)

2 comments:

  1. I find Harsanyi's Theorem logically pretty compelling. I think the psychological difficulty with accepting it lies largely in the very strict definitions that utility obeys, relative to all the things we think might contribute to utility, e.g. money, material goods etc.

    This is not just a question of diminishing marginal returns, but also direct equality comparison. Imagine you and I have the same amount of everything, and are equally happy. Someone now gives you £10000 and gives me nothing. You would probably be somewhat happier, but I may also be less happy, because my position relative to my peer has declined. However, if we both have 10 utils, and someone gives you 1 more util and gives me nothing (I stay on 10 utils), then *by definition* I am not less happy.

    The first of those scenarios applies to almost anything we could imagine doing to affect utility, whereas the second is closer to the idea of the theorem. So it's very difficult to viscerally appreciate what the theorem is telling us. Perhaps an alternative thought experiment would be to imagine what is best for 'society' where that society is composed of groups of individuals who are unaware of each other's existence and thus whose utility (plausibly) is not affected by comparison with each other. I personally do not lie awake at night consumed by envy at the living standards on alien worlds.

  2. Thanks for commenting! I don't believe for a moment that if someone gave me £10,000 you'd be anything other than happy for me. But you're right that positional goods and envy muddy our intuitions about equality, independently of the way diminishing returns on money etc do.

    Speaking of planets, here's a thought experiment. Suppose there are three planets that don't know about each other: planet Great, planet Fine, and planet Bad. They have equal populations, and standards of living on each planet corresponding to their names. I'm a travel guide writer who's going to be sent to live on one of them, and I'd just about prefer to toss a coin to decide whether I'm sent to Great or Bad over being certain of going to Fine.

    Assuming my attitude towards personal risk is the sensible one, it should follow from Harsanyi's argument that the status quo is better than if everyone on all three planets had the living standards they have on planet Fine. (And the galactic administration shouldn't equalize their living standards at the Fine level even if they could.) This result isn't obviously silly, but it does seem very strange to me if it's forced upon us. Personal risk and interpersonal inequality are different things, and it's odd that we can't say different things about them. But a theorem's a theorem, I guess.
