Friday, October 12, 2018

Gambling With The Metaphysics Oracle

A Tax On Bullshit

There's a lot of bullshit around. Wherever you look, people are confidently making predictions, often while being paid to do so, and by the time we've been able to test these predictions the people who made them are long gone, sunning themselves on a beach somewhere, spending our money and laughing at us. It's a sorry state of affairs. What can we do about it?

One idea, and it's a good one, is to get people to put their money where their mouths are. Offer people bets on their predictions. If they really believe what they're saying, they shouldn't mind having a little bet on it. If they don't believe it and bet anyway, then at least their bullshit is costing them something. People sometimes call betting on predictions a "tax on bullshit". The person I've heard talking most enthusiastically about this idea is Julia Galef, who I suppose is a pillar of what people refer to as the Rationality Community. Apparently she does it in her everyday life. She's always sounded to me as if she has a fun life, but I think I'm probably too much of an epistemic pessimist to fit in well with the Rationality Community myself.

Regular readers may recall that I sometimes worry about bullshit in philosophy. A lot of the claims philosophers make aren't really very testable at all, and so you can keep up your bullshit for thousands of years without ever being found out. Of course, if something isn't testable then it's not practical to bet on it. But lately I've been wondering how philosophers, particularly metaphysicians, would react if we somehow could offer them bets on their claims. Peter van Inwagen (1990), for example, doesn't think tables exist. When we think we're in the presence of a table, he thinks we're really just in the presence of some simples arranged tablewise. But if we could go to an Oracle to settle the question, would he put his simples arranged moneywise where his simples arranged mouthwise are?

Taking The Bet

The simplest response is for the philosopher to just take the bet, and offer us very favourable odds corresponding to how very sure they are that they're right. Maybe the methods we use for answering metaphysical questions aren't so different in principle from the methods we use for answering any other kind of question, and if we've got what we take to be a good argument then we should be confident in its conclusion. I think that plenty of metaphysicians would have no problem at all taking these bets. They mean what they say quite literally and they are confident that their answers are right.
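To make the betting idea concrete, here's a little Python sketch of my own (nothing from the post itself) of the odds someone should be willing to offer if they mean their stated credence literally: fair odds against being wrong are p / (1 − p).

```python
# A small illustration (mine, not the post's) of credence-to-odds conversion.
# If you really believe a claim with credence p, the fair odds you can offer
# against it are p / (1 - p): you risk that many units to win one.

def fair_odds(credence):
    """Units the bettor should be willing to risk per unit staked against them."""
    return credence / (1 - credence)

# A metaphysician 99% sure there are no tables should happily risk
# 99 units to win 1:
print(round(fair_odds(0.99)))  # 99

# Someone at 50-50 should only accept even odds:
print(fair_odds(0.5))  # 1.0
```

The point of the "tax on bullshit" is just that anyone unwilling to offer odds anywhere near their professed credence is revealing something about how literally they mean it.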

Declining The Bet

A second response is not to take the bet, on the grounds that you don't actually believe the metaphysical positions you've taken. There are at least two ways this could work, one obvious and disreputable and the other less obvious and more respectable. The obvious one is that you're not actually committed to these positions the way you said you were. You were bullshitting, perhaps without properly realizing it, and now you've been found out. The more respectable one is that you are committed to your metaphysical positions, but the mode of commitment you take to be appropriate to metaphysical positions is not belief. It's something else. Helen Beebee (2018) argues for something along these lines, building on Bas van Fraassen's work on the analogous question in the philosophy of science. For Beebee it's largely a response to the phenomenon of expert disagreement in philosophy and concern about the reliability of philosophical methods, while for van Fraassen I understand it's more about the underdetermination of scientific theories by evidence[1]. For Beebee and van Fraassen, this kind of commitment is less about believing the untestable parts of the theory and more about committing oneself to participating in a particular research programme.

Rejecting The Setup

A third response is to reject the bet on the grounds that you reject the authority of the Oracle. How can you reject the authority of an Oracle? The basic idea is that we can't imagine anything the Oracle could say or do to convince us that our position is wrong, but you have to be careful with this sort of dialectical move. You don't want to be the sort of person who responds to the trolley problem[2] (Foot 1967) by saying they'd dive off the trolley and push the workers out of the way, or some other such silliness. This kind of move usually just serves to derail the conversation and prevent us from engaging with the point the thought experiment was trying to make. In the trolley problem you just stipulate that the situation is simple and the outcomes given each action are certain.[3] In the Oracle thought experiment you similarly stipulate that the Oracle is reliable (or infallible) and trusted by all parties. Nonetheless, I think that sometimes it does tell us something useful if we push back on the setup.

The cleaned up certain-outcomes version of the trolley problem isn't very realistic, but it's still something we can imagine. With at least some metaphysical questions, however, the Oracle thought experiment might give rise to what philosophers call imaginative resistance. This is what happens when what you're being asked to imagine somehow doesn't make sense to you, to the point that you struggle to imagine it. It can happen in various ways, including when a story is blatantly inconsistent, or when the moral facts in a story as stipulated conflict with the moral judgements we're inclined to make ourselves when given the non-moral facts. I want to suggest that this imaginative resistance is an indication that even if we take for granted that the Oracle is reliable and that we trust it, this might not be enough. We might still disagree with it, for reasons other than our not trusting it.

I can think of a couple of kinds of case where this situation might arise. Both embody a kind of Carnapian attitude towards philosophical questions. First, suppose we're asking the Oracle about whether there are tables, and it says that there are. Van Inwagen could respond in a couple of ways:

  • "Fair enough; that's a weird and oddly cluttered world, and there was no way for me to find out that there were tables, but I guess you're the Oracle here."
  • "Look, if you want to describe the world as having tables in it, that's up to you. I'm going to keep describing it as not having tables. We're both right by the lights of our own description schemes, and choosing between the schemes is a practical matter about which you're not the boss of me."[4]

Second, suppose that we're asking the Oracle what the correct analysis of knowledge is, and it says the correct one is Robert Nozick's (1981) counterfactual truth-tracking analysis. We point out all the bizarre results this commits us to as outlined by Saul Kripke (2011), and the Oracle just shrugs, says that's what the word "know" means, and presents a bunch of linguistic usage data to back up its claim. Again, two responses:

  • "Fair enough; the word 'know' isn't as useful as we thought it was, and we can be forgiven for thinking Nozick was wrong, but it means what it means and we must accept that."
  • "Look, if that's what the word means, then the word isn't fit for purpose. We need a concept of knowledge we can use for talking about important issues about humans' access to information about the world, and the concept embodied by this linguistic usage data simply can't be used for that."

These two cases are quite similar. The difference is that in the tables case the philosopher says they weren't wrong about anything: the Oracle and the philosopher have just made different choices, and they are the authorities over their own choices. In the knowledge case the philosopher is shown to be wrong about who knows what, but they push back by saying this just means we need different concepts. The cases kind of shade into each other a bit, but I think the distinction is there, and that the knowledge and tables cases are on different sides of it. Van Inwagen would not be surprised to be shown that we talk as if there are tables. Kripke would be surprised to be shown that we talk as if Nozick's is the correct analysis of the concept expressed by the word "know". That's a difference.

Now, even if you take one of these Carnapian lines, the Oracle could still push back and say that actually you're wrong about whose concepts are better. It might not want to do that in the knowledge case; it might agree that our word "know" attaches to a concept that isn't very useful. But the point is that the Oracle knows everything there is to know, and so it might know something that would make you change your mind. The thought here is along the lines of what Carrie Jenkins (2014) argues for and calls Quinapianism: the decisions over which concepts to use to describe the world are up to us the way Carnap thinks, but our views about which concepts are best are revisable in the light of new information the way Quine thinks all our cognitive commitments are. But even if the Oracle's omniscience gives it an advantage over us, what we end up with here is still a philosophical discussion of the more familiar kind. The Oracle makes the case for its recommended set of concepts, but it's still up to us which concepts we end up using.

So What?

I've had a bit of fun thinking about this, but does it tell us anything about anything? I think it does. I'm inclined to take philosophical questions at face value, and to have the same commitments with respect to them inside and outside of the philosophy room. If I'm bullshitting, I'm not consciously or deliberately bullshitting. I've got a lot of philosophical commitments, albeit subject to a great deal of uncertainty, and I'm sincere about them. But I think my responses to this thought experiment vary a lot depending on which philosophical question we're talking about. Sometimes I think I'd take the bet. Sometimes considering being offered a bet makes me feel more uncertain. (I guess in these cases either I'm being called on my bullshit or the feeling of added uncertainty is itself unjustified. Perhaps it's rooted in risk aversion.) And sometimes I come over a bit Carnapian and get one or other kind of imaginative resistance. (I'm not sure I ever feel the way Beebee suggests I should feel, but I don't understand her position terribly well, and in any case I've only run the thought experiment on a few questions.)

This variation is interesting to me. When people are talking about the status of philosophical propositions and beliefs, they sometimes make it sound like they think we should go for the same response to everything, or perhaps one response for ethics and another for everything else. But I feel very torn between the different responses for a lot of questions, and I lean quite different ways for different questions even within the same branch of philosophy. So when Beebee, Carnap and the rest are putting forward views on the status of philosophical disputes, an answer that works well for one dispute may not work so well for another. One more thing to worry about, I guess.


[1] There is a connection here, in that van Fraassen is sceptical about the reliability of scientific methods to verify the parts of theories that go beyond their empirical adequacy, for example by positing unobservable entities. I'm not a good source for van Fraassen though: most of this is coming from Beebee's account of his position.

[2] The trolley problem is a thought experiment where someone driving a train (or trolley) is going to hit five workers on the tracks, and the only way to avoid killing them is to steer down a side track and kill another, different worker. The original puzzle is explaining why the driver ought to steer and kill one to save five, even though in some other situations it's better to do nothing and let five people die than to act to save them at the cost of killing someone else. Foot gave the example of framing someone to prevent a riot, and another common one is killing someone to use their organs, which I think is due to Judith Jarvis Thomson (1985). Since Thomson's paper there has been a large research programme involving variants on the trolley problem. Some people think it is silly and dismissively call it 'trolleyology'. My own view is that it's unfairly maligned, although I do quite like the word 'trolleyology'.

[3] Foot does acknowledge that the trolley case isn't especially realistic and that the worker might be able to get out of the way. But she also notes that the relevant aspects of the outcomes often really are more or less certain in the real-life medical situations she's using the trolley problem to illuminate.

[4] As I understand him, van Inwagen himself is probably enough of a realist about metaphysics that he should go for the first answer rather than the second. But the availability of the second is what I'm interested in here, and other philosophers do apply a Carnapian approach to questions about composition, including the question of whether there are tables. The main person for this approach is probably Eli Hirsch.


  • Beebee, Helen (2018). I - The Presidential Address: Philosophical Scepticism and the Aims of Philosophy. Proceedings of the Aristotelian Society 118 (1):1-24.
  • Foot, Philippa (1967). The problem of abortion and the doctrine of the double effect. Oxford Review 5:5-15.
  • van Inwagen, Peter (1990). Material Beings. Cornell University Press.
  • Jenkins, C. S. I. (2014). Serious Verbal Disputes: Ontology, Metaontology, and Analyticity. Journal of Philosophy 111 (9-10):454-469.
  • Kripke, Saul A. (2011). Nozick on Knowledge. In Philosophical Troubles. Collected Papers Vol I. Oxford University Press.
  • Nozick, Robert (1981). Philosophical Explanations. Harvard University Press.
  • Thomson, Judith Jarvis (1985). The Trolley Problem. The Yale Law Journal 94 (6):1395-1415.

Thursday, July 26, 2018



I've been following the Tour de France this year, and yesterday I wrote a poem about it. It's called 'Domestique'. I hope you like it. Here it is.

I am a humble domestique
I ride the Tour de France
My sponsor's name is on my shirt
And also on my pants

And though I will not win today
I must pretend to try
So when the cameras film it all
They're advertising Sky

It's even worse when riding up
An Alp or Pyrenee
Those are the days I'm someone it's
No fun at all to be

I'm not as strong as Froome, of course
But Froomey needs to chill
So he stays in my slipstream, while
I drag him up the hill

If Froomey's feeling peckish, he
Can have my protein gel
And if his bike breaks down, and he
Needs mine, that's his as well

If such a thing were possible
I'd give my very soul
Maintaining Froomey's comfort
Is my one and only goal

I feel I must explain myself
I feel it makes no sense
That Chris gets all the glory, and
It's all at my expense

To really get inside my head
You have to understand
For three short weeks of agony
They pay me ninety grand

Friday, June 15, 2018

Harsanyi vs The World

Utilitarians and Egalitarians

One of the things impartial consequentialists get to argue with each other over is how much equality matters. If John has one util (utils are units of utility or wellbeing) and Harriet has four, is that better or worse than if they both have two? Or is it just that the first setup is better than the second for Harriet and the second is better than the first for John, and that's all there is to it? I'm inclined towards this last view, but impartial consequentialists - utilitarians and the like - tend to want to say that setups can be better or worse overall, because they want to go on to say that the overall goodness of the setup an action brings about, or will probably bring about, or something like that, determines whether the action was right or wrong. You can't very well say that your action was right for John and wrong for Harriet, and that's all there is to it. Goodness and badness may seem to be plausibly relative to individuals, but it's less plausible to say that about rightness and wrongness.

So, let's suppose you've decided you want to be an impartial consequentialist. You know how good a setup is for each person, and you want to work out how good it is overall. What are your options?

  • You add up everyone's utility. That makes you an aggregating utilitarian.
  • You can take the mean of everyone's utility. That makes you an averaging utilitarian.
  • You can say that sometimes a setup with less total/average utility is better because it's more equal. That makes you an egalitarian.
  • You can say that sometimes a setup with less total/average utility is better because it's less equal. That makes you an elitist, or something - it's not a very popular view and we'll ignore it in what follows.
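Here's a minimal Python sketch of the first three options, with a toy egalitarian welfare function of my own devising (the penalty term is just one illustrative choice, not anything from the literature):

```python
# The three social welfare functions described above, applied to a fixed
# population. The egalitarian penalty is a made-up illustration.

def aggregate(utils):
    """Aggregating utilitarian: overall goodness is total utility."""
    return sum(utils)

def average(utils):
    """Averaging utilitarian: overall goodness is mean utility."""
    return sum(utils) / len(utils)

def egalitarian(utils, penalty=0.5):
    """One toy egalitarian proposal: total utility minus a penalty
    proportional to the spread of the distribution."""
    return sum(utils) - penalty * (max(utils) - min(utils))

# John has 1 util and Harriet has 4, versus both having 2:
unequal, equal = [1, 4], [2, 2]
print(aggregate(unequal), aggregate(equal))      # 5 4: the utilitarian prefers the unequal setup
print(egalitarian(unequal), egalitarian(equal))  # 3.5 4.0: this egalitarian prefers the equal one
```

Note that for a fixed population the aggregate and the average always rank setups the same way, which is why the post treats the two kinds of utilitarian together.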

For a fixed population, the average will be higher whenever the aggregate is higher and vice versa, so it doesn't matter which kind of utilitarian you are. The population is of course sometimes affected by our actions, but I won't have anything to say about that today. I'm thinking about the difference between being a utilitarian and being an egalitarian. Mostly I'd thought that both were pretty live options, and while there were considerations that might push you one way or the other, ultimately you would have to decide for yourself what kind of impartial consequentialist you were going to be, assuming you were going to be an impartial consequentialist at all. But a few days ago someone on Twitter (@genericfilter - thanks) drew my attention to a paper by John Harsanyi which threatens to settle the question for us, in favour of the utilitarian. That was disquieting for me, since my sympathies are on the other side. But Harsanyi's got a theorem. You can't argue with a theorem, can you? Well, of course you can argue with a theorem: you just argue with its premises. So I'm going to see how an egalitarian might go about arguing with Harsanyi's theorem.

Harsanyi's Theorem

I'm actually reliably informed that Harsanyi has two theorems that utilitarians use to support their case, but I'm only talking about one. It's his theorem that says, on a few assumptions that are plausible for people who think von Neumann-Morgenstern decision theory is the boss of them, that the only function of individual utilities that can give you the overall utility is a weighted sum (or average; we're taking the population to be fixed here). Since we're already impartial consequentialists, that leaves us with a straight sum (or average). So the egalitarians lose their debate with the utilitarians, unless they reject one or more of the assumptions.

Now, I think I understood the paper OK. Well enough to talk about it here, but not well enough to explain the details of the theorem to you. Hopefully this doesn't matter. Harsanyi's work may have been new to me but it isn't all that obscure, and you may already be familiar with it. In case you're not, I'll also try to tell you what's relevant to the argument when it comes up, but if you want a proper explanation of the details you'll really have to go elsewhere for them.

Harsanyi proves five theorems. The first three seem kind of like scene-setting, with the real action coming in theorems IV and V. Here they are (Harsanyi 1955: 313-4):

  • Theorem IV: W [the social welfare function] is a homogeneous function of the first order of U₁, U₂, ... Uₙ [where these are the utilities of each of the n individuals in the setup].
  • Theorem V: W is a weighted sum of the individual utilities, of the form W = ∑ aᵢUᵢ, where aᵢ stands for the value that W takes when Uᵢ = 1 and Uⱼ = 0 for all j ≠ i.

Some clarifications before we go on. A homogeneous function of the first order means that if you multiply all the Uᵢ by a constant k then that multiplies W by k as well. The utility functions Uᵢ and the social welfare function W aren't just functions on categorical states of the world; they're functions on probability distributions over such states. He shows in theorem III that you can treat W as a function of the individual utility functions, and you can also treat it as a function on probability distributions over distributions of utility.
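A quick numerical check (my own illustration, not Harsanyi's) makes the homogeneity property in Theorem IV concrete: for a weighted sum, scaling every individual utility by k scales W by k.

```python
# Checking that W = sum(a_i * U_i) is homogeneous of the first order:
# multiplying every utility by k multiplies W by k.

def W(utils, weights):
    """Weighted-sum social welfare function (Theorem V's form)."""
    return sum(a * u for a, u in zip(weights, utils))

weights = [1.0, 1.0, 1.0]   # impartiality: equal weights for everyone
utils = [1.0, 4.0, 2.0]
k = 3.0

scaled = [k * u for u in utils]
assert W(utils, weights) == 7.0
assert W(scaled, weights) == k * W(utils, weights)  # 21.0 == 3 * 7.0
```

Of course the point of the theorem runs the other way: Harsanyi derives the weighted-sum form from assumptions like homogeneity, rather than assuming it.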

Theorem V basically says that the utilitarians are right. Harsanyi doesn't seem to have assumed anything as radical as that, and yet that's what he's proved. It's as if he's pulled a rabbit out of a hat. So the question is: where does the rabbit go into the hat? As I say, theorems I-III seemed like scene-setting, so let's start by looking at theorem IV.

Diminishing Returns

You might think, as I did, that egalitarians should be pretty unhappy with theorem IV. To get an idea of why, think about diminishing returns. If you give someone with no money a million pounds, that will make a big difference. Give them another million, and it makes less of a difference. The third million makes less difference again. Now, the utilitarians aren't saying that John having two million and Harriet having nothing is just as good as both having one million. You build that into the utility function, by saying money has a diminishing return of utility. But here's the thing: the returns of giving John or Harriet money should diminish faster from the point of view of the egalitarian social planner than they do from the point of view of the person getting the money. That's because boosting an individual's utility becomes less important to the planner as the person gets better off.

You can get a feel for why by thinking about different people's agendas. John's agenda remains John's agenda, however much of it he achieves. He achieves the first item, and moves down the list to the second. That's just what it is for it to be John's agenda. But the planner's agenda is different. If John's agenda is totally unmet, the planner might prioritize the first thing on John's list. But if John is getting on well with his agenda, the planner can ignore him a bit and start prioritizing the agendas of people who aren't doing so well. John's fine now. And however you take into account the diminishing returns of John's agenda for John, they should diminish faster for the planner.

For me, this idea had a lot of instinctive pull, and I expect it would for other egalitarians. And this idea is the very thing theorem IV contradicts. Theorem IV says that boosting John's utility from 0 to 1 has just as much impact on W as boosting it from 10 to 11, when everyone else's utility is 0. You have to do a little more to generalize it to hold when other people's utilities aren't 0, which is what theorem V does, but this consequence of theorem IV is bad enough. The rabbit is already in the hat at this point. Or so it seems.

It turns out that denying theorem IV, or at least accepting the egalitarian principle that conflicts with it, gives a result which is even worse, or at least to me it seems even worse. It's well established that diminishing returns can manifest themselves as risk aversion. (Though it's possible not all risk aversion can be explained by diminishing returns.) A second million is worth less to you than a first, and this explains why you wouldn't risk the first for a 50% chance of winning a second. Maybe you're already risk-averse enough that you wouldn't do that anyway, but the point is that diminishing returns on a thing X generate risk-averse behaviour with respect to X. So if Harriet's utility has diminishing returns for the planner, this means that the planner will be more risk-averse with Harriet's utility than she is, other things being equal. That result is very odd. What it means is that in two situations where the only difference is whether Harriet takes a certain gamble or not, one will be better for Harriet but the other will be better overall, even though they're both equally good for everyone else. This is a weird result indeed, and Harsanyi makes sure his assumptions rule it out. It seems that it should be possible to be an egalitarian without accepting this weird result.
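The link between diminishing returns and risk aversion can be seen with a few lines of arithmetic (my illustration, not the paper's): with a concave utility function, the expected utility of a fair 50-50 gamble is always less than the utility of its expected value, so the agent declines fair gambles.

```python
import math

# Diminishing returns generate risk aversion: sqrt is concave, so each
# extra pound adds less utility than the one before.

def u(wealth):
    return math.sqrt(wealth)

# A 50-50 gamble between keeping your first million and ending up with two
# million, versus a guaranteed 1.5 million (the gamble's expected value):
expected_utility_of_gamble = 0.5 * u(1_000_000) + 0.5 * u(2_000_000)
utility_of_expectation = u(1_500_000)

print(expected_utility_of_gamble < utility_of_expectation)  # True
```

The worry in the paragraph above is then just this phenomenon one level up: if Harriet's utility has diminishing returns for the planner, the planner is risk-averse about Harriet's utility in a way Harriet herself isn't.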

The Repugnant Conclusion

So if we're granting theorem IV, we could look for the rabbit going in somewhere in the proof of theorem V. I couldn't really see anything much to work with there, though, so I thought I'd try a different tack. To get an idea of the new strategy, think about Derek Parfit's (1984) repugnant conclusion argument against aggregate utilitarianism (and against lots of other positions). The idea of that argument is to show that aggregate utilitarians are committed to saying that for any setup with lots of people all of whom have great lives, there is a better setup in which everyone's life is only just worth living. (The second setup manages to be better because there are so many more people in it.)

Now, you could try setting up the objection by just taking some postulates that aggregate utilitarians are committed to and then formally proving the result. That's more or less how Harsanyi proceeds with his argument. But what Parfit does has a different flavour. He takes his opponent on a kind of forced march through a series of setups, and the opponent has to agree that each is no worse than the last, and this gets you from the nice world to the repugnant world. Here's a version of how it can go. Start with a world A with lots of happy people. Change it to world B by adding in some people whose lives aren't good but are still worth living. That can't make the world worse, because it's just as good for the initial people and the new people's lives are still better than nothing. Then take a third world C which has the same total utility as B but it's equally distributed. That shouldn't make the world worse either. C is like A but with more people who are less happy. Then you repeat the process until you get the repugnant world R. (If your opponent says R is just as good as A, you construct R' by making everyone a tiny bit better off. R' is still repugnant, and if it's not repugnant enough for you then just continue the process again from R' until you find a world that is.)

What I'm thinking is that if Harsanyi's proofs are valid, which they are, then you should be able to get a similar forced-march argument that embodies the proof, but where you move from a world where lots of people are happy to a repugnant world where one person has all the utility and everyone else has none. This argument should be more amenable to philosophical analysis, and once we've worked out what's wrong with it, we should be able to return to Harsanyi's proof and say where the rabbit goes into the hat. I'm not Parfit, and I haven't come up with anything as good as the repugnant conclusion argument. But I have come up with something.

Harsanyi Simplified

What we want to show, without loss of generality, is why one util for John and four for Harriet is better than two utils for each. One and four are placeholders here; what's important is that they'd each go for a 50-50 shot between one and four, rather than a guaranteed two. Here's the forced march:

  • Independent 50-50 shots between one and four for each is better than a guaranteed two for each.
  • A 50-50 shot between one for John and four for Harriet or four for John and one for Harriet is just as good as independent 50-50 shots between one and four for each.
  • One for John and four for Harriet is as good as four for John and one for Harriet.
  • Since these two outcomes are equally good, and a 50-50 shot between them is better than two for each, both are better than two for each.
  • So one for John and four for Harriet is better than two for each.
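The forced march above can be checked numerically (a sketch of my own, tracking the steps rather than Harsanyi's actual proof):

```python
# Step 1's premise: each person weakly prefers a 50-50 shot between 1 and 4
# utils to a guaranteed 2, since the expected value 2.5 beats 2.
lottery = [1, 4]
expected = sum(lottery) / len(lottery)
assert expected > 2  # 2.5 > 2

# Step 2: anti-correlating the two lotteries leaves each person's personal
# prospects unchanged. The combined lottery is a 50-50 shot between:
outcome_a = {"John": 1, "Harriet": 4}
outcome_b = {"John": 4, "Harriet": 1}
for person in ("John", "Harriet"):
    assert 0.5 * outcome_a[person] + 0.5 * outcome_b[person] == expected

# Step 3: by impartiality, outcome_a and outcome_b are equally good overall
# (each is a permutation of the other). Step 4 is the contested dominance
# move: since the 50-50 shot between two equally good outcomes beats
# two-for-each, each outcome must itself beat two-for-each - even though
# outcome_a is worse than two-for-each for John, and outcome_b for Harriet.
```

As the next paragraph says, the egalitarian's only real target here is the dominance step, not the arithmetic.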

This doesn't track his theorem exactly, I don't think, but as I understand the important moves, if the egalitarian can explain what's wrong with this argument, then they can explain what's wrong with Harsanyi's. And if they can't, they can't. A couple of things are worth pointing out, before we think about what might be wrong with it.

First, in a way it's more general than Harsanyi's. You don't have to assume that people should be expected utility maximizers. The point of the argument is that whatever risk profiles self-interested people should be most willing to accept for themselves, the corresponding distributions are the best ones for everyone overall.

Second, and very relatedly, it more or less shows that the best setup (at least for a fixed population) is the one that someone self-interested would choose behind the veil of ignorance to be in. Harsanyi was a fan of that idea, I'm told, but that idea drops out of the reasoning behind his theorem. You don't need that idea to prove the theorem.

So, where should the egalitarian resist the argument? To me, given what we saw earlier in the section on diminishing returns, it seems they would have to resist the idea that if a 50-50 shot at outcomes X or Y is better than outcome Z, then at least one of X and Y must be better than Z. (And if X and Y are equally good, both must be better than Z.) This idea is very intuitive, at least considered in the abstract, and if it's wrong then it seems a lot of decision theory won't really be applicable to impartial moral decision-making. It's not just expectation-maximizing that will have to go either, because the move in question is pretty much just dominance reasoning. (If Z is better than X and better than Y, then it's better than a 50-50 shot between X and Y.) When you give up dominance reasoning, I'm not sure how much of decision theory is left.

Longtime readers may remember that I once said I preferred versions of consequentialism that assessed actions by their actual consequences rather than by their expected consequences. Harsanyi does set things up in a way that's much more friendly to expected-consequences versions, but I don't hold out much hope for resisting Harsanyi's argument along these lines. One problem I had with expected consequences was that it's hard to find a specific probability distribution to use which doesn't undermine the motivation for using expected consequences in the first place. (People will be unsure of the probabilities and of their own credences, just as they're unsure of the consequences of their actions.) But Harsanyi doesn't exploit that at all. Another problem I had, and the one I talked more about in the post, was that expectation consequentialism gives a norm about the proper attitude towards risk, and I don't see that there is a plausible source for such a norm. It's just up to us to take the risks and put up with the consequences. Harsanyi does exploit expectation-maximizing to get the strong utilitarian result, but you just need simple dominance reasoning to get the weaker result that inequalities are justified in a setup if people would choose that setup behind a veil of ignorance (with an equal chance of occupying each position).

All of this, even the weaker result, seems to me to be huge if true. I've long been a fellow traveller of impartial consequentialism, and the main reason I keep using the term 'impartial consequentialism' rather than 'utilitarianism' is that I've got egalitarian sympathies. Readers of my recent posts on Hipparchia will have already noticed me losing my enthusiasm for the project. This Harsanyi stuff may force me to give up on it altogether.


  • Arrhenius, G., Ryberg, J. and Tännsjö, T. 2017: 'The Repugnant Conclusion', The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.)
  • Harsanyi, J. 1955: 'Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility', Journal of Political Economy 63(4): 309-321
  • Parfit, D. 1984: Reasons and Persons (Oxford University Press)

Friday, May 25, 2018

What We Demand Of Each Other

In the last post I was thinking about Hipparchia's paradox. Hipparchia was a cynic philosopher who lived in Athens in the 4th and 3rd centuries BCE, and she posed the puzzle of why it wasn't OK for her to hit a guy called Theodorus, even though it would have been OK for him to hit himself and morality is supposed to be universalizable. I did try discussing the puzzle a bit, but what I mostly wanted was for moral philosophers to take the puzzle more seriously, to work out how their moral theories can accommodate it, and to start calling it 'Hipparchia's paradox'. I'm not really the kind of person I was hoping would think about it more, but I've been thinking about it a bit more anyway.

I mentioned that we can try resolving the paradox by saying that people were allowed to waive consideration of negative consequences to themselves in the moral evaluation of their own actions. And I worried that if these moral waivers are a thing, then there might be other kinds of moral waivers, and our final theory might end up looking unrecognizable as consequentialism. Some people will be fine with that, of course, but I've long been a bit of a fellow-traveller of (impartial, agent-neutral) consequentialism, and consequentialists (especially impartial agent-neutral ones) are probably the people Hipparchia's paradox is most of a puzzle for.

So, I've been thinking some more about these waivers, and what I'm thinking is that the reason they feel kind of scary is that they're an example of voluntarism[1]. Voluntarism is the idea that what's right and wrong is fixed in some special way by someone's will. What counts as the will and what counts as the relevantly special way is a bit up for grabs, and some of the disputes over voluntarism will be verbal. But it's not all verbal, and a certain kind of moral philosopher should be scared of voluntarism. A classic version of voluntarism is divine command theory, which is sometimes called theological voluntarism. Divine command theorists say this sort of thing:

  • When something is wrong, it's because it goes against God's will.
  • When something is wrong, it's because God has decreed that it's wrong.

Divine command theory isn't all that popular among moral philosophers nowadays, although it had a pretty good run with them in the middle ages, and it's still alive and well in the moral thinking of some religious people. Its detractors often view it as getting things backwards. Things aren't wrong because they go against God's will; God wants us not to do them because they're wrong. Similarly, when God says something's wrong, that's because it is wrong, not the other way round. This problem is called the Euthyphro problem, after the dialogue Plato wrote about it. The Euthyphro problem isn't just about divine command theory though; it applies in some way to all versions of voluntarism. People don't make things wrong by wanting them not to be done; they want them not to be done because they're wrong, or at least they should. The worry is that anyone adopting a version of voluntarism is taking the wrong side in the Euthyphro problem. That sounds like bad news for the waiver response to Hipparchia's paradox.

Nonetheless, I think it might be worth giving voluntarism another look, at least in the form of these waivers. There are two reasons. First, Hipparchia's paradox does provide a direct argument for waivers. Second, there's a big difference between God being the boss of us and us being the boss of us, or even better, the people our actions have an impact on being the boss of us. Arguments against divine voluntarism may well not carry over to this more worldly form of voluntarism. So now here's the next question: if waivers are a thing, which waivers are a thing? In the previous post I made a list of questions about possible waivers, and I'll repeat that list here, with a bit of commentary explaining why I thought they were worth asking.

  • Can I waive consideration of consequences to myself in the moral evaluation of someone else's actions?

The idea there is that if I volunteer to take one for the team, then it isn't wrong for the team to go along with that. Suppose you want to go to a party which will be pretty good, and I want to go to a different party which will be very good, but one of us has to stay home and wait for the plumber to come and fix the toilet. (We'll assume we'd both enjoy staying home equally.) What I'm suggesting is that if I volunteer to stay home, you don't wrong me in going along with this, even though I would probably enjoy my party more than you would enjoy yours. Now, you might disagree with this assessment of the situation. But the point is quite similar to Hipparchia's paradox: just as Theodorus is allowed to hit himself, people are allowed to sacrifice their own interests for others, even if the sacrifice is greater than the benefit. And even if they can't make the sacrifice without the co-operation of the beneficiaries, the beneficiaries don't do anything wrong in co-operating. If the person making the sacrifice says it's OK, then it's OK. (I'm not saying this is right, but this is the thinking behind the question.)

  • Can I waive consideration of some but not all negative consequences to myself?

I'm not sure I expressed this one as clearly as I could have, but here's what I'm thinking. Maybe it's OK for me to waive consideration of minor things, but not major things. Or maybe it's OK for me to waive consideration of forgone pleasures, but not of positive harms. I won't go into the details of what sorts of things I might not be morally allowed to do to myself, or other people might be wrong to do to me even with my permission, but there's a reasonably venerable tradition of thinking that there is such a distinction to be made. But if there is, you have to wonder what its basis might be.

  • Can I waive consideration of bad things happening to me even if someone else cares about me and so these would also be negative consequences to them?

Nobody is an island, and often if something bad happens to Theodorus, he's not the only person who suffers. If Theodorus hits himself, this might upset his friends, and maybe it's wrong because of that. I think there's a fair bit of pressure from common-sense morality to say that Theodorus hitting himself is nobody's business but his own, and if it bothers his friends then he's entitled to waive that fact from the moral evaluation of his action. There are probably limits to what common-sense morality permits along these lines, and maybe I'm getting common-sense morality wrong. But even if I'm not getting it wrong, I'm not really sure how this dynamic is supposed to work. One possibility is that waiving the harm Theodorus does you by hitting himself is partly constitutive of the very relationship in virtue of which Theodorus hitting himself harms you. While I do think this idea has some superficial appeal, I fear its appeal may be only superficial. But perhaps there's the germ of something workable in there.

  • Can I do this on an action-by-action basis, or at least a person-by-person basis, or do I have to waive it for all people or all actions if I waive it for one?

This is an issue about universalizability and fairness. How arbitrary am I allowed to be in dishing out permissions? One possibility is that we have a lot of latitude about what permissions we can give, but a lot less latitude about what permissions we should give. But I expect we also probably have a fair bit of latitude with the ones we should give, because these permissions are bound up with personal relationships, and we don't have personal relationships with everyone. In particular, waivers in personal relationships might often be part of a mutually beneficial reciprocal arrangement. Being morally in the wrong is bad for you, and personal relationships are difficult, and provided you're both trying hard it might be better not to be morally in the wrong every time you mess up. These waivers probably shouldn't have to be blanket waivers: a certain amount of mutual pre-emptive forgiveness doesn't make it impossible for you to wrong each other.

  • Are there ever situations where someone can waive consideration of a negative consequence to someone other than themselves?

Part of the issue here is the nobody-is-an-island problem I discussed a couple of questions ago. But the issue also arises in the case of children, and other people who have someone else responsible for their welfare to an extent. It may also arise with God. I think it's quite possible that there just aren't any exceptions of this kind. You're allowed to take one for the team, but you're not allowed to have your children take one for the team. But here's an example I've been thinking about a bit. Suppose that you and I are doing a high-stakes pub quiz together, and we win a family trip to Disneyland. A reasonably fair thing to do would be for us to auction half of the trip between us and have the higher bidder pay the lower bidder for the lower bidder's half. But suppose I just tell you to go ahead and enjoy yourself. My family are losing out here as well as me, but somehow it still feels like I've done something nice, rather than robbing my family of the equivalent of half a trip to Disneyland. I think I'd probably end up coming down on the side of saying I'm wrong to give you my half of the trip, although perhaps the matter is complicated by the fact that my family weren't on the team, so it's my prize not theirs. But letting you have the trip does still put my family out. I'm really not sure what I think about this. But I think it's likely people do make this kind of collective sacrifice from time to time, and that they feel like they've done a good thing and not a bad thing.

I think that a moral theory that incorporated these kinds of waivers in a big way might have some mileage in it. There are plenty of worries about it, of course. I'll talk about two.

First, how freely does someone have to be giving these permissions? People make decisions with imperfect information and imperfect rationality, and they also make them under conditions of oppression. It's a common criticism of libertarian capitalism that letting people make whatever contracts they want will lead to a lot of inequality of outcome resulting from unequal bargaining positions. Most countries don't want the economically disempowered bargaining away their kidneys, and maybe we don't want people bargaining away the fact that harming them is wrong. I think some libertarian rhetoric makes it sound as if they think that contracts actually do have this wrongness-nullifying effect, but it's possible they don't say this, and if they do then I'm really not optimistic about them being right. You might be able to imagine idealized situations where the waivers look plausible, but the reality of it might look pretty hideous in some cases. And when you're doing ethics, hideousness detracts from plausibility.

My second worry is that constructing a theory of moral waivers might be joining what I think of as the excuses industry. Impartial consequentialism is notoriously demanding, especially in our interconnected world. But I don't think that should be surprising really: we don't expect it to be easy doing the right thing all the time. A long time ago I wrote about how the supposedly counterintuitive results of impartial consequentialism seemed to me to appeal to either selfishness, squeamishness, or a bit of both. I still feel the pull of that line of thought, and although I'm not really an impartial consequentialist myself, I am as I say a fellow traveller. Some people try to construct theories that don't have these demanding results, but I don't really want to be in the business of constructing theories that are basically elaborate excuses that allow us to live high while other people die. I hope that's not all I'd be doing, and I don't think it's all that other opponents of impartial consequentialism are doing, but I do think it's a trap you have to be careful not to fall into.

With those worries out in the open, I'll sketch the basic outline of the theory I've got in mind. You start with a background of some kind of impartial consequentialism, and then overlay the waivers. Morality might legitimate us making some very heavy demands on each other, but we don't have to actually make these demands. I guess the way it works is these waivers will create a category of supererogatory actions - actions which are good but not obligatory - which impartial consequentialism sometimes struggles to accommodate. If someone's waived a harm it's still better not to cause the harm, but it's not obligatory. I'm imagining the theory as being most distinctive in its treatment of morality within personal relationships. I mentioned earlier that some reciprocal waiving might be a common or even constitutive feature of some relationships. Perhaps it could be extended to involve relationships between people who don't know each other as well or at all, but who are members of the same community. If I was going to think seriously about that then I'd need to learn more about communitarian ethical theories. I'm really not very familiar with how they work, but from what I've heard they sound pretty relevant.

The post a few days ago closed with this argument against consequentialism:

  • Hipparchia's paradox shows that fully agent-neutral consequentialism is absurd.
  • The only promising arguments for consequentialism are arguments for fully agent-neutral consequentialism.
  • So there are no good arguments for consequentialism.

It's not great, really, and I said so at the time. But let's think about how all this waivers stuff started with Hipparchia's paradox. You could just look at the paradox and say "waivers are in, impartial consequentialism is out", and then merrily start constructing theories with waivers all over the place. I think that would be a mistake. An alternative, which I don't think would be the same kind of mistake, is to look at other candidates for waivers that are somehow similar to the original case. The best case I've got in mind is when a group of people see themselves as somehow on the same side, and so individual team-members' failures aren't moral failures, even though they could have done better and the other members of the team would have been better off. The team has a sufficient unity of purpose that the members view the team as analogous to an individual. Team members don't press moral charges against team members just as Theodorus doesn't press moral charges against himself.

One last thing about waivers is that you might share a lot of the intuitions about the examples but try to incorporate them within a straight impartial consequentialist theory. Maybe people being able to take advantage of the waivers without feeling guilty about it turns out to have the best consequences overall. There's a long tradition of moral philosophers doing this sort of thing. The main trick is to distinguish what it is for an action to be right or wrong from the information we use to decide whether an action is right or wrong. In John Stuart Mill's Utilitarianism he makes this move several times, and in fellow utilitarian R. M. Hare's book Moral Thinking: Its Levels, Method, and Point those were the sorts of levels he was talking about. (At least if I remember them right.)

I used to be quite impressed with this move. Now I'm not so sure. The reason I've long been a fellow-traveller of impartial consequentialism without properly signing up is that I'm also a bit of an error theorist. I read JL Mackie's Ethics at an impressionable age, and while I'm a lot more sceptical about philosophical conclusions than I used to be, it's still got some pull for me. But maybe the reason it's got all this pull is because I'm thinking about moral facts in terms of what Henry Sidgwick (I think) called 'the point of view of the universe'. On that topic I read something once about the conflict between deontological and consequentialist ethics that stayed with me. The point was that deontologists shouldn't argue that sometimes there are actions we should do even though things will be worse if we do them. To concede that is to concede too much to the consequentialist: it makes things too easy for them if that's the position they have to attack. The consequentialist needs to earn the claim that there's some meaningful way in which consequences, or the whole world, can be good or bad simpliciter rather than just good or bad from the point of view of a person, or a particular system of rules, or perhaps something else. It's true: the consequentialist needs this claim and the deontologist doesn't. It's non-trivial and the deontologist should make the consequentialist earn it. And I don't think they have earned it. I can't remember where I read this, unfortunately. I'd thought it was in a Philippa Foot paper, but I re-read the two papers I thought it might be in (Foot 1972 and 1995), and while re-reading them was rewarding I couldn't find it in either. I still think it's probably her. If you can tell me where it's from, please do so in the comments. [UPDATE 24/6/18: She makes the point in Foot 1985: 'Utilitarianism and the virtues'.]

Anyway, maybe error theory wouldn't have the same pull for me if I got away from the idea of the point of view of the universe and instead thought about morality as being fundamentally about human relationships, collective decision-making and what have you. The levels move takes the manifest image of morality and explains it in terms of something more systematic at a lower level. But this systematic lower level is where the point of view of the universe is, and that's what threatens to turn me into an error theorist. The manifest image is where the human relationships and collective decision-making are, and maybe those aren't so weird. The dilemma arises because the lower level stuff about how good the universe is can seem more plausibly normative, while the higher level stuff about relationships is more plausibly real.

Of course, as things stand with the theory I've got in mind there's still an impartial consequentialist background with the waivers laid on top. The impartial consequentialist background is as weird as ever, and you can't have a moral theory that's all waivers. But maybe this could be a transitional step on the way to me having a more accurate conception of what moral facts are facts about, and perhaps eventually losing interest in moral error theory altogether. That might be nice.


[1] I'm a little unsure about the terminology here. It's pretty established to call divine command theory 'theological voluntarism', and I'm fairly sure I've seen 'voluntarism' used more generally to include non-theological versions like the waiver theory I'm talking about here. But 'voluntarism' also seems to be used to refer to theories according to which the moral properties of an action depend on the will with which the action was performed. (This idea is important in Kant's ethics.) The two ideas could overlap, but it's not obvious that they have to. So if you've got strong views about what 'voluntarism' means and think I'm using it wrong, then I apologize. And when you're discussing this blogpost with your friends, you should be careful how you use the word. But I don't know another word for the thing I'm talking about, and I think I've heard people calling it 'voluntarism', so that's the call I've made.


  • Foot, P. 1972: 'Morality as a system of hypothetical imperatives', Philosophical Review 81 (3):305-316
  • Foot, P. 1985: 'Utilitarianism and the virtues', Mind 94 (374):196-209
  • Foot, P. 1995: 'Does moral subjectivism rest on a mistake?', Oxford Journal of Legal Studies 15 (1):1-14
  • Hare, R. M. 1981: Moral Thinking: Its Levels, Method, and Point (Oxford University Press)
  • Mackie, J. L. 1977: Ethics: Inventing Right and Wrong (Penguin Books)
  • Mill, J. S. 1863/2004: Utilitarianism, Project Gutenberg ebook #11224,
  • Plato c.399-5 BCE/2008: Euthyphro, trans. Benjamin Jowett, Project Gutenberg ebook #1642,

Monday, May 21, 2018

Hipparchia's Paradox

The most famous cynic philosopher was Diogenes of Sinope, who lived in an old wine jar and told Alexander the Great to get out of his light. But he wasn’t the only cynic; there was a whole bunch of them. The second or third most famous cynic was Hipparchia. (The third or second was Crates, Hipparchia’s husband.) Hipparchia doesn’t seem to have written much if anything, as tended to be the way with the cynics, but history has recorded at least one of her arguments, via an anecdote about an exchange she had with some jackass called Theodorus at a party one time. Here’s how Diogenes Laertius (not to be confused with Diogenes the jar-dweller) tells it:
Theodorus, the notorious atheist, was also present [at Lysimachus’s party], and she posed the following sophism to him. ‘Anything Theodorus is allowed, Hipparchia should be allowed to do also. Now if Theodorus hits himself he commits no crime. Neither does Hipparchia do wrong, then, in hitting Theodorus.’ At a loss to refute the argument, Theodorus tried separating her from the source of her brashness, the Cynic double cloak. Hipparchia, however, showed no signs of a woman’s alarm or timidity. Later he quoted at her lines from The Bacchae of Euripides: ‘Is this she who abandoned the web and women’s work?’ ‘Yes,’ Hipparchia promptly came back, ‘it is I’. But don’t suppose for a moment that I regret the time I spend improving my mind instead of squatting by a loom.’ [Lives of the Ancient Philosophers 6: 96-8; pp45-6 in Dobbin]
I’ve quoted the context as well as just the argument, the alternative being to quote it out of context. I think it’s pretty clear that Hipparchia is the winner of this story, although it’s possible the reality of the situation was pretty unpleasant for everyone concerned. But having acknowledged the context, I’d like to think a bit about the argument in isolation. Here’s the argument laid out neatly:
  • Anything Theodorus is allowed, Hipparchia should be allowed to do also.
  • If Theodorus hits himself he commits no crime.
  • So neither does Hipparchia do wrong in hitting Theodorus.
The first premise is about universalizability: morality is supposed to apply equally to everyone. It’s a bit less clear what the theoretical basis of the second premise is. It seems like a part of most people’s common sense morality that if someone wants to hit themselves then that’s their own business, and while it might be inadvisable, it isn’t immoral. Common sense morality changes from place to place, but I guess this is part of it that my society has in common with Hipparchia’s. You could explain the truth of the second premise in various ways, some of which will mean qualifying or restricting it, and I think that how exactly we explain it will affect how the paradox gets resolved. The conclusion is meant to be absurd, showing that something is wrong with either the premises or the inference.
I think the most obvious way to try to resolve the paradox is to interpret the permission in the second premise as being explained by a general permission for people to hit themselves, rather than a general permission to hit Theodorus. The action that Theodorus is allowed to do is hitting oneself, not hitting Theodorus. Hipparchia is allowed to do the action hitting oneself too, so universalizability is saved.
There’s a problem with this, though: Theodorus is also allowed to do hitting Theodorus. He’d better be, because if an action is immoral under some description, then it’s immoral. This means there is something he’s allowed to do and Hipparchia isn’t, and so universalizability isn’t saved. Universalizability isn’t the idea that some of morality applies equally to everyone; it’s the idea that all of morality applies equally to everyone. Now, I don’t mean to be disingenuous. I’m not saying that Hipparchia’s paradox shows that universalizability is bunk; I’m just saying there’s more work to do. I don’t think there can be much doubt that it somehow matters that the description of the action as hitting oneself applies to Theodorus’s action and not Hipparchia’s. It just doesn’t resolve the paradox completely, and it’s perhaps more of a restatement of the paradox than anything else. Sometimes a restatement of a paradox is more or less all you need, but in this case I don’t think the restatement is enough.
Here’s another line of attack. Maybe on any given occasion it really is only OK for Theodorus to hit Theodorus if it’s OK for Hipparchia to hit Theodorus. The difference is that occasions when he hits himself will be those rare occasions when he wants to be hit, whereas occasions when she hits him are likely to be occasions when he doesn’t want to be hit. (And also he won’t hit himself harder than he wants to be hit.) This kind of reasoning is behind some anti-paternalist thinking in political philosophy. The classic anti-paternalist work is On Liberty, which was published under John Stuart Mill’s name but was probably coauthored with Harriet Taylor Mill, if you take its dedication literally. (It’s possible the Mills were the greatest philosophical power couple since Hipparchia and Crates. I can’t think of a greater one in the roughly 2150 years between them, although perhaps you can, and perhaps there’s an obvious one I’m missing. [UPDATE: A friend pointed out I forgot Abelard and Heloise.]) They argued that the state shouldn’t be interfering with you if you’re not doing anyone else any harm. Here they are:
The object of this Essay is to assert one very simple principle, as entitled to govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be physical force in the form of legal penalties, or the moral coercion of public opinion. That principle is, that the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. [On Liberty: p17]
People disagree over how far you can reconcile this with the consequentialism you find in Utilitarianism, but if you’re trying to reconcile them it usually goes roughly as follows. People will do things that have good consequences for themselves, so if their actions don’t have bad consequences for anyone else then they don’t have bad consequences for anyone. Given consequentialism, that means the actions aren’t bad. That means the state shouldn’t be interfering with them. It’s a bit of a Swiss cheese of an argument, and I think it remains so even if you’re properly doing justice to it, but I also think they were on to something important.
A classic example of paternalism is seatbelt laws. Idealizing a bit, the set-up is this: by not wearing a seatbelt you’re not putting anyone at risk but yourself. But by having laws demanding people wear seatbelts, you can save lives. Let’s consider a couple of things a libertarian might have to say about this:
  • “If I value my life so much and my convenience so little that the small chance that wearing a seatbelt will save my life is worth the inconvenience of wearing one, then I will wear a seatbelt.”
  • “The only person who stands to get hurt here is me, and I’m fine with it. Mind your own business.”
The first is a simple consequentialist argument: we don’t have to worry about people not wearing seatbelts in situations where the expected consequences are negative. (It also takes the relative value of someone’s life and convenience to be the relative value they themselves assign to them, but maybe that’s not so silly at least in the case of most adults.) The second libertarian response is harder to categorize. It can still be made out as consequentialist in a way, but it says that people are allowed to waive consideration of negative consequences to themselves. The first objection, where it applies, flows straightforwardly from a simple consequentialism that says the right thing to do is the thing with the best consequences. The second applies more generally, but it says that sometimes it’s OK to do the thing that doesn’t have the best consequences. If we’re allowing people to waive consideration of consequences to themselves in the moral evaluation of their own actions, this raises questions about what other kinds of waivers are allowed:
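The first response's reasoning can be put as a back-of-the-envelope expected-value calculation (the numbers and units here are invented purely for illustration):

```python
# Back-of-the-envelope version of the first libertarian response
# (invented numbers; utilities are in the agent's own units).

def expected_value_of_belting_up(p_crash, value_of_life, inconvenience):
    """Expected benefit of wearing a seatbelt minus its cost, using
    the agent's own valuations of their life and their convenience."""
    return p_crash * value_of_life - inconvenience

# Someone who values their life at a million units and the hassle at
# one unit buckles up even for a tiny crash probability:
print(expected_value_of_belting_up(1e-5, 1_000_000, 1) > 0)  # True
```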
  • Can I waive consideration of consequences to myself in the moral evaluation of someone else’s actions?
  • Can I do this on an action-by-action basis, or at least a person-by-person basis, or do I have to waive it for all people or all actions if I waive it for one?
  • Can I waive consideration of some but not all negative consequences to myself?
  • Can I waive consideration of bad things happening to me even if someone else cares about me and so these would also be negative consequences to them?
  • Are there ever situations where someone can waive consideration of a negative consequence to someone other than themselves?
None of these seem to me like they have obvious answers, with the possible exception of the last one, even if we grant that people can waive consideration of harm to themselves in the moral evaluation of their own actions. I expect some readers will think some of the answers are fairly obvious (and that the last one is obviously obvious), or will at least have views on some of the questions, perhaps based on the literatures which presumably exist on each of them. To be clear, I’m not saying that a consequentialism with a self-sacrifice caveat can’t be made coherent. You could say that an action is permissible iff it either maximizes expected utility or has an expected utility for other people at least as high as the expected utility for other people of some permissible action. That seems to get the right results. The problem I have is that if waivers are a thing, then there are other waivers we might want to include in our theory as well, and after a while our theory might end up not looking much like consequentialism at all.
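The permissibility criterion just sketched can be written down as a quick formal sketch (this is my own formalization of it, with invented utility numbers, so take it as one reading of the caveat rather than the official version):

```python
# Sketch of the self-sacrifice caveat (my own formalization; the
# utility numbers are invented). An action is permissible iff it
# maximizes expected utility, or its expected utility *for other
# people* is at least as high as that of some action which
# maximizes expected utility.

def permissible(name, actions):
    """actions maps an action name to (total EU, EU for others)."""
    best_total = max(total for total, _ in actions.values())
    total, others = actions[name]
    if total == best_total:
        return True  # clause 1: maximizes expected utility
    # clause 2: matches the EU-for-others of some maximizing action
    return any(others >= o for t, o in actions.values() if t == best_total)

# Theodorus's options: hitting himself is worse for him but no worse
# for anyone else, so the caveat lets it through.
acts = {
    "refrain":     (10, 5),  # best overall
    "hit_himself": (7, 5),   # worse for him, same for others
    "hit_others":  (6, 1),   # worse for others too
}
print(permissible("hit_himself", acts))  # True
print(permissible("hit_others", acts))   # False
```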
One way to avoid these questions is to deny that people can waive consideration of themselves in the first place. But then Hipparchia’s paradox comes back, at least a little. The problem with this simple consequentialist response to the paradox is that people don’t always do what’s best for them. Unless we supplement the response somehow, it will mean that whenever Theodorus hits himself and it isn’t what’s best for him, he is doing something wrong after all. (At least when he had enough information to work out that it probably wouldn’t be best for him.) Is this what we want to say?
I can sort of see how some people might want to bite this bullet. If you’re an agent-neutral consequentialist, then you think that the only information relevant to whether an action is wrong or not is how good its consequences are. Who did the action isn’t relevant. So this kind of consequentialist should say that Theodorus hitting himself really is immoral whenever it’s inadvisable. If someone gets on their high horse with you about how you’re not doing what’s best for yourself, they actually do have the moral high ground. Perhaps this is right. But it’s weird.
I don’t really feel like I’ve got very far with this. But my main aim was to present the argument as something worth thinking about, because I do think it’s worth thinking about. I’ll close by presenting another argument, which is also a Swiss cheese of an argument, but which I’m also worried might be on to something.
  • Hipparchia’s paradox shows that fully agent-neutral consequentialism is absurd.
  • The only promising arguments for consequentialism are arguments for fully agent-neutral consequentialism.
  • So there are no good arguments for consequentialism.

  • Dobbin, R. 2012: Anecdotes of the Cynics, selected and translated by Robert Dobbin. Penguin Random House.
  • Mill, J. S. 1859/2011: On Liberty, Project Gutenberg ebook #34901,
  • Mill, J. S. 1863/2004: Utilitarianism, Project Gutenberg ebook #11224,

Sunday, May 6, 2018

Comparing Size Without (Much) Set Theory

At the end of my last post, I said that I’d like to know whether it’s possible to make sense of there being more Xs than Ys when there are uncountably many of each, without using set theory. I’m not a proper mathematician, as I expect will become painfully apparent to any proper mathematicians reading this, but I’ve tried to hack something together that might sort of work. It uses plural quantification, which George Boolos (1984) has argued isn’t set theory in disguise. It does use some actual set theory too. But hopefully it’s a start.

Georg Cantor, and apparently David Hume before him, came up with a rule for comparing the sizes of infinite collections. If the Xs and the Ys can be paired off one-one, then there are the same number of each. If the Xs can be paired off one-one with some of the Ys, there are at least as many Ys as Xs. In set theory, you can use this idea to make a nice precise open formula expressing that a set x is at least as big as a set y, in terms of there being another set z that represents this one-one pairing. The usual way is to make it a set of ordered pairs with one member from each of x and y, having previously said what it is for a set to count as an ordered pair.
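For finite collections, the Cantor–Hume rule is easy to play with in code. Here's a little Python sketch of my own (an illustration, not anything from Cantor or the formal proposal below): a collection of ordered pairs witnesses that there are at least as many Xs as Ys when it pairs all the Ys off one-one with some of the Xs.

```python
from itertools import permutations

def witnesses(pairs, xs, ys):
    """Check whether `pairs` is a one-one pairing of all the Ys
    with (some of) the Xs, i.e. a witness that |Xs| >= |Ys|."""
    firsts = [p[0] for p in pairs]
    seconds = [p[1] for p in pairs]
    return (len(set(firsts)) == len(pairs)       # nothing is first of two pairs
            and len(set(seconds)) == len(pairs)  # nothing is second of two pairs
            and set(firsts) <= set(xs)           # firsts drawn from the Xs
            and set(seconds) == set(ys))         # every Y gets paired off

def at_least_as_many(xs, ys):
    """There are at least as many Xs as Ys iff some pairing witnesses it."""
    return any(witnesses(list(zip(perm, ys)), xs, ys)
               for perm in permutations(xs, len(ys)))
```

The brute-force search over permutations is obviously hopeless for infinite collections, which is exactly why the set-theoretic version quantifies over a witnessing set instead.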

Since this set-theoretic version of “at least as big as” relies on there being a set in the model to represent the correspondence whenever there is such a correspondence, you can sometimes get models whose verdicts about which sets are bigger than which aren’t the ones you’d intuitively expect. That’s how you end up with things like Skolem’s paradox, which is the puzzle of how set theories that say (under their intended interpretations) that there are uncountably large sets can have models with only countably many things in the domain. We can sort of ignore this here, although if you know a lot more than I do about Skolem's paradox it may help to keep it in mind.

Suppose I want to do this pairing thing without set theory. One thing I could do is take “at least as many” as primitive, so I’ve got a predicate Xs ⪰ Ys, which takes plural terms on both sides, and is true just when there are at least as many Xs as Ys. That’s not really legitimate for this project though, because the good standing of the concept is exactly what we’re trying to establish. What I’ll suggest is that we use just enough set theory to make comparisons of size, but not all the extra stuff that leads to indeterminacies in which model we're talking about and whether or not the continuum hypothesis is true in it.

What do we need to define “Xs ⪰ Ys”? We’ve already got plural quantification, outsourcing the defence of its set-theoretic innocence to Boolos, as is traditional. What I’m suggesting is adding just ordered pairs of things which aren’t themselves ordered pairs, and then saying that there are at least as many Xs as Ys whenever there are some ordered pairs representing a one-one correspondence between some of the Xs and all of the Ys. And then you throw away the ordered pairs again, since ordered pairs are fictional and all.

So, you start off with the model M you’re interested in, and you want to extend it to a model M+ with Xs ⪰ Ys defined in it. To do that, you take another model N which is the same except you add in a bunch of ordered pairs. Whenever there are one or two things in the domain of M, there are the ordered pairs of them in N. None of the ordered pairs are duplicated, and there’s nothing else in N. Then you can define Xs ⪰ Ys as being true in M+ iff it’s true in N that there are some ordered pairs Ps such that nothing is the first of more than one p in Ps, and nothing is the second of more than one p in Ps, and all the firsts of a p in Ps are in Xs, and all the seconds of a p in Ps are in Ys, and all the Ys are the second of some p in Ps.

I’ve tried to be careful not to introduce any general set-theoretic stuff in the definitions, except for the ordered pairs. The idea is that given a model M of plural logic without ⪰, we can always pin down a unique model N, and then we can define a new model M+ of plural logic with ⪰ in terms of M and N. The M+ models constructed in this way are the admissible models for plural logic with ⪰. The way this definition goes is supposed to be unaffected by what is and isn’t true about the universe of sets out there, if it even is out there, and in particular it’s unaffected by the truth or otherwise of the set-theoretic version of the continuum hypothesis. This means we should be able to express the non-set-theoretic version of the continuum hypothesis that I talked about in the last post, purely in terms of plural logic and without leaving any hostages to set theory.

A potential source of problems is that the model theory for plural logic, just like the model theory for most things, tends to be given in terms of set theory. Can you avoid that, and just give it in terms of plural logic? I sort of expect you could, perhaps with a little extra stuff but way short of full set theory, although I’m not sure whether this is something anyone has taken it upon themselves to do. The idea would be that instead of saying things like “a model M is an ordered pair <D, V> where D is a set of objects and V is a valuation function”, you say “a model M is defined by some things the Ds, which are its domain, and…”. (This is the point at which it becomes difficult.) If set theory does turn out to be indispensable to the model theory, then there will always be a suspicion that the definitions are hostage to set theory. It’s a little bit like the problem of doing the model theory for non-classical logics in classical logic, or giving a model theory for variable domains modal logic without committing yourself to a possibilist ontology. I don’t really want to get into this debate because in debates like this there’s always a danger you’ll find Tim Williamson on the other side.

So, I’m not going to present a non-set-theoretic semantics for plural logic, and I’m also not going to defend the set-theoretic innocence of plural logic with a set-theoretic semantics. But when I try to formalize the method for constructing the M+s, I’ll try to mention sets as little as possible. In particular, the ordered pairs in the domain of the intermediate model won’t actually be ordered pairs. But the domains will be sets, and the extensions will be sets of ordered n-tuples of objects and/or sets, the way you’d normally do it if you weren’t worried about set theory. The idea is that if plural logic can be set-theoretically innocent unless the subject matter happens to be sets, then this construction is set-theoretically innocent too. The model theory helps us clarify what we're saying, but you still only have to commit to the entities in the domains.

Here’s the syntax of the language L. It doesn’t have ⪰ in it yet; adding that will make L+.
  • Singular names a, b, c, etc
  • Singular variables x, y, z etc
  • Plural names C, D, E, etc
  • Plural variables X, Y, Z etc
  • Predicates P, Q, R etc, which can have any finite number of places ≥ 1, and which can be singular or plural in each position.
  • A binary “one of” predicate <, singular in the first position and plural in the second.
  • A binary “among” predicate ⊑, plural in both positions.
  • A binary identity predicate =, singular in both positions. (Plural identity can be defined in terms of = and < in the normal way if need be. Our language L can't express many-one identities, even though regular readers will recall that I think some many-one identities are true.)
  • Atomic wffs composed out of predicates and names or variables in the normal way.
  • Compound wffs composed from wffs and & and ¬ in the normal way, with other connectives defined as normal.
  • Quantifiers ∃ and ∀. ∃! is defined as a unique-existence quantifier in the normal way.
  • If φ is a wff and v is a singular or plural variable, ∃vφ and ∀vφ are wffs.

Now the semantics:
  • A model M is an ordered pair <D, V> where D is a set of objects and V is a valuation function assigning extensions to the names and predicates of L.
  • An assignment A is a function from singular variables to members of D and from plural variables to non-empty subsets of D.
  • If t is a singular name, VA(t) = V(t) ∈ D.
  • If t is a singular variable, VA(t) = A(t) ∈ D.
  • If t is a plural name, VA(t) = V(t) ⊆ D, and must be non-empty.
  • If t is a plural variable, VA(t) = A(t) ⊆ D, and must be non-empty.
  • If P is an n-place predicate, V(P) is a set of n-tuples <o1, o2, …, on>, where oi ∈ D when P is singular in the ith place, while oi is a non-empty subset of D when P is plural in the ith place.
  • V(<) is the set of ordered pairs <x, y> where y is a subset of D and x is in y.
  • V(⊑) is the set of ordered pairs <x, y> where y is a subset of D and x is a subset of y.
  • V(=) is the set of ordered pairs <x, x> where x is in D.
  • If P is an n-place predicate and t1 … tn are terms, then VA(Pt1…tn) = T if <VA(t1), …, VA(tn)> ∈ V(P), and VA(Pt1…tn) = F otherwise.
  • The values for & and ¬ are assigned truth-functionally in the normal way.
  • VA(∃vφ) = T iff there is an assignment B which differs from A at most in the value for v, such that VB(φ) = T, and VA(∃vφ) = F otherwise.
  • VA(∀vφ) = T iff all assignments B which differ from A at most in the value for v are such that VB(φ) = T, and VA(∀vφ) = F otherwise.
  • M(φ) = T iff VA(φ) = T for all assignments A, and M(φ) = F otherwise.
  • Σ ⊨ φ iff for every model M such that M(ψ) = T for all ψ ∈ Σ, M(φ) = T as well.
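To make the semantic clauses concrete, here’s a toy model checker in Python for finite models. It’s my own hypothetical sketch, covering only ∃, & and ¬ (the other connectives and ∀ being definable from these), with plural variables ranging over non-empty subsets of the domain, just as in the clauses above.

```python
from itertools import combinations

def nonempty_subsets(domain):
    """All non-empty subsets of a finite domain: the possible values
    for plural variables under an assignment."""
    items = list(domain)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def evaluate(formula, model, assignment):
    """Truth of a formula in a finite model, relative to an assignment.
    Formulas are nested tuples: ('P', t1, ..., tn) for atoms,
    ('not', f), ('and', f, g), ('exists', v, f) for singular v,
    and ('existsP', V, f) for plural V."""
    D, V = model
    op = formula[0]
    if op == 'not':
        return not evaluate(formula[1], model, assignment)
    if op == 'and':
        return (evaluate(formula[1], model, assignment)
                and evaluate(formula[2], model, assignment))
    if op == 'exists':    # singular quantifier: values range over D
        v, body = formula[1], formula[2]
        return any(evaluate(body, model, {**assignment, v: d}) for d in D)
    if op == 'existsP':   # plural quantifier: non-empty subsets of D
        v, body = formula[1], formula[2]
        return any(evaluate(body, model, {**assignment, v: s})
                   for s in nonempty_subsets(D))
    # atomic case: look the terms up, then check the predicate's extension
    args = tuple(assignment.get(t, V.get(t, t)) for t in formula[1:])
    return args in V[op]
```

For example, with D = {1, 2} and V(<) built as in the clause above, ∃X(1 < X) comes out true, and ∃x ¬(x < X) is true relative to an assignment giving X just {1} but false relative to one giving X all of {1, 2}.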

This is the basic logic. It isn’t supposed to be original. It’s supposed to be unoriginal, because if it was original I’d be in danger of having to defend its set-theoretic innocence myself, instead of outsourcing the job to Boolos. (I'm not sure if Boolos himself is the first person to formalize the model theory along these lines, but other people in the tradition use a set-theoretic model theory and lean on Boolos for the case for ontological innocence. I think they do, anyway. If I'm honest it's a long time since I read Boolos's paper. I think he says something pretty persuasive about how when you eat a bowl of Cheerios you're eating the Cheerios, not a set of Cheerios.) I felt it was important to write it down so you could see just how much set theory is involved, and what it's doing.

Now we construct a model M+ = <D, U> from a given model M = <D, V>. We start by constructing a model N = <E, W>.
  • E = D ∪ G, where G is the set of objects representing ordered pairs of members of D.
  • D and G are disjoint.
  • Introduce two binary predicates P1 and P2 which are undefined in M. These are singular in both positions. You can think of N as a model of an expanded language L*.
  • For every object x in D, there is one object y in G such that <x, y> is in W(P1) and W(P2).
  • For every two objects x and y in D, there is exactly one object z in G such that <x, z> is in W(P1) and <y, z> is in W(P2).
  • Nothing else is in G.
  • For every object x in G, <y, x> is in W(P1) and <z, x> is in W(P2) for only one y and only one z.
  • Nothing else is in W(P1) or W(P2).
  • Let A range over assignments on E, the domain of N.
  • Now we define the extension of ⪰ in M+, that is U(⪰), in terms of the assignments A relative to which W evaluates an open sentence with two free plural variables X and Y as true.
  • Let φ be ∃Z[∀y(y<Y → ∃!z[z<Z & P1yz]) & ∀z(z<Z → ∃x[x<X & P2xz & ∀w[(w<Z & P2xw) → w = z]])]
  • U(⪰) = {<s, t>: s and t are non-empty subsets of D, and WA(φ) = T for some assignment A such that A(X) = s and A(Y) = t}
  • In words, φ is meant to mean “There are some things [stand-ins for ordered pairs] such that every Y is the first of exactly one of them, and each of them has a distinct X as its second.”
  • Now we can say that a model of L+ is admissible iff it is the model M+ for some admissible model M of L.
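Here’s the whole construction run mechanically on a finite domain, in Python. This is my own toy rendering, with tagged tuples standing in for the objects of G, and with the search for a witnessing plurality Z done by brute force rather than by literally evaluating φ; following φ’s convention, the Ys go in as firsts and the Xs as seconds.

```python
from itertools import product, combinations

def build_pair_model(D):
    """Build the intermediate model N from a domain D: one fresh stand-in
    object for each ordered pair of members of D. Stand-ins are tagged
    tuples, so G is disjoint from D and pairs of pairs never arise.
    W(P1) relates each pair's first coordinate to the pair, W(P2) its second."""
    G = {('pair', x, y) for x, y in product(D, repeat=2)}
    P1 = {(x, ('pair', x, y)) for x, y in product(D, repeat=2)}
    P2 = {(y, ('pair', x, y)) for x, y in product(D, repeat=2)}
    return D | G, P1, P2

def geq(xs, ys, D):
    """Xs ⪰ Ys in M+: true iff in N some plurality Z of pair stand-ins
    gives every Y exactly one pair (as first) and gives those pairs
    distinct Xs as seconds -- what the open sentence φ asks for."""
    E, P1, P2 = build_pair_model(D)
    candidates = [p for p in E
                  if isinstance(p, tuple) and p and p[0] == 'pair'
                  and p[1] in ys and p[2] in xs]
    for Z in combinations(candidates, len(ys)):
        firsts = {p[1] for p in Z}
        seconds = {p[2] for p in Z}
        if firsts == set(ys) and len(seconds) == len(ys):
            return True
    return False
```

The ordered pairs only ever live inside `build_pair_model`; `geq` returns a plain verdict about the original domain, which is the throwing-away-the-pairs step.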

That’s the proposal formalized. There’s a lot of set theory in the formalization, and indeed there’s so much that you could be forgiven for forgetting that I was trying to avoid set theory at all. But I was trying to avoid set theory. There are two things set theory is doing there. One is to construct the models of plural logic. I already said I wasn’t going to try finding a non-set-theoretic model theory for plural logic. The other thing is a very weak set theory that adds something equivalent to ordered pairs to the domains of the intermediate models (N in the construction), but the ordered pairs don't themselves form further ordered pairs. How should we interpret this? I think the most principled way for a fictionalist about sets like me is to interpret those models as representing a fiction. The fiction says that every one or two objects that aren’t themselves ordered pairs form one or two ordered pairs respectively. (So for every non-pair x there’s <x, x> and for every non-pair x and non-pair y there are <x, y> and <y, x>.) When it’s true in the ordered pairs fiction that there are some ordered pairs representing a one-one correspondence between some of the Xs and all the Ys, it’s true in reality that there are at least as many Xs as Ys.
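One small sanity check on that determinacy claim (again just my own illustration): the fiction fixes the number of pair stand-ins exactly, since a domain of n non-pairs yields n pairs of the form <x, x> plus n(n − 1) pairs with distinct coordinates, i.e. n² stand-ins in all, with no freedom for different models of the fiction to disagree.

```python
def pair_standins(n):
    """Number of ordered-pair stand-ins the fiction adds to a domain of
    n non-pairs: n pairs <x, x> plus n*(n-1) with distinct coordinates."""
    return n + n * (n - 1)  # = n**2
```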

The point of using the ordered-pairs fiction instead of the full-ZFC fiction is that the ordered-pairs fiction specifies a single fully determinate model N, given a model M for reality-minus-size-comparison-facts. You then use this to get a model M+ for reality-including-size-comparison-facts. Full ZFC doesn’t specify a single model, and the different models may have different one-one correspondences in them, which will give you different size-comparison facts. The models themselves are set-theoretic objects. I’m not sure how much of a problem that is. I think the kind of answer I’d like to give is along the lines people give for variable domains modal logic: we already understand plural logic, and the use of this model theory is just supposed to precisify which particular thing that we already understand we’re talking about. Someone who thinks you can’t understand plural logic without set theory won’t buy that, and those are the people I’m referring to Boolos.

Maybe a promising way around this would be to construct a model theory for plural logic along the same lines as the ordered-pairs fiction itself. There’s a whole lot of ZFC not being used in the model theory, so maybe you could have a much lower-powered fiction which could still do the job but didn’t have the underspecification you get with ZFC. I only have a vague idea of how that might go though, and there could be straightforward reasons why it wouldn’t work.

In closing I’d like to make it clear what my ambitions are. I claimed in my last post that we could understand a version of the continuum hypothesis independently of set theory. The continuum hypothesis without sets, or CHWS, is a statement about how many real numbers there are. To make sense of CHWS without using sets, we need to understand how there can be more Xs than Ys when there are uncountably many of each. Normally we do that using sets. I’ve been trying to show how we might do it while avoiding full ZFC and its indeterminacies, using only plural logic and the much lower-powered and more determinate fiction of ordered pairs. I’m trying to show that we can understand CHWS as something with a determinate answer, even if we’re fictionalists about sets. I’m not trying to offer any reason for optimism that we could ever settle CHWS. And if I had to guess, I’d say we probably never will.


  • Boolos, G. 1984: “To Be Is to Be a Value of a Variable (or to Be Some Values of Some Variables)”. Journal of Philosophy 81(8): 430–449.