Saturday, December 22, 2018

Rationalism And Science

I've been thinking a bit about rationalism, empiricism and science, and I tried writing some notes to organize my thoughts about it. It's all pretty half-baked and unsatisfactory, but I've turned it into something I can put here anyway, in case it's the kind of thing anyone likes to read.

Rationalism vs Empiricism

  • Roughly: empiricists think knowledge only comes from experience, while rationalists think pure reason has an important role.
  • Rationalists tend to think experience has some role too, although they might think that the kind of knowledge that involves experience isn't so good, or perhaps isn't knowledge strictly speaking at all.
  • Empiricists sometimes think it's OK to get mathematical and/or logical knowledge from pure reason. Maybe you can get some knowledge of analytic truths that way too, although under the influence of Quine (and Morton White, usually via Quine) they might not think there are any properly analytic truths.
  • If you want to firm up this second source of knowledge as part of your position, you can appeal to Hume's Fork. Hume's Fork is the idea summed up in this quote from the end of his Enquiry Concerning Human Understanding (emphasis in original):
    "If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion."
    It takes some interpreting to get this passage to be arguing for exactly the empiricism-plus-maths-and-logic position, but it's in the ballpark. If you do adopt this package, you might even be able to get away with calling yourself a logical empiricist, like Rudolf Carnap. Carnap is very in at the moment, and is also generally considered to have been a great guy.

Descartes And Leibniz

  • They were scientists and mathematicians, but they were also metaphysicians.
  • As philosophers, we tend to look more at their metaphysics than their science, unless we're specialists. The science is more obsolete than the metaphysics (although perhaps not more wrong), science isn't really taught via 300-year-old texts even when it isn't obsolete, and we're not studying science anyway. And we're probably even less interested in reading their maths than in reading their science.

The Appeal Of Empiricism

  • When you're reading their metaphysics, sometimes they will invoke principles of pure reason to derive their metaphysical conclusions. For example, Leibniz invokes such useful gizmos as the Principle of Sufficient Reason and the Identity of Indiscernibles. Descartes says things like there must be at least as much reality in an efficient and total cause as in the effect of that cause (from Desmond M Clarke's 1998/2000 translation of Descartes' third Meditation). And Spinoza, the other member of the Big Three rationalists, takes this kind of thing to a whole new level with an axiomatic presentation of his grand metaphysical system in the Ethics. You have to wonder what justifies their principles, and sometimes it can seem like they're just making up premises on the fly to get themselves out of an argumentative tight spot. I suppose this is particularly the case with Descartes, who makes things even worse by appealing to the light of nature, or the natural light of reason, and we often read Descartes when we're young and impressionable. Perhaps harping on about the natural light of reason makes a good impression on some young philosophy students, but it didn't make a good impression on me.
  • The metaphysical conclusions that they derive aren't even especially attractive to a lot of us. The existence of God, the non-physical soul, pre-established harmony and so on. We feel like we could easily get by without them, and then we wouldn't have to accept the premises, or the embarrassing Light Of Nature methodology by which they arrived at them.
  • Science, on the other hand, is an empirical discipline. We're quite sure of that. So while we might not know exactly what goes on across campus in the science departments, we don't really worry that anything of importance will be lost to science if we become empiricists. So we can keep science much as it is, while taking a suitably unenthusiastic attitude towards metaphysics, saying that metaphysical theses are either unknowable, or probably wrong, or Not Even Wrong.
  • We can still do a little bit of metaphysics, criticizing theses for conflicting with our best science, or for being internally incoherent. And we can also sometimes have a go at offering an empiricist critique of some of the things scientists get up to, when we take them to be straying into metaphysics. The interpretation of quantum mechanics is a good source of material there, although it is often difficult for philosophers to understand the relevant science well enough to get taken seriously. (And even if they do understand the science, getting taken seriously still isn't a given.)

Pure Reason In Science?

  • All this seems like a very nice package, but there's a fly in the ointment, which is that scientific methodology may actually include an awful lot of rationalism. Sure, it's an empirical discipline, in that scientists gather data and do experiments and so on. But they also spend a lot of time filling whiteboards with equations, and they're even known to have flashes of inspiration in the shower, as if their problem has been suddenly illuminated by the light of nature. (Archimedes' alleged eureka moment in the bathtub is not a good example of this, since his flash of inspiration was about his displacement principle, and one has experiences relevant to that in a bathtub. But it does happen to scientists working on things besides bathtubs too.)
  • Empiricists do have resources to push back on this line of thought. They're still allowed to use pure reason for maths, and for logical inferences. Perhaps that's what's going on with the whiteboards and the flashes of inspiration, and so under closer examination it will turn out that scientists aren't doing anything outside the empiricist's rules.
  • Now, the closer examination of actual scientific practice is something that has been done an awful lot by other people, under the auspices of history and/or philosophy of science. I blush to confess that I do not have much familiarity with any of this literature. If you do, then please do correct me if I'm wrong, because I'm going to make a naive pessimistic case that under closer examination what we'll find is that science is in fact a bit of a Cartesian free-for-all.

The Problems Of Induction

  • Induction, roughly, is when you take some observations, find a general rule that fits the observations, and then apply that rule to make predictions about new observations. If it didn't work, then it'd be hard to see how science could work. But there are two reasons why it is hard to see how induction could possibly work.
  • The classic problem of induction is the problem of justifying the idea that old patterns can be expected to persist at all. We set up the dilemma as follows. (It'll be based on AJ Ayer's presentation of the problem in chapter two of Language, Truth and Logic, although Ayer himself thinks the problem's very hopelessness makes it a pseudo-problem as traditionally conceived.) Either inductive reasoning is justified deductively or inductively. An inductive justification would point to how old patterns have persisted after being identified before, and this is itself a pattern we can expect to persist into the future. But this reasoning seems circular. The deductive justification is in bad shape too, on the grounds that there just isn't any logical inconsistency in patterns being broken. A coin landing heads twenty times in a row is consistent with its landing heads the twenty-first time, as an inductive reasoner would presumably predict, but it's also consistent with its landing tails the twenty-first time, providing a counterexample to any attempted deductive justification of induction.
  • The new riddle of induction, which we have Nelson Goodman to thank for, is still a problem even if you can solve the classic problem of induction. This time the problem is no longer showing that patterns will persist, but deciding which patterns we should expect to persist. There are lots of patterns consistent with any dataset, and we will get different predictions depending on which patterns we identify. Of course, there are some patterns that humans identify more readily than others. But justifying this as anything more than a cognitive bias, while still working within the empiricist's rules, is difficult.
  • Rationalists, as we've learned, can appeal to the light of nature to get themselves out of just this kind of tight spot. Can't justify induction inductively or deductively? Just appeal to the light of nature! Not sure which pattern is more likely to persist? Let the light of nature show you the way.
  • This is of course a bit flippant. Rationalists do have resources to invoke premises when an empiricist would be in a tight spot, but they can't do this arbitrarily, at least not by their own lights. Are the resources you need to deal with induction the kind of resources rationalists would appeal to?
  • I think they are, or could well be. To get a handle on why, consider Plato's theory of forms. One of the problems the forms are supposed to solve is how we can have knowledge of general things, even though we only experience particular things. Plato's idea is that knowledge of general things is knowledge of transcendent forms, which are somehow reflected in the particular things, and which we apprehend with the intellect rather than with the senses. Mathematical knowledge is knowledge of forms, but scientific knowledge is knowledge of forms too. Aristotle didn't think that forms were transcendent, but (as I understand it) he still thought that scientific knowledge was knowledge about forms, or essences if that's different, and that in any case these are apprehended by the intellect. The exact mechanism by which the intellect apprehends the forms is hard to pin down - Plato suggested we remember them from directly experiencing them before we were born - but the idea of scientific knowledge being knowledge of something general apprehended by the intellect has had real staying power, and is the kind of thing rationalists in particular are into.
  • This gives rationalists a bit of purchase on the problems of induction. For the classic problem, the idea is that we can learn something about events before we observe them, because the same forms show up in those events as the ones we have already observed. For the new riddle, the idea is usually that not all rules are equal because some correspond simply to forms, and some don't. Not every gerrymandered property you might think of corresponds to a form, and the ones that don't aren't as likely to fit in robust generalizations.
  • You can try to rerun the arguments against the rationalist if you like. How do we know that forms will carry on behaving the same way we're used to? How do we know which properties correspond to forms? Well, because we examine them with our intellects and discover which forms there are and that they behave uniformly. How do we pull that off? Good question! But it seems we do pull it off, since science works. Denying that it could possibly work flies in the face of the evidence, and denying that it could work this way risks begging the question against the rationalist.
  • So, we've got an argument that science has a problem that rationalists have the resources to solve and empiricists don't. For that, we didn't need to look at the actual practice of science at all. But now things get a little trickier. I'm going to make two suggestions, neither of which I have the expertise to properly back up.
    • First, a large part of what Descartes and Leibniz liked about being rationalists was that it gave them the resources as scientists to respond to just this kind of problem.
    • Second, not much has changed. Scientists are still leaning heavily on roughly the same rationalist methodology that Descartes was into, and being wildly successful with it too.

How Descartes And Leibniz Did Science

  • In the seventeenth century it was possible to feel really good about the prospects for science. The progress they were making made understandable a level of optimism that has perhaps only been equalled, before or since, by physicists in the late 1920s. They really felt like they would soon have it all figured out, and that when they did the answers would be simple, beautiful, and powerful.
  • Descartes and Leibniz also took it more or less as a methodological axiom that the world was nicely ordered along principles that could be understood and discovered by humans. This is an idea that goes back at least to Plato's Timaeus, and on some accounts goes all the way back to Thales and is what sets philosophers like him apart from the storytellers like Homer and Hesiod who came before him. The Stoics were big on the idea, it persisted in some form throughout the middle ages, and in the seventeenth century it was as popular as ever. It's an appealing idea, and people working with the idea keep making discoveries and building cool stuff, while sceptics just bring everybody down. When asked to justify this methodological axiom, neither Descartes nor Leibniz was above talking about God. Both of them thought that God in his goodness had given us rational faculties which when used properly would be able to get us significant scientific knowledge of the world. Leibniz in particular also thought that God's perfection meant that he would make the world excellent - the best of all possible worlds, as they say - and that we could work out what would count as excellent, and use this to guide our theorizing both in science and metaphysics.
  • So, how do you do science with this attitude? Well, prompted by observations and guided by the light of nature, you come up with a beautiful, simple, powerful theory of how the world works. You make the theory nice and mathematical, and maybe come up with some new maths like calculus (Leibniz) or co-ordinate geometry (Descartes) especially for the purpose. You do experiments to test the theory. If the predictions are borne out, great! If not, maybe there's something wrong with the experiments. Or maybe there's something wrong with the theory. Eventually you come up with something nice that fits the results you're getting. And because the world does in fact conform at least approximately to the kind of principles that seem simple and beautiful to seventeenth-century optimists, the whole thing went swimmingly.

How We Do Science Now

  • Basically what I want to say is that we do things much the same way. We might not think that's what we're doing, but it is. More or less.
  • When scientists are explaining how science progresses now, they'll sometimes offer an account based on Karl Popper's ideas. I haven't read Popper, and I expect the scientists often haven't either, because the account isn't really very plausible and it'd be uncharitable to attribute it to Popper himself. Here's how it goes. Scientists look at the data they're already aware of, and they come up with a theory that fits. The important thing about the theory is that it be falsifiable, in that there are experiments they could do or observations they could make that would show that the theory was false. Then they do experiments and/or make observations. If the results are what the theory predicts, the theory is more likely to be true. If the results aren't what the theory predicts, the theory is false and so they go back to the drawing board.
  • When pressed, people usually admit that this does not really reflect scientists' actual responses to data not fitting their theories. There might be something wrong with the experiment, or maybe the theory does predict it after all but there was something in the initial conditions you weren't aware of. A classic example is the orbit of Uranus: it didn't fit with the predictions people made for it using Newtonian mechanics and gravity, but that's just because they didn't know Neptune existed. Include the gravitational effects of Neptune and Newtonian mechanics and gravity aren't falsified after all. That's how Neptune was discovered, or so I'm told. (The hero of that story is Urbain Le Verrier.) They tried to pull the same trick with a planet inside Mercury's orbit, which they called Vulcan, but there it turned out Newtonian mechanics and gravity really were the problem, and Einstein's theory of gravity explains Mercury's orbit much better. But there was a long time in between unsuccessfully looking for Vulcan and rejecting Newtonian gravity.
  • OK, so let's put aside naive falsificationism. How does science work, then? Well, I'd suggest they do pretty much what Descartes did. You have a sense of the kind of data you're trying to fit your theory into, and then you make something up that you're happy with, guided by the light of nature. Your theory will have some blanks in it like the mass of the electron or whatever, and when you find some way of taking measurements to fill in the blanks, you do. Then you cling on to the theory like grim death in the face of both friendly and unfriendly data until finally you come up with something you're not quite as unhappy with. This account adopts some themes people tell you come up in Kuhn and Lakatos - I haven't read them either - and what it agrees with Popper about is that the whole process works much more smoothly if your theory is the kind of thing that data can be friendly or unfriendly to. (That's why it doesn't work so well for metaphysics, I suppose.)
  • The account is a total caricature, of course. But I'm hoping it's a caricature of the actual practice of science, unlike naive falsificationism, which is a caricature of what some scientists imagine they must be doing because rationalism somehow doesn't seem hard-nosed enough. Now, while it might sound like I'm being a bit mean to scientists, that's not my intention. If the rationalists are right, then this could be exactly what scientists ought to be doing. And whatever they're doing, they seem to be doing it very well.

The Moral Of The Story

  • Sometimes people give the rationalists a hard time. We do sort of acknowledge that Descartes, Spinoza and Leibniz were great philosophers - I mean, everyone says so, right? - but when we look at their philosophical systems what we see are bad arguments for false conclusions. It's all too easy to attribute this to a bad methodology, saying that the problem is that they were rationalists. Maybe we should ditch the light of nature, stick with empirical methods plus maths and logic, embrace science and be quietist about metaphysics.
  • What I'm suggesting is that this line of thought has a great big hole in it, which we don't notice because we don't think of the rationalists as scientists. Or perhaps we do think of them as scientists, but don't think of that as relevant to their philosophy. But it is relevant. They were scientists, they were good at it, they did science as rationalists, and we're still doing that today. If you want to assess rationalism at its best, then don't consider it as a method in metaphysics; consider it as a method in science. That's the moral of the story. The question is, is it a true story?

What's Wrong With This Picture?

  • I mentioned earlier that there's a lot of stuff here that I'm pretty ill-informed about. Maybe you thought I was being disingenuous, but I wasn't. There's a lot to learn here, and I haven't learned it. And if you have, you probably don't need me to tell you that. The evidence is right there on the page.
  • Nonetheless, this is where I am at the moment with this stuff. From where I'm sitting, it looks like you can't really get anywhere in science as an empiricist, the actual practice of science bears this out, and the rationalists should be given a bit more credit for it. So, what might I be wrong about?
  • First, I might be wrong about empiricism's platform. Maybe they've got some clever solutions to the issues with induction that I was worrying about. Or maybe they're willing to embrace a greater level of scepticism than I appreciate.
  • Second, I might have Descartes and Leibniz wrong. I've read a fair bit of both, but there's an enormous amount left that I haven't read, and I do find the scientific practice they envisage all a bit mysterious. That's why I describe it in these scathing terms, before insisting that what I described is the tried and tested scientific method that built the modern world. I think we should have a fairly low prior that a method that sounds like that could build a world that looks like this.
  • Third, I might have modern science wrong. Who knows what these people get up to? They don't even seem to know. But those historians and philosophers of science I mentioned before probably have some idea, and I could try checking some of that out. I could at least read Kuhn. Everyone reads Kuhn.
  • Fourth, I might be wrong about something else. I don't know what. But the whole situation is very unsatisfactory, and I think I must be missing something.

Reading List

  • So, I've got some reading to do! If you've got this far and you think I'm a doofus who should calm down and read X and then everything will become clear, then please point me towards X in the comments. In the meantime, here are some things I could look at.
  • The logical positivists/empiricists in the first half of the twentieth century were often pretty clued up about science, but they were also pretty thoughtful and self-aware about their empiricism. So perhaps I should read some Schlick or something.
  • I found a book in a second-hand shop the other day by Daniel Garber called Descartes' Metaphysical Physics. I probably won't read it all, because I am a non-serious person, but it looks relevant and I'll probably read some of it.
  • I could have a look at some of Descartes and Leibniz's more scientific stuff, and see how they talk about what they're doing.
  • I've only got a pretty sketchy understanding of Plato and Aristotle's understanding of form and how it helps with epistemology, so I should probably read something about that.
  • Like I say, I could read Kuhn, at least The Structure of Scientific Revolutions. I should probably read something by Lakatos too. Popper is less of a priority.
  • I saw an NDPR review of a book called Platonism at the Origins of Modernity: Studies on Platonism and Early Modern Philosophy, and it looked like some of the papers in that would be relevant.
  • Once I've got through that lot I'll probably be down multiple rabbit-holes, so for me to put anything else on the list now would probably be a bad idea. But if you can point me to anything that would sort all this out for me, I'm all ears.

Friday, October 12, 2018

Gambling With The Metaphysics Oracle

A Tax On Bullshit

There's a lot of bullshit around. Wherever you look, people are confidently making predictions, often while being paid to do so, and by the time we've been able to test these predictions the people who made them are long gone, sunning themselves on a beach somewhere, spending our money and laughing at us. It's a sorry state of affairs. What can we do about it?

One idea, and it's a good one, is to get people to put their money where their mouths are. Offer people bets on their predictions. If they really believe what they're saying, they shouldn't mind having a little bet on it. If they don't believe it and bet anyway, then at least their bullshit is costing them something. People sometimes call betting on predictions a "tax on bullshit". The person I've heard talking most enthusiastically about this idea is Julia Galef, who I suppose is a pillar of what people refer to as the Rationality Community. Apparently she does it in her everyday life. She's always sounded to me as if she has a fun life, but I think I'm probably too much of an epistemic pessimist to fit in well with the Rationality Community myself.

Regular readers may recall that I sometimes worry about bullshit in philosophy. A lot of the claims philosophers make aren't really very testable at all, and so you can keep up your bullshit for thousands of years without ever being found out. Of course, if something isn't testable then it's not practical to bet on it. But lately I've been wondering how philosophers, particularly metaphysicians, would react if we somehow could offer them bets on their claims. Peter van Inwagen (1990), for example, doesn't think tables exist. When we think we're in the presence of a table, he thinks we're really just in the presence of some simples arranged tablewise. But if we could go to an Oracle to settle the question, would he put his simples arranged moneywise where his simples arranged mouthwise are?

Taking The Bet

The simplest response is for the philosopher to just take the bet, and offer us very favourable odds corresponding to how very sure they are that they're right. Maybe the methods we use for answering metaphysical questions aren't so different in principle from the methods we use for answering any other kind of question, and if we've got what we take to be a good argument then we should be confident in its conclusion. I think that plenty of metaphysicians would have no problem at all taking these bets. They mean what they say quite literally and they are confident that their answers are right.

Declining The Bet

A second response is not to take the bet, on the grounds that you don't actually believe the metaphysical positions you've taken. There are at least two ways this could work, one obvious and disreputable and the other less obvious and more respectable. The obvious one is that you're not actually committed to these positions the way you said you were. You were bullshitting, perhaps without properly realizing it, and now you've been found out. The more respectable one is that you are committed to your metaphysical positions, but the mode of commitment you take to be appropriate to metaphysical positions is not belief. It's something else. Helen Beebee (2018) argues for something along these lines, building on Bas van Fraassen's work on the analogous question in the philosophy of science. For Beebee it's largely a response to the phenomenon of expert disagreement in philosophy and concern about the reliability of philosophical methods, while for van Fraassen I understand it's more about the underdetermination of scientific theories by evidence[1]. For Beebee and van Fraassen, this kind of commitment is less about believing the untestable parts of the theory and more about committing oneself to participating in a particular research programme.

Rejecting The Setup

A third response is to reject the bet on the grounds that you reject the authority of the Oracle. How can you reject the authority of an Oracle? The basic idea is that we can't imagine anything the Oracle could say or do to convince us that our position is wrong, but you have to be careful with this sort of dialectical move. You don't want to be the sort of person who responds to the trolley problem[2] (Foot 1967) by saying they'd dive off the trolley and push the workers out of the way, or some other such silliness. This kind of move usually just serves to derail the conversation and prevent us from engaging with the point the thought experiment was trying to make. In the trolley problem you just stipulate that the situation is simple and the outcomes given each action are certain.[3] In the Oracle thought experiment you similarly stipulate that the Oracle is reliable (or infallible) and trusted by all parties. Nonetheless, I think that sometimes it does tell us something useful if we push back on the setup.

The cleaned up certain-outcomes version of the trolley problem isn't very realistic, but it's still something we can imagine. With at least some metaphysical questions, however, the Oracle thought experiment might give rise to what philosophers call imaginative resistance. This is what happens when what you're being asked to imagine somehow doesn't make sense to you, to the point that you struggle to imagine it. It can happen in various ways, including when a story is blatantly inconsistent, or when the moral facts in a story as stipulated conflict with the moral judgements we're inclined to make ourselves when given the non-moral facts. I want to suggest that this imaginative resistance is an indication that even if we take for granted that the Oracle is reliable and that we trust it, this might not be enough. We might still disagree with it, for reasons other than our not trusting it.

I can think of a couple of kinds of case where this situation might arise. Both embody a kind of Carnapian attitude towards philosophical questions. First, suppose we're asking the Oracle about whether there are tables, and it says that there are. Van Inwagen could respond in a couple of ways:

  • "Fair enough; that's a weird and oddly cluttered world, and there was no way for me to find out that there were tables, but I guess you're the Oracle here."
  • "Look, if you want to describe the world as having tables in it, that's up to you. I'm going to keep describing it as not having tables. We're both right by the lights of our own description schemes, and choosing between the schemes is a practical matter about which you're not the boss of me."4

Second, suppose that we're asking the Oracle what the correct analysis of knowledge is, and it says the correct one is Robert Nozick's (1981) counterfactual truth-tracking analysis. We point out all the bizarre results this commits us to as outlined by Saul Kripke (2011), and the Oracle just shrugs, says that's what the word "know" means, and presents a bunch of linguistic usage data to back up its claim. Again, two responses:

  • "Fair enough; the word 'know' isn't as useful as we thought it was, and we can be forgiven for thinking Nozick was wrong, but it means what it means and we must accept that."
  • "Look, if that's what the word means, then the word isn't fit for purpose. We need a concept of knowledge we can use for talking about important issues about humans' access to information about the world, and the concept embodied by this linguistic usage data simply can't be used for that."

These two cases are quite similar. The difference is that in the tables case the philosopher says they weren't wrong about anything: the Oracle and the philosopher have just made different choices, and they are the authorities over their own choices. In the knowledge case the philosopher is shown to be wrong about who knows what, but they push back by saying this just means we need different concepts. The cases kind of shade into each other a bit, but I think the distinction is there, and that the knowledge and tables cases are on different sides of it. Van Inwagen would not be surprised to be shown that we talk as if there are tables. Kripke would be surprised to be shown that we talk as if Nozick's is the correct analysis of the concept expressed by the word "know". That's a difference.

Now, even if you take one of these Carnapian lines, the Oracle could still push back and say that actually you're wrong about whose concepts are better. It might not want to do that in the knowledge case; it might agree that our word "know" attaches to a concept that isn't very useful. But the point is that the Oracle knows everything there is to know, and so it might know something that would make you change your mind. The thought here is along the lines of what Carrie Jenkins (2014) argues for and calls Quinapianism: the decisions over which concepts to use to describe the world are up to us the way Carnap thinks, but our views about which concepts are best are revisable in the light of new information the way Quine thinks all our cognitive commitments are. But even if the Oracle's omniscience gives it an advantage over us, what we end up with here is still a philosophical discussion of the more familiar kind. The Oracle makes the case for its recommended set of concepts, but it's still up to us which concepts we end up using.

So What?

I've had a bit of fun thinking about this, but does it tell us anything about anything? I think it does. I'm inclined to take philosophical questions at face value, and to have the same commitments with respect to them inside and outside of the philosophy room. If I'm bullshitting, I'm not consciously or deliberately bullshitting. I've got a lot of philosophical commitments, albeit subject to a great deal of uncertainty, and I'm sincere about them. But I think my responses to this thought experiment vary a lot depending on which philosophical question we're talking about. Sometimes I think I'd take the bet. Sometimes considering being offered a bet makes me feel more uncertain. (I guess in these cases either I'm being called on my bullshit or the feeling of added uncertainty is itself unjustified. Perhaps it's rooted in risk aversion.) And sometimes I come over a bit Carnapian and get one or other kind of imaginative resistance. (I'm not sure I ever feel the way Beebee suggests I should feel, but I don't understand her position terribly well, and in any case I've only run the thought experiment on a few questions.)

This variation is interesting to me. When people are talking about the status of philosophical propositions and beliefs, they sometimes make it sound like they think we should go for the same response to everything, or perhaps one response for ethics and another for everything else. But I feel very torn between the different responses for a lot of questions, and I lean quite different ways for different questions even within the same branch of philosophy. So when Beebee, Carnap and the rest are putting forward views on the status of philosophical disputes, an answer that works well for one dispute may not work so well for another. One more thing to worry about, I guess.

Notes

[1] There is a connection here, in that van Fraassen is sceptical about the reliability of scientific methods to verify the parts of theories that go beyond their empirical adequacy, for example by positing unobservable entities. I'm not a good source for van Fraassen though: most of this is coming from Beebee's account of his position.

[2] The trolley problem is a thought experiment where someone driving a train (or trolley) is going to hit five workers on the tracks, and the only way to avoid killing them is to steer down a side track and kill another, different worker. The original puzzle is explaining why the driver ought to steer and kill one to save five, even though in some other situations it's better to do nothing and let five people die than to act to save them at the cost of killing someone else. Foot gave the example of framing someone to prevent a riot, and another common one is killing someone to use their organs, which I think is due to Judith Jarvis Thomson (1985). Since Thomson's paper there has been a large research programme involving variants on the trolley problem. Some people think it is silly and dismissively call it 'trolleyology'. My own view is that it's unfairly maligned, although I do quite like the word 'trolleyology'.

[3] Foot does acknowledge that the trolley case isn't especially realistic and that the worker might be able to get out of the way. But she also notes that the relevant aspects of the outcomes often really are more or less certain in the real-life medical situations she's using the trolley problem to illuminate.

[4] As I understand him, van Inwagen himself is probably enough of a realist about metaphysics that he should go for the first answer rather than the second. But the availability of the second is what I'm interested in here, and other philosophers do apply a Carnapian approach to questions about composition, including the question of whether there are tables. The main person for this approach is probably Eli Hirsch.

References

  • Beebee, Helen (2018). I - The Presidential Address: Philosophical Scepticism and the Aims of Philosophy. Proceedings of the Aristotelian Society 118 (1):1-24.
  • Foot, Philippa (1967). The problem of abortion and the doctrine of the double effect. Oxford Review 5:5-15.
  • van Inwagen, Peter (1990). Material Beings. Cornell University Press.
  • Jenkins, C. S. I. (2014). Serious Verbal Disputes: Ontology, Metaontology, and Analyticity. Journal of Philosophy 111 (9-10):454-469.
  • Kripke, Saul A. (2011). Nozick on Knowledge. In Philosophical Troubles. Collected Papers Vol I. Oxford University Press.
  • Nozick, Robert (1981). Philosophical Explanations. Harvard University Press.
  • Thomson, Judith Jarvis (1985). The Trolley Problem. The Yale Law Journal 94 (6):1395-1415.

Thursday, July 26, 2018

Domestique

Domestique

I've been following the Tour de France this year, and yesterday I wrote a poem about it. It's called 'Domestique'. I hope you like it. Here it is.

I am a humble domestique
I ride the Tour de France
My sponsor's name is on my shirt
And also on my pants

And though I will not win today
I must pretend to try
So when the cameras film it all
They're advertising Sky

It's even worse when riding up
An Alp or Pyrenee
Those are the days I'm someone it's
No fun at all to be

I'm not as strong as Froome, of course
But Froomey needs to chill
So he stays in my slipstream, while
I drag him up the hill

If Froomey's feeling peckish, he
Can have my protein gel
And if his bike breaks down, and he
Needs mine, that's his as well

If such a thing were possible
I'd give my very soul
Maintaining Froomey's comfort
Is my one and only goal

I feel I must explain myself
I feel it makes no sense
That Chris gets all the glory, and
It's all at my expense

To really get inside my head
You have to understand
For three short weeks of agony
They pay me ninety grand

Friday, June 15, 2018

Harsanyi vs The World

Harsanyi vs The World

Utilitarians and Egalitarians

One of the things impartial consequentialists get to argue with each other over is how much equality matters. If John has one util (utils are units of utility or wellbeing) and Harriet has four, is that better or worse than if they both have two? Or is it just that the first setup is better than the second for Harriet and the second is better than the first for John, and that's all there is to it? I'm inclined towards this last view, but impartial consequentialists - utilitarians and the like - tend to want to say that setups can be better or worse overall, because they want to go on to say that the overall goodness of the setup an action brings about, or will probably bring about, or something like that, determines whether the action was right or wrong. You can't very well say that your action was right for John and wrong for Harriet, and that's all there is to it. Goodness and badness may seem to be plausibly relative to individuals, but it's less plausible to say that about rightness and wrongness.

So, let's suppose you've decided you want to be an impartial consequentialist. You know how good a setup is for each person, and you want to work out how good it is overall. What are your options?

  • You can add up everyone's utility. That makes you an aggregating utilitarian.
  • You can take the mean of everyone's utility. That makes you an averaging utilitarian.
  • You can say that sometimes a setup with less total/average utility is better because it's more equal. That makes you an egalitarian.
  • You can say that sometimes a setup with less total/average utility is better because it's less equal. That makes you an elitist, or something - it's not a very popular view and we'll ignore it in what follows.

For a fixed population, the average will be higher whenever the aggregate is higher and vice versa, so it doesn't matter which kind of utilitarian you are. The population is of course sometimes affected by our actions, but I won't have anything to say about that today. I'm thinking about the difference between being a utilitarian and being an egalitarian. Mostly I'd thought that both were pretty live options, and while there were considerations that might push you one way or the other, ultimately you would have to decide for yourself what kind of impartial consequentialist you were going to be, assuming you were going to be an impartial consequentialist at all. But a few days ago someone on Twitter (@genericfilter - thanks) drew my attention to a paper by John Harsanyi which threatens to settle the question for us, in favour of the utilitarian. That was disquieting for me, since my sympathies are on the other side. But Harsanyi's got a theorem. You can't argue with a theorem, can you? Well, of course you can argue with a theorem: you just argue with its premises. So I'm going to see how an egalitarian might go about arguing with Harsanyi's theorem.

Harsanyi's Theorem

I'm actually reliably informed that Harsanyi has two theorems that utilitarians use to support their case, but I'm only talking about one. It's his theorem that says, on a few assumptions that are plausible for people who think von Neumann-Morgenstern decision theory is the boss of them, that the only function of individual utilities that can give you the overall utility is a weighted sum (or average; we're taking the population to be fixed here). Since we're already impartial consequentialists, that leaves us with a straight sum (or average). So the egalitarians lose their debate with the utilitarians, unless they reject one or more of the assumptions.

Now, I think I understood the paper OK. Well enough to talk about it here, but not well enough to explain the details of the theorem to you. Hopefully this doesn't matter. Harsanyi's work may have been new to me but it isn't all that obscure, and you may already be familiar with it. In case you're not, I'll also try to tell you what's relevant to the argument when it comes up, but if you want a proper explanation of the details you'll really have to go elsewhere for them.

Harsanyi proves five theorems. The first three seem kind of like scene-setting, with the real action coming in theorems IV and V. Here they are (Harsanyi 1955: 313-4):

  • Theorem IV: W [the social welfare function] is a homogeneous function of the first order of U_1, U_2, ..., U_n [where these are the utilities of each of the n individuals in the setup].
  • Theorem V: W is a weighted sum of the individual utilities, of the form W = ∑ a_i·U_i, where a_i stands for the value that W takes when U_i = 1 and U_j = 0 for all j ≠ i.

Some clarifications before we go on. A homogeneous function of the first order means that if you multiply all the U_i by a constant k then that multiplies W by k as well. The utility functions U_i and the social welfare function W aren't just functions on categorical states of the world; they're functions on probability distributions over such states. He shows in theorem III that you can treat W as a function of the individual utility functions, and you can also treat it as a function on probability distributions over distributions of utility.
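In symbols - this is just a restatement of the two theorems in LaTeX-style notation, nothing beyond what's already stated above:

    W(kU_1, \dots, kU_n) = k \, W(U_1, \dots, U_n) \qquad \text{(Theorem IV: first-order homogeneity)}

    W(U_1, \dots, U_n) = \sum_{i=1}^{n} a_i U_i \qquad \text{(Theorem V: weighted sum)}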

Theorem V basically says that the utilitarians are right. Harsanyi doesn't seem to have assumed anything as radical as that, and yet that's what he's proved. It's as if he's pulled a rabbit out of a hat. So the question is: where does the rabbit go into the hat? As I say, theorems I-III seemed like scene-setting, so let's start by looking at theorem IV.

Diminishing Returns

You might think, as I did, that egalitarians should be pretty unhappy with theorem IV. To get an idea of why, think about diminishing returns. If you give someone with no money a million pounds, that will make a big difference. Give them another million, and it makes less of a difference. The third million makes less difference again. Now, the utilitarians aren't saying that John having two million and Harriet having nothing is just as good as both having one million. You build that into the utility function, by saying money has a diminishing return of utility. But here's the thing: the returns of giving John or Harriet money should diminish faster from the point of view of the egalitarian social planner than they do from the point of view of the person getting the money. That's because boosting an individual's utility becomes less important to the planner as the person gets better off.

You can get a feel for why by thinking about different people's agendas. John's agenda remains John's agenda, however much of it he achieves. He achieves the first item, and moves down the list to the second. That's just what it is for it to be John's agenda. But the planner's agenda is different. If John's agenda is totally unmet, the planner might prioritize the first thing on John's list. But if John is getting on well with his agenda, the planner can ignore him a bit and start prioritizing the agendas of people who aren't doing so well. John's fine now. And however you take into account the diminishing returns of John's agenda for John, they should diminish faster for the planner.

  • For me, this idea had a lot of instinctive pull, and I expect it would for other egalitarians. And this idea is the very thing theorem IV contradicts. Theorem IV says that boosting U_John from 0 to 1 has just as much impact on W as boosting U_John from 10 to 11, when everyone else's utility is 0. You have to do a little more to generalize it to hold when other people's utilities aren't 0, which is what theorem V does, but this consequence of theorem IV is bad enough. The rabbit is already in the hat at this point. Or so it seems.

It turns out that denying theorem IV, or at least accepting the egalitarian principle that conflicts with it, gives a result which is even worse, or at least to me it seems even worse. It's well established that diminishing returns can manifest themselves as risk aversion. (Though it's possible not all risk aversion can be explained by diminishing returns.) A second million is worth less to you than a first, and this explains why you wouldn't risk the first for a 50% chance of winning a second. Maybe you're already risk-averse enough that you wouldn't do that anyway, but the point is that diminishing returns on a thing X generate risk-averse behaviour with respect to X. So if Harriet's utility has diminishing returns for the planner, this means that the planner will be more risk-averse with Harriet's utility than she is, other things being equal. That result is very odd. What it means is that in two situations where the only difference is whether Harriet takes a certain gamble or not, one will be better for Harriet but the other will be better overall, even though they're both equally good for everyone else. This is a weird result indeed, and Harsanyi makes sure his assumptions rule it out. It seems that it should be possible to be an egalitarian without accepting this weird result.
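To make the weird result concrete, here's a minimal numerical sketch. The concave weighting function g is my own invention, chosen purely for illustration - nothing in Harsanyi's paper or in the argument above fixes a particular function.

    # Harriet chooses between a guaranteed 2 utils and a 50-50 gamble on 1 or 4 utils.
    # Hypothetical egalitarian planner: each person's utility u is valued through a
    # concave function g, so extra utility for someone already well off counts for less.

    def g(u):
        return 1 - 1 / (1 + u)

    sure_thing = 2
    gamble = [1, 4]  # 50-50 shot

    # Harriet's own (expected) utility:
    harriet_sure = sure_thing                        # 2.0
    harriet_gamble = sum(gamble) / len(gamble)       # 2.5 -> better for Harriet to gamble

    # The planner's valuation of the Harriet component of the setup:
    planner_sure = g(sure_thing)                     # ~0.667
    planner_gamble = sum(g(u) for u in gamble) / 2   # ~0.650 -> "better overall" not to gamble

    print(harriet_gamble > harriet_sure, planner_gamble > planner_sure)  # True False

    # So two situations that differ only in whether Harriet takes the gamble come apart:
    # the gamble is better for Harriet, but the sure thing is better overall.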

The Repugnant Conclusion

So if we're granting theorem IV, we could look for the rabbit going in somewhere in the proof of theorem V. I couldn't really see anything much to work with there, though, so I thought I'd try a different tack. To get an idea of the new strategy, think about Derek Parfit's (1984) repugnant conclusion argument against aggregate utilitarianism (and against lots of other positions). The idea of that argument is to show that aggregate utilitarians are committed to saying that for any setup with lots of people all of whom have great lives, there is a better setup in which everyone's life is only just worth living. (The second setup manages to be better because there are so many more people in it.)

Now, you could try setting up the objection by just taking some postulates that aggregate utilitarians are committed to and then formally proving the result. That's more or less how Harsanyi proceeds with his argument. But what Parfit does has a different flavour. He takes his opponent on a kind of forced march through a series of setups, and the opponent has to agree that each is no worse than the last, and this gets you from the nice world to the repugnant world. Here's a version of how it can go. Start with a world A with lots of happy people. Change it to world B by adding in some people whose lives aren't good but are still worth living. That can't make the world worse, because it's just as good for the initial people and the new people's lives are still better than nothing. Then take a third world C which has the same total utility as B but it's equally distributed. That shouldn't make the world worse either. C is like A but with more people who are less happy. Then you repeat the process until you get the repugnant world R. (If your opponent says R is just as good as A, you construct R' by making everyone a tiny bit better off. R' is still repugnant, and if it's not repugnant enough for you then just continue the process again from R' until you find a world that is.)

What I'm thinking is that if Harsanyi's proofs are valid, which they are, then you should be able to get a similar forced-march argument that embodies the proof, but where you move from a world where lots of people are happy to a repugnant world where one person has all the utility and everyone else has none. This argument should be more amenable to philosophical analysis, and once we've worked out what's wrong with it, we should be able to return to Harsanyi's proof and say where the rabbit goes into the hat. I'm not Parfit, and I haven't come up with anything as good as the repugnant conclusion argument. But I have come up with something.

Harsanyi Simplified

What we want to show, without loss of generality, is that one util for John and four for Harriet is better than two utils for each. One and four are placeholders here; what's important is that they'd each go for a 50-50 shot between one and four, rather than a guaranteed two. Here's the forced march (there's a small numerical sketch of the first two steps after the list):

  • Independent 50-50 shots between one and four for each is better than a guaranteed two for each.
  • A 50-50 shot between one for John and four for Harriet or four for John and one for Harriet is just as good as independent 50-50 shots between one and four for each.
  • One for John and four for Harriet is as good as four for John and one for Harriet.
  • Since these two outcomes are equally good, and a 50-50 shot between them is better than two for each, both are better than two for each.
  • So one for John and four for Harriet is better than two for each.
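Here's a minimal sketch of the arithmetic behind the first two steps, assuming, as the argument does, that self-interested people evaluate these prospects by their expected utility (the numbers are just the placeholders from the setup):

    from itertools import product

    # Step 1: independent 50-50 shots between 1 and 4 for John and for Harriet.
    independent = [(0.25, (j, h)) for j, h in product([1, 4], repeat=2)]

    # Step 2: one 50-50 shot between (1 for John, 4 for Harriet) and (4 for John, 1 for Harriet).
    anticorrelated = [(0.5, (1, 4)), (0.5, (4, 1))]

    def expected_for(person, lottery):
        # person 0 is John, person 1 is Harriet
        return sum(p * outcome[person] for p, outcome in lottery)

    for person in (0, 1):
        print(expected_for(person, independent), expected_for(person, anticorrelated))  # 2.5 2.5

    # Both lotteries give each person the same 50-50 marginal distribution over 1 and 4,
    # with expected utility 2.5, which beats the guaranteed 2. Steps 3-5 then turn on
    # impartiality and dominance reasoning rather than arithmetic.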

This doesn't track his theorem exactly, I don't think, but as I understand the important moves, if the egalitarian can explain what's wrong with this argument, then they can explain what's wrong with Harsanyi's. And if they can't, they can't. A couple of things are worth pointing out, before we think about what might be wrong with it.

First, in a way it's more general than Harsanyi's. You don't have to assume that people should be expected utility maximizers. The point of the argument is that whatever risk profiles self-interested people should be most willing to accept for themselves, the corresponding distributions are the best ones for everyone overall.

Second, and very relatedly, it more or less shows that the best setup (at least for a fixed population) is the one that someone self-interested would choose behind the veil of ignorance to be in. Harsanyi was a fan of that idea, I'm told, but that idea drops out of the reasoning behind his theorem. You don't need that idea to prove the theorem.

So, where should the egalitarian resist the argument? To me, given what we saw earlier in the section on diminishing returns, it seems they would have to resist the idea that if a 50-50 shot at outcomes X or Y is better than outcome Z, then at least one of X and Y must be better than Z. (And if X and Y are equally good, both must be better than Z.) This idea is very intuitive, at least considered in the abstract, and if it's wrong then it seems a lot of decision theory won't really be applicable to impartial moral decision-making. It's not just expectation-maximizing that will have to go either, because the move in question is pretty much just dominance reasoning. (If Z is better than X and better than Y, then it's better than a 50-50 shot between X and Y.) When you give up dominance reasoning, I'm not sure how much of decision theory is left.

Longtime readers may remember that I once said I preferred versions of consequentialism that assessed actions by their actual consequences rather than by their expected consequences. Harsanyi does set things up in a way that's much more friendly to expected-consequences versions, but I don't hold out much hope for resisting Harsanyi's argument along these lines. One problem I have with expected consequences was that it's hard to find a specific probability distribution to use which doesn't undermine the motivation for using expected consequences in the first place. (People will be unsure of the probabilities and of their own credences, just as they're unsure of the consequences of their actions.) But Harsanyi doesn't exploit that at all. Another problem I had, and the one I talked more about in the post, was that expectation consequentialism gives a norm about the proper attitude towards risk, and I don't see that there is a plausible source for such a norm. It's just up to us to take the risks and put up with the consequences. Harsanyi does exploit expectation-maximizing to get the strong utilitarian result, but you just need simple dominance reasoning to get the weaker result that inequalities are justified in a setup if people would choose that setup behind a veil of ignorance (with an equal chance of occupying each position).

All of this, even the weaker result, seems to me to be huge if true. I've long been a fellow traveller of impartial consequentialism, and the main reason I keep using the term 'impartial consequentialism' rather than 'utilitarianism' is that I've got egalitarian sympathies. Readers of my recent posts on Hipparchia will have already noticed me losing my enthusiasm for the project. This Harsanyi stuff may force me to give up on it altogether.

References

  • Arrhenius, G., Ryberg, J. and Tännsjö, T. 2017: 'The Repugnant Conclusion', The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/spr2017/entries/repugnant-conclusion/
  • Harsanyi, J. 1955: 'Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility', Journal of Political Economy 63(4): 309-321
  • Parfit, D. 1984: Reasons and Persons (Oxford University Press)

Friday, May 25, 2018

What We Demand Of Each Other

In the last post I was thinking about Hipparchia's paradox. Hipparchia was a Cynic philosopher who lived in Athens in the 4th and 3rd centuries BCE, and she posed the puzzle of why it wasn't OK for her to hit a guy called Theodorus, even though it would have been OK for him to hit himself and morality is supposed to be universalizable. I did try discussing the puzzle a bit, but what I mostly wanted was for moral philosophers to take the puzzle more seriously, to work out how their moral theories can accommodate it, and to start calling it 'Hipparchia's paradox'. I'm not really the kind of person I was hoping would think about it more, but I've been thinking about it a bit more anyway.

I mentioned that we can try resolving the paradox by saying that people were allowed to waive consideration of negative consequences to themselves in the moral evaluation of their own actions. And I worried that if these moral waivers are a thing, then there might be other kinds of moral waivers, and our final theory might end up looking unrecognizable as consequentialism. Some people will be fine with that, of course, but I've long been a bit of a fellow-traveller of (impartial, agent-neutral) consequentialism, and consequentialists (especially impartial agent-neutral ones) are probably the people Hipparchia's paradox is most of a puzzle for.

So, I've been thinking some more about these waivers, and what I'm thinking is that the reason they feel kind of scary is that they're an example of voluntarism[1]. Voluntarism is the idea that what's right and wrong is fixed in some special way by someone's will. What counts as the will and what counts as the relevantly special way is a bit up for grabs, and some of the disputes over voluntarism will be verbal. But it's not all verbal, and a certain kind of moral philosopher should be scared of voluntarism. A classic version of voluntarism is divine command theory, which is sometimes called theological voluntarism. Divine command theorists say this sort of thing:

  • When something is wrong, it's because it goes against God's will.
  • When something is wrong, it's because God has decreed that it's wrong.

Divine command theory isn't all that popular among moral philosophers nowadays, although it had a pretty good run with them in the middle ages, and it's still alive and well in the moral thinking of some religious people. Its detractors often view it as getting things backwards. Things aren't wrong because they go against God's will; God wants us not to do them because they're wrong. Similarly, when God says something's wrong, that's because it is wrong, not the other way round. This problem is called the Euthyphro problem, after the dialogue Plato wrote about it. The Euthyphro problem isn't just about divine command theory though; it applies in some way to all versions of voluntarism. People don't make things wrong by wanting them not to be done; they want them not to be done because they're wrong, or at least they should. The worry is that anyone adopting a version of voluntarism is taking the wrong side in the Euthyphro problem. That sounds like bad news for the waiver response to Hipparchia's paradox.

Nonetheless, I think it might be worth giving voluntarism another look, at least in the form of these waivers. There are two reasons. First, Hipparchia's paradox does provide a direct argument for waivers. Second, there's a big difference between God being the boss of us and us being the boss of us, or even better, the people our actions have an impact on being the boss of us. Arguments against divine voluntarism may well not carry over to this more worldly form of voluntarism. So now here's the next question: if waivers are a thing, which waivers are a thing? In the previous post I made a list of questions about possible waivers, and I'll repeat that list here, with a bit of commentary explaining why I thought they were worth asking.

  • Can I waive consideration of consequences to myself in the moral evaluation of someone else's actions?

The idea there is that if I volunteer to take one for the team, then it isn't wrong for the team to go along with that. Suppose you want to go to a party which will be pretty good, and I want to go to a different party which will be very good, but one of us has to stay home and wait for the plumber to come and fix the toilet. (We'll assume we'd both enjoy staying home equally.) What I'm suggesting is that if I volunteer to stay home, you don't wrong me in going along with this, even though I would probably enjoy my party more than you would enjoy yours. Now, you might disagree with this assessment of the situation. But the point is quite similar to Hipparchia's paradox: just as Theodorus is allowed to hit himself, people are allowed to sacrifice their own interests for others, even if the sacrifice is greater than the benefit. And even if they can't make the sacrifice without the co-operation of the beneficiaries, the beneficiaries don't do anything wrong in co-operating. If the person making the sacrifice says it's OK, then it's OK. (I'm not saying this is right, but this is the thinking behind the question.)
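To put made-up numbers on that (purely for illustration): suppose your party is worth 6 to you, mine is worth 9 to me, and staying home is worth 2 to whichever of us does it. The totals are 9 + 2 = 11 if you stay home and 6 + 2 = 8 if I do, so impartial consequentialism says the plumber duty falls to you. The suggestion is that if I volunteer anyway, the 7 units I give up are waived from the moral evaluation of your going to your party, so you don't wrong me by accepting.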

  • Can I waive consideration of some but not all negative consequences to myself?

I'm not sure I expressed this one as clearly as I could have, but here's what I'm thinking. Maybe it's OK for me to waive consideration of minor things, but not major things. Or maybe it's OK for me to waive consideration of forgone pleasures, but not of positive harms. I won't go into the details of what sorts of things I might not be morally allowed to do to myself, or other people might be wrong to do to me even with my permission, but there's a reasonably venerable tradition of thinking that there is such a distinction to be made. But if there is, you have to wonder what its basis might be.

  • Can I waive consideration of bad things happening to me even if someone else cares about me and so these would also be negative consequences to them?

Nobody is an island, and often if something bad happens to Theodorus, he's not the only person who suffers. If Theodorus hits himself, this might upset his friends, and maybe it's wrong because of that. I think there's a fair bit of pressure from common-sense morality to say that Theodorus hitting himself is nobody's business but his own, and if it bothers his friends then he's entitled to waive that fact from the moral evaluation of his action. There are probably limits to what common-sense morality permits along these lines, and maybe I'm getting common-sense morality wrong. But even if I'm not getting it wrong, I'm not really sure how this dynamic is supposed to work. One possibility is that waiving the harm Theodorus does you by hitting himself is partly constitutive of the very relationship in virtue of which Theodorus hitting himself harms you. While I do think this idea has some superficial appeal, I fear its appeal may be only superficial. But perhaps there's the germ of something workable in there.

  • Can I do this on an action-by-action basis, or at least a person-by-person basis, or do I have to waive it for all people or all actions if I waive it for one?

This is an issue about universalizability and fairness. How arbitrary am I allowed to be in dishing out permissions? One possibility is that we have a lot of latitude about what permissions we can give, but a lot less latitude about what permissions we should give. But I expect we also probably have a fair bit of latitude with the ones we should give, because these permissions are bound up with personal relationships, and we don't have personal relationships with everyone. In particular, waivers in personal relationships might often be part of a mutually beneficial reciprocal arrangement. Being morally in the wrong is bad for you, and personal relationships are difficult, and provided you're both trying hard it might be better not to be morally in the wrong every time you mess up. These waivers probably shouldn't have to be blanket waivers: a certain amount of mutual pre-emptive forgiveness doesn't make it impossible for you to wrong each other.

  • Are there ever situations where someone can waive consideration of a negative consequence to someone other than themselves?

Part of the issue here is the nobody-is-an-island problem I discussed a couple of questions ago. But the issue also arises in the case of children, and other people who have someone else responsible for their welfare to an extent. It may also arise with God. I think it's quite possible that there just aren't any exceptions of this kind. You're allowed to take one for the team, but you're not allowed to have your children take one for the team. But here's an example I've been thinking about a bit. Suppose that you and I are doing a high-stakes pub quiz together, and we win a family trip to Disneyland. A reasonably fair thing to do would be for us to auction half of the trip between us and have the higher bidder pay the lower bidder for the lower bidder's half. But suppose I just tell you to go ahead and enjoy yourself. My family are losing out here as well as me, but somehow it still feels like I've done something nice, rather than robbing my family of the equivalent of half a trip to Disneyland. I think I'd probably end up coming down on the side of saying I'm wrong to give you my half of the trip, although perhaps the matter is complicated by the fact that my family weren't on the team, so it's my prize not theirs. But letting you have the trip does still put my family out. I'm really not sure what I think about this. But I think it's likely people do make this kind of collective sacrifice from time to time, and that they feel like they've done a good thing and not a bad thing.
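For concreteness, here's the auction with made-up numbers (mine, and assuming we each notionally own half the prize). Say the whole trip is worth 800 to your family and 500 to mine, so the halves are worth 400 and 250 respectively. If you win the bidding at, say, 300 for my half, you take the whole trip: your family gets something worth 800 to them for a payment of 300, and mine gets 300 in place of a half-share worth about 250 to us, so both sides do at least as well as their notional half of the prize. If I simply hand you my half instead, my family forgoes that 250-to-300 of value, which is why it can look like I'm giving away the equivalent of half a trip on their behalf.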

I think that a moral theory that incorporated these kinds of waivers in a big way might have some mileage in it. There are plenty of worries about it, of course. I'll talk about two.

First, how freely does someone have to be giving these permissions? People make decisions with imperfect information and imperfect rationality, and they also make them under conditions of oppression. It's a common criticism of libertarian capitalism that letting people make whatever contracts they want will lead to a lot of inequality of outcome resulting from unequal bargaining positions. Most countries don't want the economically disempowered bargaining away their kidneys, and maybe we don't want people bargaining away the fact that harming them is wrong. Some libertarian rhetoric makes it sound as if contracts really do have this wrongness-nullifying effect; perhaps libertarians stop short of saying that outright, but if they do say it, I'm not optimistic about their being right. You might be able to imagine idealized situations where the waivers look plausible, but the reality of it might look pretty hideous in some cases. And when you're doing ethics, hideousness detracts from plausibility.

My second worry is that constructing a theory of moral waivers might be joining what I think of as the excuses industry. Impartial consequentialism is notoriously demanding, especially in our interconnected world. But I don't think that should be surprising really: we don't expect it to be easy doing the right thing all the time. A long time ago I wrote about how the intuitions against the supposedly counterintuitive results of impartial consequentialism seemed to me to appeal to either selfishness, squeamishness, or a bit of both. I still feel the pull of that line of thought, and although I'm not really an impartial consequentialist myself, I am as I say a fellow traveller. Some people try to construct theories that don't have these demanding results, but I don't really want to be in the business of constructing theories that are basically elaborate excuses that allow us to live high while other people die. I hope that's not all I'd be doing, and I don't think it's all that other opponents of impartial consequentialism are doing, but I do think it's a trap you have to be careful not to fall into.

With those worries out in the open, I'll sketch the basic outline of the theory I've got in mind. You start with a background of some kind of impartial consequentialism, and then overlay the waivers. Morality might legitimate us making some very heavy demands on each other, but we don't have to actually make these demands. I guess the way it works is these waivers will create a category of supererogatory actions - actions which are good but not obligatory - which impartial consequentialism sometimes struggles to accommodate. If someone's waived a harm it's still better not to cause the harm, but it's not obligatory. I'm imagining the theory as being most distinctive in its treatment of morality within personal relationships. I mentioned earlier that some reciprocal waiving might be a common or even constitutive feature of some relationships. Perhaps it could be extended to involve relationships between people who don't know each other as well or at all, but who are members of the same community. If I was going to think seriously about that then I'd need to learn more about communitarian ethical theories. I'm really not very familiar with how they work, but from what I've heard they sound pretty relevant.

The post a few days ago closed with this argument against consequentialism:

  • Hipparchia's paradox shows that fully agent-neutral consequentialism is absurd.
  • The only promising arguments for consequentialism are arguments for fully agent-neutral consequentialism.
  • So there are no good arguments for consequentialism.

It's not great, really, and I said so at the time. But let's think about how all this waivers stuff started with Hipparchia's paradox. You could just look at the paradox and say "waivers are in, impartial consequentialism is out", and then merrily start constructing theories with waivers all over the place. I think that would be a mistake. An alternative, which I don't think would be the same kind of mistake, is to look at other candidates for waivers that are somehow similar to the original case. The best case I've got in mind is when a group of people see themselves as somehow on the same side, and so individual team-members' failures aren't moral failures, even though they could have done better and the other members of the team would have been better off. The team has a sufficient unity of purpose that the members view the team as analogous to an individual. Team members don't press moral charges against team members just as Theodorus doesn't press moral charges against himself.

One last thing about waivers is that you might share a lot of the intuitions about the examples but try to incorporate them within a straight impartial consequentialist theory. Maybe people being able to take advantage of the waivers without feeling guilty about it turns out to have the best consequences overall. There's a long tradition of moral philosophers doing this sort of thing. The main trick is to distinguish what it is for an action to be right or wrong from the information we use to decide whether an action is right or wrong. In John Stuart Mill's Utilitarianism he makes this move several times, and in fellow utilitarian R. M. Hare's book Moral Thinking: Its Levels, Method, and Point, those were the sorts of levels he was talking about. (At least if I remember them right.)

I used to be quite impressed with this move. Now I'm not so sure. The reason I've long been a fellow-traveller of impartial consequentialism without properly signing up is that I'm also a bit of an error theorist. I read JL Mackie's Ethics at an impressionable age, and while I'm a lot more sceptical about philosophical conclusions than I used to be, it's still got some pull for me. But maybe the reason it's got all this pull is because I'm thinking about moral facts in terms of what Henry Sidgwick (I think) called 'the point of view of the universe'. On that topic I read something once about the conflict between deontological and consequentialist ethics that stayed with me. The point was that deontologists shouldn't argue that sometimes there are actions we should do even though things will be worse if we do them. To concede that is to concede too much to the consequentialist: it makes things too easy for them if that's the position they have to attack. The consequentialist needs to earn the claim that there's some meaningful way in which consequences, or the whole world, can be good or bad simpliciter rather than just good or bad from the point of view of a person, or a particular system of rules, or perhaps something else. It's true: the consequentialist needs this claim and the deontologist doesn't. It's non-trivial and the deontologist should make the consequentialist earn it. And I don't think they have earned it. I can't remember where I read this, unfortunately. I'd thought it was in a Philippa Foot paper, but I re-read the two papers I thought it might be in (Foot 1972 and 1995), and while re-reading them was rewarding I couldn't find it in either. I still think it's probably her. If you can tell me where it's from, please do so in the comments. [UPDATE 24/6/18: She makes the point in Foot 1985: 'Utilitarianism and the virtues'.]

Anyway, maybe error theory wouldn't have the same pull for me if I got away from the idea of the point of view of the universe and instead thought about morality as being fundamentally about human relationships, collective decision-making and what have you. The levels move takes the manifest image of morality and explains it in terms of something more systematic at a lower level. But this systematic lower level is where the point of view of the universe is, and that's what threatens to turn me into an error theorist. The manifest image is where the human relationships and collective decision-making are, and maybe those aren't so weird. The dilemma arises because the lower level stuff about how good the universe is can seem more plausibly normative, while the higher level stuff about relationships is more plausibly real.

Of course, as things stand with the theory I've got in mind there's still an impartial consequentialist background with the waivers laid on top. The impartial consequentialist background is as weird as ever, and you can't have a moral theory that's all waivers. But maybe this could be a transitional step on the way to me having a more accurate conception of what moral facts are facts about, and perhaps eventually losing interest in moral error theory altogether. That might be nice.

Notes

[1] I'm a little unsure about the terminology here. It's pretty established to call divine command theory 'theological voluntarism', and I'm fairly sure I've seen 'voluntarism' used more generally to include non-theological versions like the waiver theory I'm talking about here. But 'voluntarism' also seems to be used to refer to theories according to which the moral properties of an action depend on the will with which the action was performed. (This idea is important in Kant's ethics.) The two ideas could overlap, but it's not obvious that they have to. So if you've got strong views about what 'voluntarism' means and think I'm using it wrong, then I apologize. And when you're discussing this blogpost with your friends, you should be careful how you use the word. But I don't know another word for the thing I'm talking about, and I think I've heard people calling it 'voluntarism', so that's the call I've made.

References

  • Foot, P. 1972: 'Morality as a system of hypothetical imperatives', Philosophical Review 81 (3):305-316
  • Foot, P. 1985: 'Utilitarianism and the virtues', Mind 94 (374):196-209
  • Foot, P. 1995: 'Does moral subjectivism rest on a mistake?', Oxford Journal of Legal Studies 15 (1):1-14
  • Hare, R. M. 1981: Moral Thinking: Its Levels, Method, and Point (Oxford University Press)
  • Mackie, J. L. 1977: Ethics: Inventing Right and Wrong (Penguin Books)
  • Mill, J. S. 1863/2004: Utilitarianism, Project Gutenberg ebook #11224, http://www.gutenberg.org/files/11224/11224-h/11224-h.htm
  • Plato c.399-5 BCE/2008: Euthyphro, trans. Benjamin Jowett, Project Gutenberg ebook #1642, http://www.gutenberg.org/files/1642/1642-h/1642-h.htm

Monday, May 21, 2018

Hipparchia's Paradox

The most famous cynic philosopher was Diogenes of Sinope, who lived in an old wine jar and told Alexander the Great to get out of his light. But he wasn’t the only cynic; there was a whole bunch of them. The second or third most famous cynic was Hipparchia. (The third or second was Crates, Hipparchia’s husband.) Hipparchia doesn’t seem to have written much if anything, as tended to be the way with the cynics, but history has recorded at least one of her arguments, via an anecdote about an exchange she had with some jackass called Theodorus at a party one time. Here’s how Diogenes Laertius (not to be confused with Diogenes the jar-dweller) tells it:
Theodorus, the notorious atheist, was also present [at Lysimachus’s party], and she posed the following sophism to him. ‘Anything Theodorus is allowed, Hipparchia should be allowed to do also. Now if Theodorus hits himself he commits no crime. Neither does Hipparchia do wrong, then, in hitting Theodorus.’ At a loss to refute the argument, Theodorus tried separating her from the source of her brashness, the Cynic double cloak. Hipparchia, however, showed no signs of a woman’s alarm or timidity. Later he quoted at her lines from The Bacchae of Euripides: ‘Is this she who abandoned the web and women’s work?’ ‘Yes,’ Hipparchia promptly came back, ‘it is I. But don’t suppose for a moment that I regret the time I spend improving my mind instead of squatting by a loom.’ [Lives of the Ancient Philosophers 6: 96-8; pp45-6 in Dobbin]
I’ve quoted the context as well as just the argument, the alternative being to quote it out of context. I think it’s pretty clear that Hipparchia is the winner of this story, although it’s possible the reality of the situation was pretty unpleasant for everyone concerned. But having acknowledged the context, I’d like to think a bit about the argument in isolation. Here’s the argument laid out neatly:
  • Anything Theodorus is allowed, Hipparchia should be allowed to do also.
  • If Theodorus hits himself he commits no crime.
  • So neither does Hipparchia do wrong in hitting Theodorus.
The first premise is about universalizability: morality is supposed to apply equally to everyone. It’s a bit less clear what the theoretical basis of the second premise is. It seems like a part of most people’s common sense morality that if someone wants to hit themselves then that’s their own business, and while it might be inadvisable, it isn’t immoral. Common sense morality changes from place to place, but I guess this is part of it that my society has in common with Hipparchia’s. You could explain the truth of the second premise in various ways, some of which will mean qualifying or restricting it, and I think that how exactly we explain it will affect how the paradox gets resolved. The conclusion is meant to be absurd, showing that something is wrong with either the premises or the inference.
I think the most obvious way to try to resolve the paradox is to interpret the permission in the second premise as being explained by a general permission for people to hit themselves, rather than a general permission to hit Theodorus. The action that Theodorus is allowed to do is hitting oneself, not hitting Theodorus. Hipparchia is allowed to do the action hitting oneself too, so universalizability is saved.
There’s a problem with this, though: Theodorus is also allowed to do hitting Theodorus. He’d better be, because if an action is immoral under some description, then it’s immoral. This means there is something he’s allowed to do and Hipparchia isn’t, and so universalizability isn’t saved. Universalizability isn’t the idea that some of morality applies equally to everyone; it’s the idea that all of morality applies equally to everyone. Now, I don’t mean to be disingenuous. I’m not saying that Hipparchia’s paradox shows that universalizability is bunk; I’m just saying there’s more work to do. I don’t think there can be much doubt that it somehow matters that the description of the action as hitting oneself applies to Theodorus’s action and not Hipparchia’s. It just doesn’t resolve the paradox completely, and it’s perhaps more of a restatement of the paradox than anything else. Sometimes a restatement of a paradox is more or less all you need, but in this case I don’t think the restatement is enough.
Here’s another line of attack. Maybe on any given occasion it really is only OK for Theodorus to hit Theodorus if it’s OK for Hipparchia to hit Theodorus. The difference is that occasions when he hits himself will be those rare occasions when he wants to be hit, whereas occasions when she hits him are likely to be occasions when he doesn’t want to be hit. (And also he won’t hit himself harder than he wants to be hit.) This kind of reasoning is behind some anti-paternalist thinking in political philosophy. The classic anti-paternalist work is On Liberty, which was published under John Stuart Mill’s name but was probably coauthored with Harriet Taylor Mill, if you take its dedication literally. (It’s possible the Mills were the greatest philosophical power couple since Hipparchia and Crates. I can’t think of a greater one in the roughly 2150 years between them, although perhaps you can, and perhaps there’s an obvious one I’m missing. [UPDATE: A friend pointed out I forgot Abelard and Heloise.]) They argued that the state shouldn’t be interfering with you if you’re not doing anyone else any harm. Here they are:
The object of this Essay is to assert one very simple principle, as entitled to govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be physical force in the form of legal penalties, or the moral coercion of public opinion. That principle is, that the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. [On Liberty: p17]
People disagree over how far you can reconcile this with the consequentialism you find in Utilitarianism, but if you’re trying to reconcile them it usually goes roughly as follows. People will do things that have good consequences for themselves, so if their actions don’t have bad consequences for anyone else then they don’t have bad consequences for anyone. Given consequentialism, that means the actions aren’t bad. That means the state shouldn’t be interfering with them. It’s a bit of a Swiss cheese of an argument, and I think it remains so even if you’re properly doing justice to it, but I also think they were on to something important.
A classic example of paternalism is seatbelt laws. Idealizing a bit, the set-up is this: by not wearing a seatbelt you’re not putting anyone at risk but yourself. But by having laws demanding people wear seatbelts, you can save lives. Let’s consider a couple of things a libertarian might have to say about this:
  • “If I value my life so much and my convenience so little that the small chance that wearing a seatbelt will save my life is worth the inconvenience of wearing one, then I will wear a seatbelt.”
  • “The only person who stands to get hurt here is me, and I’m fine with it. Mind your own business.”
The first is a simple consequentialist argument: we don’t have to worry about people not wearing seatbelts in situations where the expected consequences are negative. (It also takes the relative value of someone’s life and convenience to be the relative value they themselves assign to them, but maybe that’s not so silly at least in the case of most adults.) The second libertarian response is harder to categorize. It can still be made out as consequentialist in a way, but it says that people are allowed to waive consideration of negative consequences to themselves. The first objection, where it applies, flows straightforwardly from a simple consequentialism that says the right thing to do is the thing with the best consequences. The second applies more generally, but it says that sometimes it’s OK to do the thing that doesn’t have the best consequences. If we’re allowing people to waive consideration of consequences to themselves in the moral evaluation of their own actions, this raises questions about what other kinds of waivers are allowed:
  • Can I waive consideration of consequences to myself in the moral evaluation of someone else’s actions?
  • Can I do this on an action-by-action basis, or at least a person-by-person basis, or do I have to waive it for all people or all actions if I waive it for one?
  • Can I waive consideration of some but not all negative consequences to myself?
  • Can I waive consideration of bad things happening to me even if someone else cares about me and so these would also be negative consequences to them?
  • Are there ever situations where someone can waive consideration of a negative consequence to someone other than themselves?
None of these seem to me like they have obvious answers, with the possible exception of the last one, even if we grant that people can waive consideration of harm to themselves in the moral evaluation of their own actions. I expect some readers will think some of the answers are fairly obvious (and that the last one is obviously obvious), or will at least have views on some of the questions, perhaps based on the literatures which presumably exist on each of them. To be clear, I’m not saying that a consequentialism with a self-sacrifice caveat can’t be made coherent. You could say that an action is permissible iff it either maximizes expected utility or has an expected utility for other people at least as high as the expected utility for other people of some permissible action. That seems to get the right results. The problem I have is that if waivers are a thing, then there are other waivers we might want to include in our theory as well, and after a while our theory might end up not looking much like consequentialism at all.
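Spelled out in my own notation, for an agent with available actions A: let EU(a) be the expected utility of a overall and EU_others(a) its expected utility for people other than the agent. The proposal is then that

a is permissible iff EU(a) = max over b in A of EU(b), or EU_others(a) ≥ EU_others(b) for some permissible b.

Read non-circularly (start with the maximizers and close under the second clause), and since ≥ is transitive, this collapses to: a is permissible iff it maximizes EU or is at least as good for other people as some EU-maximizing action. In the idealized case where Theodorus hitting himself affects nobody but Theodorus, it leaves EU_others unchanged, so it comes out permissible, which is the result the caveat was meant to deliver.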
One way to avoid these questions is to deny that people can waive consideration of themselves in the first place. But then Hipparchia’s paradox comes back, at least a little. The problem with this simple consequentialist response to the paradox is that people don’t always do what’s best for them. Unless we supplement the response somehow, it will mean that whenever Theodorus hits himself and it isn’t what’s best for him, he is doing something wrong after all. (At least when he had enough information to work out that it probably wouldn’t be best for him.) Is this what we want to say?
I can sort of see how some people might want to bite this bullet. If you’re an agent-neutral consequentialist, then you think that the only information relevant to whether an action is wrong or not is how good its consequences are. Who did the action isn’t relevant. So this kind of consequentialist should say that Theodorus hitting himself really is immoral whenever it’s inadvisable. If someone gets on their high horse with you about how you’re not doing what’s best for yourself, they actually do have the moral high ground. Perhaps this is right. But it’s weird.
I don’t really feel like I’ve got very far with this. But my main aim was to present the argument as something worth thinking about, because I do think it’s worth thinking about. I’ll close by presenting another argument, which is also a Swiss cheese of an argument, but which I’m also worried might be on to something.
  • Hipparchia’s paradox shows that fully agent-neutral consequentialism is absurd.
  • The only promising arguments for consequentialism are arguments for fully agent-neutral consequentialism.
  • So there are no good arguments for consequentialism.
References
  • Dobbin, R. 2012: Anecdotes of the Cynics, selected and translated by Robert Dobbin. Penguin Random House.
  • Mill, J. S. 1859/2011: On Liberty, Project Gutenberg ebook #34901, http://www.gutenberg.org/files/34901/34901-h/34901-h.htm
  • Mill, J. S. 1863/2004: Utilitarianism, Project Gutenberg ebook #11224, http://www.gutenberg.org/files/11224/11224-h/11224-h.htm