Friday, October 12, 2018

Gambling With The Metaphysics Oracle

A Tax On Bullshit

There's a lot of bullshit around. Wherever you look, people are confidently making predictions, often while being paid to do so, and by the time we've been able to test these predictions the people who made them are long gone, sunning themselves on a beach somewhere, spending our money and laughing at us. It's a sorry state of affairs. What can we do about it?

One idea, and it's a good one, is to get people to put their money where their mouths are. Offer people bets on their predictions. If they really believe what they're saying, they shouldn't mind having a little bet on it. If they don't believe it and bet anyway, then at least their bullshit is costing them something. People sometimes call betting on predictions a "tax on bullshit". The person I've heard talking most enthusiastically about this idea is Julia Galef, who I suppose is a pillar of what people refer to as the Rationality Community. Apparently she does it in her everyday life. She's always sounded to me as if she has a fun life, but I think I'm probably too much of an epistemic pessimist to fit in well with the Rationality Community myself.

Regular readers may recall that I sometimes worry about bullshit in philosophy. A lot of the claims philosophers make aren't really very testable at all, and so you can keep up your bullshit for thousands of years without ever being found out. Of course, if something isn't testable then it's not practical to bet on it. But lately I've been wondering how philosophers, particularly metaphysicians, would react if we somehow could offer them bets on their claims. Peter van Inwagen (1990), for example, doesn't think tables exist. When we think we're in the presence of a table, he thinks we're really just in the presence of some simples arranged tablewise. But if we could go to an Oracle to settle the question, would he put his simples arranged moneywise where his simples arranged mouthwise are?

Taking The Bet

The simplest response is for the philosopher to just take the bet, and offer us very favourable odds corresponding to how very sure they are that they're right. Maybe the methods we use for answering metaphysical questions aren't so different in principle from the methods we use for answering any other kind of question, and if we've got what we take to be a good argument then we should be confident in its conclusion. I think that plenty of metaphysicians would have no problem at all taking these bets. They mean what they say quite literally and they are confident that their answers are right.
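
To make the "very favourable odds" idea concrete, here's a minimal sketch of the arithmetic involved. It's my own illustration rather than anything from the sources discussed here, and the function names are hypothetical.

```python
# A minimal illustrative sketch (not from any source cited in this post)
# of how a credence translates into the betting odds one can offer.

def fair_odds(credence: float) -> float:
    """Odds-against ratio at which a bettor with this credence breaks even.

    Someone who is 95% sure can offer 19:1 odds: they stake 19 units
    against your 1 and still expect to come out even.
    """
    if not 0.0 < credence < 1.0:
        raise ValueError("credence must be strictly between 0 and 1")
    return credence / (1.0 - credence)


def expected_value(credence: float, my_stake: float, your_stake: float) -> float:
    """Expected winnings for me: I win your_stake with probability
    `credence` and lose my_stake with probability 1 - `credence`."""
    return credence * your_stake - (1.0 - credence) * my_stake


# A metaphysician who is 99% sure there are no tables can offer 99:1 odds
# and still break even in expectation; anything shorter is a bargain.
print(fair_odds(0.99))                  # ≈ 99.0
print(expected_value(0.99, 99.0, 1.0))  # ≈ 0.0 (break-even at fair odds)
```

The "tax" shows up in the second function: if the credence you really have is lower than the one your offered odds imply, your expected value goes negative, and the bullshit costs you in expectation.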

Declining The Bet

A second response is not to take the bet, on the grounds that you don't actually believe the metaphysical positions you've taken. There are at least two ways this could work, one obvious and disreputable and the other less obvious and more respectable. The obvious one is that you're not actually committed to these positions the way you said you were. You were bullshitting, perhaps without properly realizing it, and now you've been found out. The more respectable one is that you are committed to your metaphysical positions, but the mode of commitment you take to be appropriate to metaphysical positions is not belief. It's something else. Helen Beebee (2018) argues for something along these lines, building on Bas van Fraassen's work on the analogous question in the philosophy of science. For Beebee it's largely a response to the phenomenon of expert disagreement in philosophy and concern about the reliability of philosophical methods, while for van Fraassen I understand it's more about the underdetermination of scientific theories by evidence[1]. For Beebee and van Fraassen, this kind of commitment is less about believing the untestable parts of the theory and more about committing oneself to participating in a particular research programme.

Rejecting The Setup

A third response is to reject the bet on the grounds that you reject the authority of the Oracle. How can you reject the authority of an Oracle? The basic idea is that we can't imagine anything the Oracle could say or do to convince us that our position is wrong, but you have to be careful with this sort of dialectical move. You don't want to be the sort of person who responds to the trolley problem[2] (Foot 1967) by saying they'd dive off the trolley and push the workers out of the way, or some other such silliness. This kind of move usually just serves to derail the conversation and prevent us from engaging with the point the thought experiment was trying to make. In the trolley problem you just stipulate that the situation is simple and the outcomes given each action are certain.[3] In the Oracle thought experiment you similarly stipulate that the Oracle is reliable (or infallible) and trusted by all parties. Nonetheless, I think that sometimes it does tell us something useful if we push back on the setup.

The cleaned-up, certain-outcomes version of the trolley problem isn't very realistic, but it's still something we can imagine. With at least some metaphysical questions, however, the Oracle thought experiment might give rise to what philosophers call imaginative resistance. This is what happens when what you're being asked to imagine somehow doesn't make sense to you, to the point that you struggle to imagine it. It can happen in various ways, including when a story is blatantly inconsistent, or when the moral facts stipulated in a story conflict with the moral judgements we're inclined to make ourselves given the non-moral facts. I want to suggest that this imaginative resistance is an indication that even if we take for granted that the Oracle is reliable and that we trust it, this might not be enough. We might still disagree with it, for reasons other than our not trusting it.

I can think of a couple of kinds of case where this situation might arise. Both embody a kind of Carnapian attitude towards philosophical questions. First, suppose we're asking the Oracle about whether there are tables, and it says that there are. Van Inwagen could respond in a couple of ways:

  • "Fair enough; that's a weird and oddly cluttered world, and there was no way for me to find out that there were tables, but I guess you're the Oracle here."
  • "Look, if you want to describe the world as having tables in it, that's up to you. I'm going to keep describing it as not having tables. We're both right by the lights of our own description schemes, and choosing between the schemes is a practical matter about which you're not the boss of me."4

Second, suppose that we're asking the Oracle what the correct analysis of knowledge is, and it says the correct one is Robert Nozick's (1981) counterfactual truth-tracking analysis, on which (roughly) you know that p just in case p is true, you believe p, you wouldn't believe p if it were false, and you would believe p if it were true. We point out all the bizarre results this commits us to as outlined by Saul Kripke (2011), and the Oracle just shrugs, says that's what the word "know" means, and presents a bunch of linguistic usage data to back up its claim. Again, two responses:

  • "Fair enough; the word 'know' isn't as useful as we thought it was, and we can be forgiven for thinking Nozick was wrong, but it means what it means and we must accept that."
  • "Look, if that's what the word means, then the word isn't fit for purpose. We need a concept of knowledge we can use for talking about important issues about humans' access to information about the world, and the concept embodied by this linguistic usage data simply can't be used for that."

These two cases are quite similar. The difference is that in the tables case the philosopher says they weren't wrong about anything: the Oracle and the philosopher have just made different choices, and they are the authorities over their own choices. In the knowledge case the philosopher is shown to be wrong about who knows what, but they push back by saying this just means we need different concepts. The cases kind of shade into each other a bit, but I think the distinction is there, and that the knowledge and tables cases are on different sides of it. Van Inwagen would not be surprised to be shown that we talk as if there are tables. Kripke would be surprised to be shown that we talk as if Nozick's is the correct analysis of the concept expressed by the word "know". That's a difference.

Now, even if you take one of these Carnapian lines, the Oracle could still push back and say that actually you're wrong about whose concepts are better. It might not want to do that in the knowledge case; it might agree that our word "know" attaches to a concept that isn't very useful. But the point is that the Oracle knows everything there is to know, and so it might know something that would make you change your mind. The thought here is along the lines of what Carrie Jenkins (2014) argues for and calls Quinapianism: the decisions over which concepts to use to describe the world are up to us the way Carnap thinks, but our views about which concepts are best are revisable in the light of new information the way Quine thinks all our cognitive commitments are. But even if the Oracle's omniscience gives it an advantage over us, what we end up with here is still a philosophical discussion of the more familiar kind. The Oracle makes the case for its recommended set of concepts, but it's still up to us which concepts we end up using.

So What?

I've had a bit of fun thinking about this, but does it tell us anything about anything? I think it does. I'm inclined to take philosophical questions at face value, and to have the same commitments with respect to them inside and outside of the philosophy room. If I'm bullshitting, I'm not consciously or deliberately bullshitting. I've got a lot of philosophical commitments, albeit subject to a great deal of uncertainty, and I'm sincere about them. But I think my responses to this thought experiment vary a lot depending on which philosophical question we're talking about. Sometimes I think I'd take the bet. Sometimes the mere prospect of a bet makes me feel more uncertain. (I guess in these cases either I'm being called on my bullshit or the feeling of added uncertainty is itself unjustified. Perhaps it's rooted in risk aversion.) And sometimes I come over a bit Carnapian and get one or other kind of imaginative resistance. (I'm not sure I ever feel the way Beebee suggests I should feel, but I don't understand her position terribly well, and in any case I've only run the thought experiment on a few questions.)

This variation is interesting to me. When people talk about the status of philosophical propositions and beliefs, they sometimes make it sound as if we should go for the same response to everything, or perhaps one response for ethics and another for everything else. But I feel very torn between the different responses for a lot of questions, and I lean quite different ways for different questions even within the same branch of philosophy. So when Beebee, Carnap and the rest put forward views on the status of philosophical disputes, it's worth remembering that an answer that works well for one dispute may not work so well for another. One more thing to worry about, I guess.

Notes

[1] There is a connection here, in that van Fraassen is sceptical about the ability of scientific methods to verify the parts of theories that go beyond their empirical adequacy, for example by positing unobservable entities. I'm not a good source for van Fraassen, though: most of this is coming from Beebee's account of his position.

[2] The trolley problem is a thought experiment where someone driving a train (or trolley) is going to hit five workers on the tracks, and the only way to avoid killing them is to steer down a side track and kill another, different worker. The original puzzle is explaining why the driver ought to steer and kill one to save five, even though in some other situations it's better to do nothing and let five people die than to act to save them at the cost of killing someone else. Foot gave the example of framing someone to prevent a riot, and another common one is killing someone to use their organs, which I think is due to Judith Jarvis Thomson (1985). Since Thomson's paper there has been a large research programme involving variants on the trolley problem. Some people think it is silly and dismissively call it 'trolleyology'. My own view is that it's unfairly maligned, although I do quite like the word 'trolleyology'.

[3] Foot does acknowledge that the trolley case isn't especially realistic and that the worker might be able to get out of the way. But she also notes that the relevant aspects of the outcomes often really are more or less certain in the real-life medical situations she's using the trolley problem to illuminate.

[4] As I understand him, van Inwagen himself is probably enough of a realist about metaphysics that he should go for the first answer rather than the second. But the availability of the second is what I'm interested in here, and other philosophers do apply a Carnapian approach to questions about composition, including the question of whether there are tables. The main person for this approach is probably Eli Hirsch.

References

  • Beebee, Helen (2018). The Presidential Address: Philosophical Scepticism and the Aims of Philosophy. Proceedings of the Aristotelian Society 118 (1): 1-24.
  • Foot, Philippa (1967). The Problem of Abortion and the Doctrine of the Double Effect. Oxford Review 5: 5-15.
  • van Inwagen, Peter (1990). Material Beings. Cornell University Press.
  • Jenkins, C. S. I. (2014). Serious Verbal Disputes: Ontology, Metaontology, and Analyticity. Journal of Philosophy 111 (9-10): 454-469.
  • Kripke, Saul A. (2011). Nozick on Knowledge. In Philosophical Troubles: Collected Papers, Volume I. Oxford University Press.
  • Nozick, Robert (1981). Philosophical Explanations. Harvard University Press.
  • Thomson, Judith Jarvis (1985). The Trolley Problem. The Yale Law Journal 94 (6): 1395-1415.