Regular readers will remember that I once posted a fallacious proof of the continuum hypothesis. I’m not a mathematician, but I am a philosopher, and the continuum hypothesis is philosophically as well as mathematically interesting, so occasionally I think about it and read about it and generally try to understand it better. The last time I did this I tried to learn about forcing. It didn’t go well.
Forcing is a mathematical technique for producing models of set theory. Paul Cohen used it to produce models of ZFC (Zermelo-Fraenkel set theory with the axiom of choice) in which the continuum hypothesis is false, which showed that it doesn’t follow from the ZFC axioms. Kurt Gödel had already produced a model of ZFC in which the continuum hypothesis is true, and so this completed a proof that ZFC doesn’t rule the continuum hypothesis in or out. Gödel’s and Cohen’s methods for constructing models of set theory have proved very useful elsewhere in set theory too, and we’re all very grateful.
Anyway, when I was trying to learn about forcing I read a couple of papers with titles like “A beginner’s guide to forcing” and “A cheerful introduction to forcing and the continuum hypothesis”. They sounded like they were about my level. The first of those contained a passage which made me feel a bit better about not having understood forcing before:
All mathematicians are familiar with the concept of an open research problem. I propose the less familiar concept of an open exposition problem. Solving an open exposition problem means explaining a mathematical subject in a way that renders it totally perspicuous. Every step should be motivated and clear; ideally, students should feel that they could have arrived at the results themselves. The proofs should be “natural” in Donald Newman’s sense [Donald J. Newman, Analytic Number Theory, Springer-Verlag, 1998]:
This term . . . is introduced to mean not having any ad hoc constructions or brilliancies. A “natural” proof, then, is one which proves itself, one available to the “common mathematician in the streets.”
I believe that it is an open exposition problem to explain forcing. [Timothy Y. Chow, ‘A Beginner’s Guide to Forcing’, p. 1]
After reading these papers, however, I still didn’t get it. One stumbling block was that when they sketched how it works, they would invoke a countable standard transitive model of set theory. A standard model is one where all the elements are well-founded sets and “∈” is interpreted as set-membership. A transitive model is one where every member of an object in the domain (in reality, not just according to the model) is also in the domain. A countable model is one with countably many objects in its domain. And while they don’t construct one of these models, they assure us that the Löwenheim-Skolem theorem shows that there will be such models.
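To spell those definitions out a bit (this is my own gloss, so treat it as a sketch rather than the official formulation), take a structure (M, E) where E interprets “∈”:

standard: E is the real membership relation restricted to M, i.e. x E y exactly when x ∈ y, and every element of M is a well-founded set;
transitive: whenever x ∈ M and y ∈ x, then y ∈ M (equivalently, every element of M is also a subset of M);
countable: M has countably many elements.

So a countable standard transitive model of ZFC would be a countable transitive set M such that (M, ∈) satisfies the ZFC axioms.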
Now, for me, reading that assurance feels the way that spotting an error feels. Because when I learned about the Löwenheim-Skolem theorem as an undergraduate, they told us it said that every theory in a countable first-order language that has a model has a countable model. It doesn’t say that it has a countable standard transitive model. Standardness and transitivity are meta-language things, and you can’t just stick an axiom into the theory so it’ll only have standard and transitive models. So the Löwenheim-Skolem theorem doesn’t guarantee anything of the kind, forcing doesn’t work and Paul Cohen should posthumously give back his Fields medal.
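For comparison, here are the two claims side by side, as best I can formulate them (my wording, not the papers’):

Löwenheim-Skolem, the version I learned: if T is a theory in a countable first-order language and T has a model, then T has a model with at most countably many elements in its domain.
What the papers seem to need: there is a countable transitive set M such that (M, ∈) is a model of ZFC.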
Of course, this isn’t the correct response to my feeling and the accompanying line of reasoning. I’ve just made a mistake somewhere. I think the most likely thing is that there’s a suppressed step in the argument which is obvious to the authors and to their imagined readers but which isn’t obvious to me. It’s normal for mathematicians to leave out steps they think are obvious, which is why maths papers aren’t all ridiculously long. It’s also normal for these steps not to be obvious to everyone, which is why maths papers are often difficult to understand. But in this case I can’t really even imagine what this obvious step might be. It just looks to me the way a silly mistake would look.
The fact that I’ve made this error is sad and embarrassing for me, but I’ve no serious doubt that it is a fact. This just isn’t the sort of thing the mathematical community gets wrong. Well, there was that thing with John von Neumann and hidden variable interpretations of quantum mechanics, but the people who spotted the problems with that were mathematicians and physicists, not clowns like me. There’s no serious doubt that forcing works. I’d be a fool not to defer to the experts on this, and I do. (I feel like you won’t believe me about this, but I don’t know how to make it any more explicit.)
Normally when we trust mathematicians, we haven’t inspected the case ourselves. I’ve no significant doubt that Fermat’s last theorem is true, even though I haven’t looked at the proof. I do have significant doubt that the ABC conjecture is true, because not enough people have gone through the proof. Andrew Wiles’s original proof of Fermat’s last theorem didn’t work but could be fixed; maybe Shinichi Mochizuki’s proof of the ABC conjecture doesn’t work and can’t be. But when there’s a consensus in the mathematical community that they have found a rigorous proof of something, we can all believe it and we don’t have to look at the proof ourselves.
In the case of me trusting mathematicians on forcing and the continuum hypothesis, however, I have inspected the proof, and the experience was subjectively indistinguishable from discovering that it didn’t work. That puts me in an odd position. It’s complicated, but the situation is a bit like if I believed that P and that if P then Q but rejected Q. The premises are compelling, the inference is compelling, but the conclusion is impossible to accept.
This is a bit like the situation you get in the presence of a paradox, like the liar paradox or Zeno’s paradoxes about motion. I think a lot of the time we just accept that some paradoxes are weird and there’s no particularly satisfying response to them. But with forcing and the continuum hypothesis, I also don’t think there’s any significant chance that there is any paradox here. It feels to me the way paradoxes feel, but it isn’t really a paradox. It’s a subjective paradox, if you like. An objective paradox arises where the premises are indubitable, the conclusion is unacceptable and the inference from one to the other is compelling. A subjective paradox arises for someone when it just seems that way to them.
I used to think that there weren’t any objective paradoxes and they were all ultimately subjective paradoxes, and I still think that might be right. But there’s a big difference between a subjective paradox where you’re just confused and something like Curry’s paradox where nobody knows how to sort it out. The thing with me and forcing is definitely in the subjective category.
So I’m in a weird situation, but you’re probably not in it yourself. I was reminded of this weird situation when I was reading a paper by David Enoch on moral deference recently. He discusses a couple of cases, one from Bernard Williams and one he came up with himself. Here’s the Williams one:
There are, notoriously, no ethical experts. […] Anyone who is tempted to take up the idea of there being a theoretical science of ethics should be discouraged by reflecting on what would be involved in taking seriously the idea that there were experts in it. It would imply, for instance, that a student who had not followed the professor’s reasoning but had understood his moral conclusion might have some reason, on the strength of his professorial authority, to accept it. […] These Platonic implications are presumably not accepted by anyone. [Bernard Williams, Making Sense of Humanity (Cambridge: Cambridge University Press, 1995): 205.]
Enoch’s own example is about his friend Alon who tends to think wars in the Middle East are wrong, whereas Enoch usually starts off thinking they’re right and then later comes round to agree with Alon. When a new war arises, Alon opposes it and Enoch doesn’t, but Enoch knows he’ll probably end up agreeing with Alon. Alon is likely to be right. So why not defer to Alon?
Enoch wants to explain what’s fishy about moral deference, while disagreeing with Williams about whether moral deference is OK. Enoch says sometimes you should defer. He should defer to Alon and the student should under some circumstances defer to the professor. The fishiness, according to Enoch, results from the fact that moral deference indicates a deficiency of moral understanding or something like it. But two wrongs don’t make a right, and it’s better to endorse the right conclusion without understanding the situation properly than to endorse the wrong conclusion because you don’t understand the situation properly. I don’t think that misrepresents Enoch’s position too badly, although you can read the paper if you want to know what he actually says.
One interesting thing Enoch does in the paper is to compare moral deference with deference in other areas: prudential rationality, aesthetic assessment, metaphysics, epistemology, and maths. I think that considering the mathematical case can bring to light a source of fishiness Enoch overlooks. I would say there is nothing fishy about me deferring to the mathematical community about Fermat’s last theorem. I’ve good evidence it’s true, I’ve no idea why it’s true, and there’s nothing particularly odd about this situation. But my deference in the forcing case is fishy. People believe ZFC doesn’t imply the continuum hypothesis on the basis of the proof, I’ve inspected the proof, and from where I’m sitting it transparently sucks. To repeat: I don’t think the proof is wrong. But I am in a strange situation.
Now consider two cases of moral deference, adapted from the ones Enoch talks about. First, consider an 18-year-old taking Ethics 101. They read a bit of Mill and decide utilitarianism sounds pretty good. They can’t really see yet how the morality of an action could be affected by anything besides its consequences, although they haven’t thought about it much. Professor Williams assures them there’s more to it and things like integrity and people’s personal projects matter. I think the student should probably trust Williams on this, at least to an extent, even before they’ve heard his arguments. (Perhaps especially before hearing his arguments.) Williams knows more about this than the student does, and the student doesn’t have enough information to come to their own conclusion. They’ve just read a bit of Mill.
Now consider Enoch and Alon. Enoch looks at the available non-moral information about the war and comes to the conclusion that this one is right. This time is different, he thinks, and he points to non-moral differences grounding the moral difference. Alon looks at the same information and comes to the conclusion that the war’s wrong. Should Enoch defer to Alon? Well, yes. I think so. And so does Enoch. But how does he go about deferring to Alon? He basically has to reject his inference from the indisputable non-moral premises to the conclusion, even though that inference seemed to be compelling. He’s in the grip of a subjective paradox, just like me.
Williams’s student lacks moral understanding, the way children frequently do. But there’s nothing especially epistemically dodgy about what Williams’s student is doing. He doesn’t have all the information to understand why integrity and projects matter but he has it on good authority that they do. Enoch lacks moral understanding too, but he’s in a worse position. He has what seems to him to be a compelling argument that Alon is wrong this time, and another compelling argument that Alon is probably right. Worse, we might as well stipulate that he has examined Alon’s argument and it doesn’t seem to be much good.
I stipulated that Enoch’s inference from the non-moral facts to his conclusion that the war was right seemed to him compelling. That was deliberate. Maybe you don’t think it’s a plausible stipulation, and if you don’t, then I’m on your side. I think that’s the source of the problem. It’s tempting to think that armchair reasoning has to seem compelling if it seems to be any good at all. What would an intermediate case look like? Either an argument follows or it doesn’t, right? Well sure, but we’ve got no special access to whether arguments follow or not. No belief-formation method is completely reliable. We can go wrong at any step and our confidence levels should factor this in. Nothing should seem completely compelling. Of course, you’d expect me to say that, because I’m a sceptic. But it makes it a lot easier to deal with situations like the forcing thing. By the way, if you think you can explain to me why the Löwenheim-Skolem theorem guarantees that ZFC has countable standard transitive models, feel free to have a go in the comments.