
Sunday, November 6, 2016

Mathematical and moral deference

Regular readers will remember that once I posted a fallacious proof of the continuum hypothesis. I’m not a mathematician, but I am a philosopher, and the continuum hypothesis is philosophically interesting as well as mathematically interesting, so occasionally I think about it and read about it and generally try to understand it better. The last time I did this I tried to learn about forcing. It didn’t go well.

Forcing is a mathematical technique for producing models of set theory. Paul Cohen used it to produce models of ZFC (Zermelo-Fraenkel set theory with the axiom of choice) in which the continuum hypothesis is false, which showed that it didn’t follow from the ZFC axioms. Kurt Gödel had already produced a model of ZFC in which the continuum hypothesis was true, and so this completed a proof that ZFC doesn’t rule the continuum hypothesis in or out. Gödel and Cohen’s methods for constructing models of set theory have proved very useful elsewhere in set theory too, and we’re all very grateful.
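
In symbols, this is my gloss of the textbook statement of the two results (CH being the continuum hypothesis):

    Con(ZFC) ⟹ Con(ZFC + CH)    (Gödel, via the constructible universe L)
    Con(ZFC) ⟹ Con(ZFC + ¬CH)    (Cohen, via forcing)

So if ZFC is consistent, it neither proves nor refutes the continuum hypothesis.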

Anyway, when I was trying to learn about forcing I read a couple of papers with titles like “A beginner’s guide to forcing” and “A cheerful introduction to forcing and the continuum hypothesis”. They sounded like they were about my level. The first of those contained a passage which made me feel a bit better about not having understood forcing before:

All mathematicians are familiar with the concept of an open research problem. I propose the less familiar concept of an open exposition problem. Solving an open exposition problem means explaining a mathematical subject in a way that renders it totally perspicuous. Every step should be motivated and clear; ideally, students should feel that they could have arrived at the results themselves. The proofs should be “natural” in Donald Newman’s sense [Donald J. Newman, Analytic Number Theory, Springer-Verlag, 1998]:

This term . . . is introduced to mean not having any ad hoc constructions or brilliancies. A “natural” proof, then, is one which proves itself, one available to the “common mathematician in the streets.”
I believe that it is an open exposition problem to explain forcing. [Timothy Y. Chow, 'A Beginner's Guide to Forcing', p.1]

After reading these papers, however, I still didn’t get it. One stumbling block was that when they sketched how it works, they would invoke a countable standard transitive model of set theory. A standard model is one where all the elements are well-founded sets and “∈” is interpreted as set-membership. A transitive model is one where every member of an object in the domain (in reality, not just according to the model) is also in the domain. A countable model is one with countably many objects in its domain. And while they don’t construct one of these models, they assure us that the Löwenheim-Skolem theorem shows that there will be such models.
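
To put those three conditions formally (my paraphrase of the definitions just given, for a model M):

    Standard: the elements of M are well-founded sets, and “∈” is interpreted as genuine set-membership, i.e. x ∈^M y iff x ∈ y.
    Transitive: for every x ∈ M, x ⊆ M; every member of a member of M is itself in M.
    Countable: the domain of M is countable, |M| = ℵ₀.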

Now, for me, reading this feels the way that spotting an error feels. Because when I learned about the Löwenheim-Skolem theorem as an undergraduate, they told us it said that every theory in a countable first-order language that has a model has a countable model. It doesn’t say that it has a countable standard transitive model. Standardness and transitivity are meta-language things, and you can’t just stick an axiom into the theory so it’ll only have standard and transitive models. So the Löwenheim-Skolem theorem doesn’t guarantee anything of the kind, forcing doesn’t work and Paul Cohen should posthumously give back his Fields medal.
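
For comparison, here is the version of the theorem I was taught, displayed (the downward direction, stated from memory, so treat it accordingly):

    If T is a theory in a countable first-order language and T has a model, then T has a countable model.

Nothing in there about the countable model being standard or transitive.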

Of course, this isn’t the correct response to my feeling and the accompanying line of reasoning. I’ve just made a mistake somewhere. I think the most likely thing is that there’s a suppressed step in the argument which is obvious to the authors and to their imagined readers but which isn’t obvious to me. It’s normal for mathematicians to leave out steps they think are obvious, which is why maths papers aren’t all ridiculously long. It’s also normal for these steps not to be obvious to everyone, which is why maths papers are often difficult to understand. But in this case I can’t really even imagine what this obvious step might be. It just looks to me the way a silly mistake would look.

The fact I’ve made this error is sad and embarrassing for me, but I’ve no serious doubt that it is a fact. This just isn’t the sort of thing the mathematical community gets wrong. Well, there was that thing with John von Neumann and hidden variable interpretations of quantum mechanics, but the people who spotted the problems with that were mathematicians and physicists, not clowns like me. There’s no serious doubt that forcing works. I’d be a fool not to defer to the experts on this, and I do. (I feel like you won’t believe me about this, but I don’t know how to make it any more explicit.)

Normally when we trust mathematicians, we haven’t inspected the case ourselves. I’ve no significant doubt that Fermat’s last theorem is true, even though I haven’t looked at the proof. I do have significant doubt that the ABC conjecture is true, because not enough people have gone through the proof. Andrew Wiles’s original proof of Fermat’s last theorem didn’t work but could be fixed; maybe Shinichi Mochizuki’s proof of the ABC conjecture doesn’t work and can’t be fixed. But when there’s a consensus in the mathematical community that they have found a rigorous proof of something, we can all believe it and we don’t have to look at the proof ourselves.

In the case with me trusting mathematicians on forcing and the continuum hypothesis, however, I have inspected the proof and the experience was subjectively indistinguishable from discovering that it didn’t work. That puts me in an odd position. It’s complicated, but the situation is a bit like if I believed that P and that if P then Q but rejected Q. The premises are compelling, the inference is compelling, but the conclusion is impossible to accept.

This is a bit like the situation you get in the presence of a paradox, like the liar paradox or Zeno’s paradoxes about motion. I think a lot of the time we just accept that some paradoxes are weird and there’s no particularly satisfying response to them. But with forcing and the continuum hypothesis, I also don’t think there’s any significant chance that there is any paradox here. It feels to me the way paradoxes feel, but it isn’t really a paradox. It’s a subjective paradox, if you like. An objective paradox arises where the premises are indubitable, the conclusion is unacceptable and the inference from one to the other is compelling. A subjective paradox arises for someone when it just seems that way to them.

I used to think that there weren’t any objective paradoxes and they were all ultimately subjective paradoxes, and I still think that might be right. But there’s a big difference between a subjective paradox where you’re just confused and something like Curry’s paradox where nobody knows how to sort it out. The thing with me and forcing is definitely in the subjective category.

So I’m in a weird situation, but you’re probably not in it yourself. I was reminded of this weird situation when I was reading a paper by David Enoch on moral deference recently. He discusses a couple of cases, one from Bernard Williams and one he came up with himself. Here’s the Williams one:

There are, notoriously, no ethical experts. […] Anyone who is tempted to take up the idea of there being a theoretical science of ethics should be discouraged by reflecting on what would be involved in taking seriously the idea that there were experts in it. It would imply, for instance, that a student who had not followed the professor’s reasoning but had understood his moral conclusion might have some reason, on the strength of his professorial authority, to accept it. […] These Platonic implications are presumably not accepted by anyone. [Bernard Williams, Making Sense of Humanity (Cambridge: Cambridge University Press, 1995): 205.]

Enoch’s own example is about his friend Alon who tends to think wars in the Middle East are wrong, whereas Enoch usually starts off thinking they’re right and then later comes round to agree with Alon. When a new war arises, Alon opposes it and Enoch doesn’t, but Enoch knows he’ll probably end up agreeing with Alon. Alon is likely to be right. So why not defer to Alon?

Enoch wants to explain what’s fishy about moral deference, while disagreeing with Williams about whether moral deference is OK. Enoch says sometimes you should defer. He should defer to Alon and the student should under some circumstances defer to the professor. The fishiness, according to Enoch, results from the fact that moral deference indicates a deficiency of moral understanding or something like it. But two wrongs don’t make a right, and it’s better to endorse the right conclusion without understanding the situation properly, than to endorse the wrong conclusion because you don’t understand the situation properly. I don’t think that misrepresents Enoch’s position too badly, although you can read the paper if you want to know what he actually says.

One interesting thing Enoch does in the paper is to compare moral deference with deference in other areas: prudential rationality, aesthetic assessment, metaphysics, epistemology, and maths. I think that considering the mathematical case can bring to light a source of fishiness Enoch overlooks. I would say there is nothing fishy about me deferring to the mathematical community about Fermat’s last theorem. I’ve good evidence it’s true, I’ve no idea why it’s true, and there’s nothing particularly odd about this situation. But my deference in the forcing case is fishy. People believe ZFC doesn’t imply the continuum hypothesis on the basis of the proof, I’ve inspected the proof, and from where I’m sitting it transparently sucks. To repeat: I don’t think the proof is wrong. But I am in a strange situation.

Now consider two cases of moral deference, adapted from the ones Enoch talks about. First, consider an 18 year old taking Ethics 101. They read a bit of Mill and decide utilitarianism sounds pretty good. They can’t really see yet how the morality of an action could be affected by anything besides its consequences, although they haven’t thought about it much. Professor Williams assures them there’s more to it and things like integrity and people’s personal projects matter. I think the student should probably trust Williams on this, at least to an extent, even before they’ve heard his arguments. (Perhaps especially before hearing his arguments.) Williams knows more about this than the student does, and the student doesn’t have enough information to come to their own conclusion. They’ve just read a bit of Mill.

Now consider Enoch and Alon. Enoch looks at the available non-moral information about the war and comes to the conclusion that this one is right. This time is different, he thinks, and he points to non-moral differences grounding the moral difference. Alon looks at the same information and comes to the conclusion that the war’s wrong. Should Enoch defer to Alon? Well, yes. I think so. And so does Enoch. But how does he go about deferring to Alon? He basically has to reject his inference from the indisputable non-moral premises to the conclusion, even though that inference seemed to be compelling. He’s in the grip of a subjective paradox, just like me.

Williams’s student lacks moral understanding, the way children frequently do. But there’s nothing especially epistemically dodgy about what Williams’s student is doing. He doesn’t have all the information to understand why integrity and projects matter but he has it on good authority that they do. Enoch lacks moral understanding too, but he’s in a worse position. He has what seems to him to be a compelling argument that Alon is wrong this time, and another compelling argument that Alon is probably right. Worse, we might as well stipulate that he has examined Alon’s argument and it doesn’t seem to be much good.

I stipulated that Enoch’s inference from the non-moral facts to his conclusion that the war was right seemed to him compelling. That was deliberate. Maybe you don’t think it’s a plausible stipulation, and if you don’t, then I’m on your side. I think that’s the source of the problem. It’s tempting to think that armchair reasoning has to seem compelling if it seems to be any good at all. What would an intermediate case look like? Either an argument follows or it doesn’t, right? Well sure, but we’ve got no special access to whether arguments follow or not. No belief-formation method is completely reliable. We can go wrong at any step and our confidence levels should factor this in. Nothing should seem completely compelling. Of course, you’d expect me to say that, because I’m a sceptic. But it makes it a lot easier to deal with situations like the forcing thing. By the way, if you think you can explain to me why the Löwenheim-Skolem theorem guarantees that ZFC has countable standard transitive models, feel free to have a go in the comments.

Tuesday, August 30, 2016

Scepticism as an engineering problem

Regular readers may remember that a while ago I was getting interested in scepticism. It was Peter Adamson’s fault, really. Well, I’m still interested in it. And I think basically that scepticism’s true. Here’s a picture of me defending it:


[Image: “Nobody knows anything”]


That picture was taken over a year ago, and I still haven’t written the paper. But I thought I might write some blog posts about it. This is the first.


A while back I was reading Peter Unger’s book about scepticism, which is called Ignorance: A Case For Scepticism. You pretty much have to read it if you’re defending scepticism. This is what I thought of it at the time, followed by a response from Aidan McGlynn:

[Image: Twitter exchange with Aidan McGlynn about Unger’s Ignorance being frustrating]


And he’s right: it is a frustrating book. I don’t know why Aidan found it frustrating, but my problem with it is that the central argument is a bit silly. Unger argues that a lot of words are absolute terms, which means they only apply to cases at one end of a spectrum. If something could be flatter, it’s not flat. If something could be squarer, it’s not square. And if you could be justified in being more confident of something, or if you haven’t ruled out the possibility of some evidence making that thing seem less likely, then you don’t know it. In a lot of cases we’d ordinarily classify as knowledge, I guess you can’t completely rule out some new evidence coming in that’d make you more or less confident, so if knowledge is one of these absolute terms, then a lot of everyday knowledge ascriptions are false, just like most flatness and squareness ascriptions.

Unger hits you with this argument quite early on, and you think he’s just softening you up. You think he’s presenting a silly argument that shows his conclusion is technically true, before getting on to the real argument, the serious argument, the one that shows there’s something really problematic about our epistemic situation. You want him to argue that we’re never justified in being really confident about things. But if he ever does, I must have missed it. It was frustrating.


Now, you might wonder how else an analytic philosopher is supposed to argue for scepticism. Scepticism is the claim that nobody knows anything, or can know anything, or something like that. So as an analytic philosopher you analyse the concepts involved to get a more rigorous version of the everyday claim, and then argue for the rigorous version, either from premises people already agree with, or at least from premises they’ll agree with after going through your thought experiments. If the conclusion you come to from this method isn’t that big of a deal, that’s the method’s problem. And Unger pretty much agrees with this, which is why he also writes books about how most analytic philosophy isn’t that big of a deal.


I think there’s another way of framing the sceptical problem though, which doesn’t involve analysing the concept of knowledge, and doesn’t rest on any particular analysis of it. The sceptical problem is basically an engineering problem.

Think of all the opinions that right-thinking people have. They have strong opinions about what time it is, what happened a day ago, whether humans are causing climate change, what the capital of Sweden is, roughly how old the universe is, what stars are made of, and so on. Maybe they’re certain of some of this stuff; maybe they’re just pretty confident. Maybe there are also some things that a right-thinking person will think are between 60% and 70% likely, or whatever. In any case, there’s a credence distribution that sensible people will roughly have. Think for a bit about that credence distribution, in all its glorious and wide-ranging detail.
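
In the Bayesian jargon (my gloss, and nothing here hangs on the formalism, or on my made-up numbers): the right-thinking person has something like a credence function Cr assigning each proposition a number in [0, 1], along the lines of

    Cr(Stockholm is the capital of Sweden) ≈ 0.99
    Cr(humans are causing climate change) ≈ 0.95
    0.6 ≤ Cr(the iffy thing) ≤ 0.7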


Now think about your evidence. You can perceive a few things around you. You’ve got some memories. You can do some tentative experiments: trying to move around, or rearranging the objects on a table, or looking behind you or underneath something. You can bring some memories to consciousness, you can ask yourself questions and test your dispositions to respond to them. You can type things into Google and get some results, and you can read the results. You can do some sums or concoct some philosophical arguments. You can get up and go somewhere else and be presented with the evidence over there instead. Maybe you can’t do all these things, and maybe you can do a few other things. The point is that when you really focus on it, your evidence can seem pretty limited. It can sometimes seem consistent with the solipsism of the present moment. In spite of this, you’re supposed to have all these strong opinions about remote things. So here’s the problem: how do you get from just this, to all that?


It’s not an argument, really. It’s a problem. You’ve got some tools and some raw materials, and you’re supposed to be able to do something with them. Your mission, should you choose to become an epistemologist, is to figure out how to get from the evidence in front of you to roughly the credence distribution of a right-thinking adult. It’s a kind of engineering problem. It’s Descartes’s engineering problem. Descartes claimed that he could solve it, that he could start from premises that couldn’t be doubted and end up with something firm and constant in the sciences. You can argue about how far he really believed he’d succeeded, but that is what he claimed. A simplified account of one version of his explanation is that you can argue fairly simply from pure reason and the contents of your own mind to the existence of a benevolent god who wouldn’t deceive you, and so if you do science as carefully as you can then you can be confident of the conclusions. Most people agree with the sceptics that this doesn’t work, but don’t agree with the sceptics that nothing works. I don’t think anything works, and that’s what puts me on the side of the sceptics. At least, I think it puts me on the side of the ancient sceptics, and Hume in some moods and maybe Montaigne and people like that. (I don’t know much about Montaigne and people like that.) I’m not sure Unger says anything much in his book to indicate that he and I are on the same side, though.


You can’t solve the engineering problem by analysing the concept of knowledge differently, because it doesn’t really use the concept of knowledge. It’s couched in some concepts, of course, and maybe you can try to undermine the problem with a careful analysis of those concepts. But what you really want is a story. Something like Descartes’s story, except plausible. You can tell a story about the raw materials, saying that perception has externalist content, or that we have acquaintance with Platonic forms, or that we have innate ideas. You can tell a story about the tools we have for constructing things out of those materials, bringing in externalist theories of justification, or inference to the best explanation, or talking about evolution. Maybe you can look hard at the maths of probability theory and see if there are any surprises there. And probably some of this storytelling can be and has been done under the auspices of conceptual analysis - analysing the concepts of perception, or justification, or whatever. But at the end of it, if it’s not a story about how we get from just this to all that, it’s not an answer to scepticism. That’s the story I don’t think can be plausibly told. And if you can’t tell that story, then you need to tell a different story about how an epistemically conscientious person will behave. I’ll sketch some of that story next time.

Thursday, September 25, 2014

Ripped at the seems



I don’t often have ideas about epistemology. Longtime readers may remember me talking about epistemic virtues of personal concern, and for a while I misguidedly tried to push the idea that Gettier was wrong about knowledge not being justified true belief. Recently though, I was listening to Peter Adamson’s excellent podcast series on the history of philosophy, and when he got to the ancient sceptics I started thinking about epistemology again. Here’s my idea.

In epistemology, there are (of course) lots of debates, arguments and projects about knowledge. For example, sceptical arguments say that nobody knows anything, at least in some domain of inquiry. There’s also a kind of Kantian project that starts instead from the fact that we do have knowledge, and asks how everything else might be set up to make that possible. My idea is to have analogous debates but replacing “S knows that P” or “I know that P” with “it seems to S that P” and “it seems to me that P”. This goes off in a few directions.

First, there’s a sceptical argument that nothing seems to you to be true other than that you’re having the experiences you’re now having. Just as with knowledge, you can argue for this by considering sceptical scenarios, such as that you’re a brain in a vat hooked up to a virtual reality machine (a bit like the Matrix), or that you’re dreaming, or that Descartes’ evil demon is tricking you. Now, maybe you don’t know that these things aren’t happening, but at least they seem not to be happening, right? Well, that isn’t obvious. I mean, what would these things seeming to be happening be like? It’d be just like it is now. So maybe it actually does seem exactly like you’re in the Matrix, apart from the greeny-grey colour filter which we can ignore for present purposes. At least, it seems you’re in the Matrix just as much as it seems you’re in the regular external world. The only thing these seemingly-true scenarios have in common is your experiences, so maybe all we can really say is that you seem to be having the experiences you’re having now.

For fans of modal logic, I guess this is an understanding of seeming according to which something seems to be true iff it’s true in all possible worlds where you’re having the experiences you’re actually having. Maybe you could try replacing “all possible worlds” with “close possible worlds”, producing a kind of externalist understanding of seeming which is a bit like Nozick’s understanding of knowledge. I wonder if it’d be open to similar objections to Nozick’s view, including the absolute pummelling of it that Kripke published in 2011.
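
In symbols, and with the caveat that this is my own formalization rather than anything with a pedigree: writing E(w) for the experiences you’re having at world w, and @ for the actual world, the first proposal is

    Seems(P) ⟺ ∀w (E(w) = E(@) → P is true at w)

and the Nozickish variant just restricts the quantifier:

    Seems(P) ⟺ ∀w (w is close to @ ∧ E(w) = E(@) → P is true at w)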

We don’t have to revise our understanding of seeming that way, though. We can revise it a different way which allows some things other than our experiences to seem to be happening, while avoiding the objections to Nozick's view. We can think about what seeming, and people, and the world would have to be like for there to be non-experiential seemings. This is like the Kantian project I mentioned before. Maybe biology can help. To me it, er, seems that this might be more plausible than coming up with a biologized conception of knowledge that lets us have what we want, because knowledge is a more normatively loaded concept. Being lucky enough to have cognitive biases might make things seem to be true, but could they make you know things? I don't know. Maybe we can construct a plausible naturalized epistemology around seeming which isn't vulnerable to the same normative criticisms as one constructed around knowledge.

If you’re still on board at this point, there are a couple of ways you might want to go. One is to be like Sextus Empiricus (I think), and never make claims about knowledge: you just say how things seem to you. You might think a position like that was unstable, because when you assert the position you’re committing to knowing it, not just to it seeming to be true. I don’t think that’s necessarily right though: we could posit a speech act which is correct when its content seems to you to be true, as assertions are (let’s say) correct when you know their contents. (More boringly, we could just allow that we do know how things seem to us.) We could investigate what kind of logical norms would apply to such a speech act. Maybe you can’t know contrary propositions, but can contrary propositions both seem to be true? If P&Q seems to be true, does that mean P seems to be true as well? There’s room for productive debate about those and similar questions. I think it’s perfectly possible that an epistemology constructed around seeming might be stable in a way that full-on scepticism of a kind which doesn’t allow any assertions or put anything in assertion’s place isn’t stable. The seeming-based epistemology might not be stable either, but I’d like to know, and if it is, and it’s what the ancient sceptics had in mind, that would be very cool.
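
On the all-worlds analysis I floated above (assuming it, which I only did tentatively), those two questions get quick answers:

    1. If P∧Q holds at every world w with E(w) = E(@), then P holds at every such world; so if P∧Q seems true, P seems true.
    2. Suppose P and ¬P both seem true. Since E(@) = E(@), both would have to hold at the actual world, which is impossible; so contrary propositions can’t both seem true.

Whether the norms on the seeming-expressing speech act should track that analysis is a further question.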

Maybe we don’t have to content ourselves with seeming, though. One way epistemologists sometimes frame the problem of scepticism is as the problem of getting knowledge from non-knowledge. Maybe a beefed-up notion of seeming can give us some traction on this. That’s because if something seems to be true, that’s plausibly a defeasible reason for believing that it is true. So from seeming, which isn't knowledge, we get reasons for belief. Can we get all the way to knowledge? Maybe we could with a bit of ingenuity. Mark Schroeder wrote a book called Slaves of the Passions in which he tried to reduce all of our normative concepts to the idea of facts being reasons for people to do actions. I don’t actually think he succeeds, but it’s impressive how far he gets. (It really is a fantastic book; I can't praise it enough.) And if you can construct a bunch of normative concepts out of ‘P is a reason for S to do X’, maybe you can construct some epistemic concepts out of ‘S has a reason to believe P’. That would also be very cool.

Sunday, June 26, 2011

Who can you trust?

Regular readers will know that not so long ago I read and very much enjoyed Steven Pinker’s The Blank Slate, which is about the influence of genes on human psychology. That isn’t my area of expertise, but he seemed to be on the level. His reasoning was mostly reasonable and his evidential claims were backed up by sources which sounded reputable enough. The first half of chapter 18 was about psychological differences between the sexes. He argued that there was strong evidence that some psychological traits are correlated with sex and some evidence that genetic differences between men and women contribute to these correlations. As I say, he seemed to be on the level and know his stuff, so I believed him.

Last week I got some conflicting signals, though. I read Cordelia Fine’s Delusions of Gender, and I enjoyed that a lot too. Fine’s book is about the claims made about psychological differences between men and women, both by scientists and popular writers drawing on the work of scientists. I suppose if I was summarising its claims in bullet points, I’d pick these: 

  • There isn’t as much evidence for psychological differences between the sexes as a lot of people make out.
  • A lot of the research into this sort of thing is done very badly.
  • A lot of the popular writers either misinterpret or wildly extrapolate from what evidence there is, and sometimes just make things up.
  • The hypotheses getting tested tend to be based on stereotypes.
  • There’s no shortage of places to look for non-genetic explanations for the differences that have been found.

Fine seemed to be on the level just as much as Pinker did, and what she said was largely pretty persuasive. Since she went into far more detail about this specific issue than Pinker did, I suppose my credences are currently balanced in her favour, and my trust in the other 20½ chapters of Pinker’s book is correspondingly undermined. Mostly though, I just don’t know what to think. Fine goes into far more detail about the methods of the research she disagrees with than those of the research she uses to support her positive claims, so I’ve no way of knowing that I won’t read another book in a few months’ time which critiques that just as severely. If she had gone into as much detail about it all it would have doubled the length of her book though, so I can kind of see why she didn’t.

I like reading non-fiction, and I particularly like reading science books pitched at about the level Pinker’s and Fine’s books are pitched at. But I sometimes wonder why I bother. I’m trying to learn but if what I end up believing depends on which persuasive-sounding books are entertainingly written and easy to get hold of, then I’m not learning at all; I’m just making myself an unwitting vehicle for the memes I happen to get infected with. That’s no good. If all I’m going to learn from reading non-fiction is that scientists disagree with each other just as much as philosophers do and nobody really knows anything about anything, then maybe I’ll just read PG Wodehouse all the time.

Friday, December 24, 2010

Even Pyrrho believes in composite objects

Here’s an argument against hydrological scepticism. Given the way we’ve been using the word ‘water’ all this time, it’s going to refer to whatever is the wet stuff rivers and lakes are made of. Now we might have got our chemistry wrong, and actually the wet stuff isn’t H2O at all; it’s XYZ. In that case there wouldn’t be any H2O, and considered as counterfactual, that’d be a situation without any water. But considered as actual, if the wet stuff is XYZ then when we’ve been saying ‘water’ we’ve been talking about XYZ, so water is XYZ, so we still know there’s water. None of that’s very controversial. Even if you’re a fan of (what I understand to be the views of) Pyrrho and think that while everyday knowledge is fine our claims to scientific knowledge are pushing it, you can still be sure there’s water.
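
Here’s the skeleton of that argument, as I’d reconstruct it (the labels are mine):

    1. Reference-fixing: ‘water’ refers to whatever actually is the wet stuff in rivers and lakes.
    2. Suppose, considered as actual, that the wet stuff is XYZ rather than H2O.
    3. Then by 1, ‘water’ refers to XYZ, so ‘there is water’ comes out true anyway.

So whatever chemistry turns out to say, we can be sure there’s water; what we can’t be equally sure of is that water is H2O.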

Now think about a mereological nihilist who thinks composition’s not identity and there aren’t any composite objects. There are electrons and quarks, but no atoms, molecules, skyscrapers or galaxies. Here’s an argument that this combination of views doesn’t hang together well. Start from the position of someone who thinks composition isn’t identity, and it happens but contingently. Now consider the situation, which they think is possible, where composition hadn’t happened. That’d be a world with just simples and no composite objects, just like the mereological nihilist thinks. Of course the simples arranged peoplewise wouldn’t have noticed, because some simples arranged skyscraperwise look just like a skyscraper. What would the part-whole-talk have referred to? If there’s no eligible referent that matches use well enough, then it wouldn’t have referred to anything. Is there such a referent? What about plurality inclusion? It looks reasonably eligible to me. The simples arranged chairlegwise would be among the simples arranged chairwise. Plurality inclusion is a partial ordering like parthood’s meant to be. I’m not saying that it's a more eligible referent than the part-whole relation, but in the absence of the latter I think plurality inclusion is a pretty good candidate.
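
To spell out the partial-ordering claim (on what I take to be the standard definition): say xx ≼ yy iff everything that is one of xx is one of yy. Then

    Reflexivity: xx ≼ xx, trivially.
    Transitivity: if xx ≼ yy and yy ≼ zz, then xx ≼ zz.
    Antisymmetry: if xx ≼ yy and yy ≼ xx, then the same things are among each, so xx = yy, given plural extensionality.

which is the same package of formal properties parthood is standardly required to have.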

Plurality inclusion is what composition-as-identity fans think part-whole-talk is actually about. So my suggestion is that if you don’t think composition’s identity, then you should think that if composition didn’t happen then part-whole-talk would be about plurality-inclusion and identity. This is inconsistent with the combination of composition-isn’t-identity and mereological nihilism.

So if the argument works, we can be surer that composition happens than we can be that non-identity composition happens, just as we can be surer that there’s water than that there’s H2O. Also, we shouldn’t think both that water’s H2O and that there’s no water in the rivers. If thinking composition isn’t identity and it doesn’t happen is like that, then we shouldn’t think both those things. But some people do.

Now, this argument makes use of the Lewis-style metasemantics whereby reference is determined by use plus naturalness. It also supposes that our concept of composition is the kind of concept which, like that of water, doesn’t tell you all about what composition’s like. But if you don’t think composition’s identity, then you should think that. And even if you do, maybe you should still think that. Look at Peter van Inwagen’s despair at solving the general composition question. In spite of the theoretical background it relies on though, I think it’s an interesting argument, one I’ve not seen written down before and one I’d like to see mereological nihilists address.