Monday, December 19, 2016

The Electoral College Is A Good Idea

As I write this, the American electoral college is voting for the President. They are probably going to elect Donald Trump. America is unusual among democracies, in that its President is not elected directly. The simplest system, and the one most countries with presidents use, would be to award the Presidency to the person who gets the most votes. Then they could say that the President had a democratic mandate, the choice reflected the will of the majority, and that sort of thing. But they don't do that.


Each state (and DC) appoints some electors to the electoral college, and the electoral college elects the President and the Vice President. A state gets a number of electors equal to its total number of senators and representatives in Congress, and DC gets three, which is the number a state with DC's population would have. States have a number of House members roughly proportional to their population, and two senators each. This means, for the most part, that the smaller a state (or DC, although there are additional complications with DC) is, the more electors it has per person. If no candidate gets a majority of the electoral votes, then the House of Representatives chooses the President from among the top three.
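

To put rough numbers on that skew, here's a minimal sketch in Python. The elector counts are the 2016 figures, but the population numbers are approximate estimates, so treat the exact ratios as illustrative.

```python
# A minimal sketch of the small-state skew. Elector counts are the 2016
# figures; populations are rough 2016 estimates, so the exact ratios
# are illustrative only.

states = {
    # state: (electors, approximate population)
    "California": (55, 39_250_000),
    "Montana": (3, 1_040_000),
    "Wyoming": (3, 585_000),
}

for state, (electors, population) in states.items():
    per_million = electors / population * 1_000_000
    print(f"{state}: {electors} electors, {per_million:.1f} per million people")
```

Wyoming ends up with more than three times as many electors per person as California.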


Forty-eight states, and DC, tell all their electors to vote for whichever President/Vice President ticket got the most votes in the general election. Maine and Nebraska tell one elector to vote the way each congressional district voted, and tell the other two electors to vote for whichever ticket got the most votes statewide.


Some people - perhaps most people - don't think this system is very fair. These people are wrong.


There are four main objections to this system:

1) Small states have an influence in the electoral college out of proportion to their populations.
2) The system allows for the candidate who gets the most votes not to become President. This happened to Al Gore in 2000, and to some other people a long time ago, and it will probably happen again to Hillary Clinton this year. It happened with the Vice Presidency too, to Joe Lieberman in 2000 and probably to Tim Kaine this year, but nobody cares about that.
3) When a state heavily favours one candidate, neither candidate has to do anything to get that state's votes, since one is guaranteed to get no electoral votes anyway and the other is guaranteed to get all of them.
4) If the electors don't vote the way they are told, then the wrong person will become President (or Vice President). Individual electors have broken their pledges before, but it has never changed the result, unless it's happened this year while I'm writing; either way, it bothers people that it could happen.


The first three can be taken together, because the correct response to them is the same. There's a big difference between a state of America and an administrative division of a lot of other countries.


In Britain, where I live, the national government is in charge. We have some devolution in Scotland, Northern Ireland, to a lesser extent Wales, and to a still lesser extent London, and they're a bit more like states. But in England the national government is in charge. It divides the country up into electoral regions, counties, metropolitan boroughs, postal regions and so on, at its convenience. Some of these regions have elections and local governments. But the national government divides it up however it wants. If the government wants to move most of Wirral, where I grew up, from Cheshire to Merseyside, then it does. (People write to local papers saying that Wirral is all still in Cheshire, but they are wrong.)


States are different. The original states of America were independent entities that decided to form a union, with a federal government controlling some things, and the states retaining control over some other things. It's a bit like the European Union. The central governing entities of the EU can't just decide that Wirral is a part of Ireland now, or that Alsace and Lorraine are parts of Germany. They can't tell countries how to appoint their commissioners. And there are some things they can't regulate within the countries, without an amendment to the European Union's rules that could be vetoed by any of the member countries.


To summarize: most countries divide themselves into administrative regions, but the states organize themselves into a country.


States aren't as independent as member countries of the European Union, but that's because they've chosen not to be. There are some rights they haven't chosen to give up, and one of those rights is the right to tell their electors who to vote for when it comes to the Presidency and the Vice Presidency. Now, maybe you think the states should give up this right, or that the states should decide how to tell the electors to vote in a different way. But that's up to them, and all the states have democratically elected governments that can change it if they want.


OK, now we can take the objections in turn.


1) Small states have an influence in the electoral college out of proportion to their populations.


Without loss of generality, California (big population) has made a deal with Montana (small population). The deal says that California has more influence over the Presidency than Montana, but not in proportion to the population. It's the same deal they've made when it comes to representation in Congress. If California doesn't like the deal, they can try to renegotiate the deal. They can't unilaterally secede, but nor can Montana: that's part of the deal too. But if they're renegotiating the deal, secession could be on the table. It's understandable that Montana doesn't want to be a drop in someone else's bucket, any more than Malta wants to be a drop in Italy or Tunisia's bucket, and it's understandable that other states are willing to compromise on this to get the benefits of having them in the Union. And whether or not you agree that it's understandable, that's the deal they made.


2) The system allows for the candidate who gets the most votes not to become President.


Accepting (1) means accepting (2).


3) When a state heavily favours one candidate, neither candidate has to do anything to get that state's votes.


Most of the states, in the infinite wisdom of their state governments, have decided to tell their electors to vote for whoever gets the most votes in that state. This means that the presidential candidates can more or less ignore the interests of people in, for example, California (which always votes Democratic) and Texas (which always votes Republican). They can't afford to ignore them in the primaries, but once they've got the party's nomination, it's all about Florida and Ohio.


It's understandable that swing states would go for winner-take-all. The presidential candidates can be expected to promise extra goodies to voters in swing states, which benefits everyone there. And while there's a chance the whole state won't vote the way an individual voter or legislator likes, this is kind of balanced out by the possibility that the whole state will vote the way they do like. The expected number of electoral votes isn't necessarily the same, because the expected vote split isn't the same as the chances of each side winning. (Suppose the state was guaranteed to vote 48-52: the minority side's expected vote share is 48%, but its chance of winning the state is zero.) Swing states aren't necessarily close states; they're states where either side might win (which in practice tend to be close states). But in genuine swing states, it's a gamble to be winner-take-all but not obviously a bad gamble, and there is the added bonus of candidates paying more attention to the voters when setting out their platforms.
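

Here's a toy illustration of that point. The ten-elector state and the vote-share distribution are both invented for the example; the only claim is about how the two allocation rules come apart.

```python
# A toy model of winner-take-all vs proportional allocation in a
# hypothetical 10-elector state. The vote-share distribution is invented
# for illustration: the Democrats' expected share is 48%, but they only
# win about 25% of the time.

import random

random.seed(0)
ELECTORS = 10
TRIALS = 100_000

# Democratic vote share: uniform between 44% and 52%.
shares = [random.uniform(0.44, 0.52) for _ in range(TRIALS)]

winner_take_all = sum(ELECTORS for s in shares if s > 0.5) / TRIALS
proportional = sum(ELECTORS * s for s in shares) / TRIALS

print(f"Expected Democratic electors, winner-take-all: {winner_take_all:.2f}")  # about 2.5
print(f"Expected Democratic electors, proportional:    {proportional:.2f}")     # about 4.8
```

The two rules give the same expectation only when a side's chance of winning happens to equal its expected vote share.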


It's less obvious that voters in safe states should prefer winner-take-all. They confine their influence over candidate platforms to the primaries, and in return they get a bigger influence in the electoral college. (Or for the minority voters, no influence. A California Republican or a Texas Democrat shouldn't prefer winner-take-all.) Most states have gone for winner-take-all. This is the outcome of a standard democratic process within the states themselves. Call it tyranny of the majority, but the states are representative democracies and not direct democracies, so if the voters care enough about it, there will be votes in it. I guess they care about something else more, and let the majority have their way on this one. In a democracy, the minority has to pick its battles.


4) If the electors don't vote the way they are told, then the wrong person will become President.


For some people, this is the one thing about the Electoral College they like. If someone wins the general election but they would be such a disastrous President that it's worth violating a strong democratic norm to keep them out of the White House, there's a final line of defence. But most people probably think that this norm is important enough that it should be enforced by a law. Some states do enforce it with a law, but the penalties probably aren't a sufficient deterrent given the stakes.


I'm not going to say whether this feature of the Electoral College is a bad thing or a good thing. Unlike the other three objections, it's less clear cut. But it's worth pointing out three things. First, the norm has proven pretty strong. Its being a norm rather than a law hasn't changed the result of any elections. Second, it's not weird to have the top dog chosen by representatives chosen by the people. Britain does the same thing. (Unless you think the Queen is top dog rather than the Prime Minister, but the Queen isn't directly elected either.) Third, if a few electors did violate the norm and cause a crisis, the (democratically elected) House of Representatives would usually be able to step in and avert that crisis. They would only be unable to avert it if someone who got a majority of pledged electors didn't get in the top three of actual electors, or if no-one got a majority of pledged electors and one of the top three pledged didn't get in the top three actual. [BUT SEE UPDATE BELOW.] Given the two party system and the strong norm, neither situation is likely. It's a bit more likely that a close electoral college would go to the House and the House would pick the wrong person. But it's OK if your constitution would lead to a minor constitutional crisis in an unlikely situation and a major constitutional crisis in a very unlikely situation. It's normal. Everyone's constitution is like that. You can't blame the electoral college.


Obviously I don't actually think the electoral college is a good system; nobody does. But it's fun to defend the indefensible, and debate's healthy.

UPDATE 15/02/18: I've realized that the House would also be unable to avert the crisis if someone got a majority of actual electors without having had a majority of pledged electors. I guess this is more likely than the other two situations I thought of: you'd get it when the pledged elector count was very close and a small number of electors defected. I'm not sure how much this possibility should affect the strength of the overall argument.

Sunday, November 6, 2016

Mathematical and moral deference

Regular readers will remember that once I posted a fallacious proof of the continuum hypothesis. I’m not a mathematician, but I am a philosopher, and the continuum hypothesis is philosophically interesting as well as mathematically interesting, so occasionally I think about it and read about it and generally try to understand it better. The last time I did this I tried to learn about forcing. It didn’t go well.

Forcing is a mathematical technique for producing models of set theory. Paul Cohen used it to produce models of ZFC (Zermelo-Fraenkel set theory with the axiom of choice) in which the continuum hypothesis is false, which showed that it didn’t follow from the ZFC axioms. Kurt Gödel had already produced a model of ZFC in which the continuum hypothesis was true, and so this completed a proof that ZFC doesn’t rule the continuum hypothesis in or out. Gödel’s and Cohen’s methods for constructing models of set theory have proved very useful elsewhere in set theory too, and we’re all very grateful.

Anyway, when I was trying to learn about forcing I read a couple of papers with titles like “A beginner’s guide to forcing” and “A cheerful introduction to forcing and the continuum hypothesis”. They sounded like they were about my level. The first of those contained a passage which made me feel a bit better about not having understood forcing before:

All mathematicians are familiar with the concept of an open research problem. I propose the less familiar concept of an open exposition problem. Solving an open exposition problem means explaining a mathematical subject in a way that renders it totally perspicuous. Every step should be motivated and clear; ideally, students should feel that they could have arrived at the results themselves. The proofs should be “natural” in Donald Newman’s sense [Donald J. Newman, Analytic Number Theory, Springer-Verlag, 1998]:

This term . . . is introduced to mean not having any ad hoc constructions or brilliancies. A “natural” proof, then, is one which proves itself, one available to the “common mathematician in the streets.”
I believe that it is an open exposition problem to explain forcing. [Timothy Y. Chow, 'A Beginner's Guide to Forcing', p.1]

After reading these papers, however, I still didn’t get it. One stumbling block was that when they sketched how it works, they would invoke a countable standard transitive model of set theory. A standard model is one where all the elements are well-founded sets and “∈” is interpreted as set-membership. A transitive model is one where every member of an object in the domain (in reality, not just according to the model) is also in the domain. A countable model is one with countably many objects in its domain. And while they don’t construct one of these models, they assure us that the Löwenheim-Skolem theorem shows that there will be such models.
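
To make those definitions concrete, here’s roughly how I’d formalize the transitivity condition and the version of the theorem I learned. This is my paraphrase, not a quotation from either paper.

```latex
% Transitivity is a property of the domain M itself, quantified in the
% metalanguage: every real member of an element of M is also in M.
\[
  M \text{ is transitive} \iff \forall x \in M \;\, \forall y \, (y \in x \rightarrow y \in M)
\]

% The downward Löwenheim-Skolem theorem, as taught to undergraduates:
% for any theory T in a countable first-order language,
\[
  T \text{ has a model} \;\Longrightarrow\; T \text{ has a countable model}
\]
% Nothing on the right-hand side promises a standard or transitive model.
```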

Now, for me, reading this feels the way that spotting an error feels. Because when I learned about the Löwenheim-Skolem theorem as an undergraduate, they told us it said that every theory in a countable first-order language that has a model has a countable model. It doesn’t say that the theory has a countable standard transitive model. Standardness and transitivity are meta-language things, and you can’t just stick an axiom into the theory so it’ll only have standard and transitive models. So the Löwenheim-Skolem theorem doesn’t guarantee anything of the kind, forcing doesn’t work, and Paul Cohen should posthumously give back his Fields Medal.

Of course, this isn’t the correct response to my feeling and the accompanying line of reasoning. I’ve just made a mistake somewhere. I think the most likely thing is that there’s a suppressed step in the argument which is obvious to the authors and to their imagined readers but which isn’t obvious to me. It’s normal for mathematicians to leave out steps they think are obvious, which is why maths papers aren’t all ridiculously long. It’s also normal for these steps not to be obvious to everyone, which is why maths papers are often difficult to understand. But in this case I can’t really even imagine what this obvious step might be. It just looks to me the way a silly mistake would look.

The fact I’ve made this error is sad and embarrassing for me, but I’ve no serious doubt that it is a fact. This just isn’t the sort of thing the mathematical community gets wrong. Well, there was that thing with John von Neumann and hidden variable interpretations of quantum mechanics, but the people who spotted the problems with that were mathematicians and physicists, not clowns like me. There’s no serious doubt that forcing works. I’d be a fool not to defer to the experts on this, and I do. (I feel like you won’t believe me about this, but I don’t know how to make it any more explicit.)

Normally when we trust mathematicians, we haven’t inspected the case ourselves. I’ve no significant doubt that Fermat’s last theorem is true, even though I haven’t looked at the proof. I do have significant doubt that the ABC conjecture is true, because not enough people have gone through the proof. Andrew Wiles’s original proof of Fermat’s last theorem didn’t work but could be fixed, but maybe Shinichi Mochizuki’s proof of the ABC conjecture doesn’t work and can’t be fixed. But when there’s a consensus in the mathematical community that they have found a rigorous proof of something, we can all believe it and we don’t have to look at the proof ourselves.

When it comes to trusting mathematicians on forcing and the continuum hypothesis, however, I have inspected the proof, and the experience was subjectively indistinguishable from discovering that it didn’t work. That puts me in an odd position. It’s complicated, but the situation is a bit like if I believed that P and that if P then Q but rejected Q. The premises are compelling, the inference is compelling, but the conclusion is impossible to accept.

This is a bit like the situation you get in the presence of a paradox, like the liar paradox or Zeno’s paradoxes about motion. I think a lot of the time we just accept that some paradoxes are weird and there’s no particularly satisfying response to them. But with forcing and the continuum hypothesis, I also don’t think there’s any significant chance that there is any paradox here. It feels to me the way paradoxes feel, but it isn’t really a paradox. It’s a subjective paradox, if you like. An objective paradox arises where the premises are indubitable, the conclusion is unacceptable and the inference from one to the other is compelling. A subjective paradox arises for someone when it just seems that way to them.

I used to think that there weren’t any objective paradoxes and they were all ultimately subjective paradoxes, and I still think that might be right. But there’s a big difference between a subjective paradox where you’re just confused and something like Curry’s paradox where nobody knows how to sort it out. The thing with me and forcing is definitely in the subjective category.

So I’m in a weird situation, but you’re probably not in it yourself. I was reminded of this weird situation when I was reading a paper by David Enoch on moral deference recently. He discusses a couple of cases, one from Bernard Williams and one he came up with himself. Here’s the Williams one:

There are, notoriously, no ethical experts. […] Anyone who is tempted to take up the idea of there being a theoretical science of ethics should be discouraged by reflecting on what would be involved in taking seriously the idea that there were experts in it. It would imply, for instance, that a student who had not followed the professor’s reasoning but had understood his moral conclusion might have some reason, on the strength of his professorial authority, to accept it. […] These Platonic implications are presumably not accepted by anyone. [Bernard Williams, Making Sense of Humanity (Cambridge: Cambridge University Press, 1995): 205.]

Enoch’s own example is about his friend Alon who tends to think wars in the Middle East are wrong, whereas Enoch usually starts off thinking they’re right and then later comes round to agree with Alon. When a new war arises, Alon opposes it and Enoch doesn’t, but Enoch knows he’ll probably end up agreeing with Alon. Alon is likely to be right. So why not defer to Alon?

Enoch wants to explain what’s fishy about moral deference, while disagreeing with Williams about whether moral deference is OK. Enoch says sometimes you should defer. He should defer to Alon and the student should under some circumstances defer to the professor. The fishiness, according to Enoch, results from the fact that moral deference indicates a deficiency of moral understanding or something like it. But two wrongs don’t make a right, and it’s better to endorse the right conclusion without understanding the situation properly, than to endorse the wrong conclusion because you don’t understand the situation properly. I don’t think that misrepresents Enoch’s position too badly, although you can read the paper if you want to know what he actually says.

One interesting thing Enoch does in the paper is to compare moral deference with deference in other areas: prudential rationality, aesthetic assessment, metaphysics, epistemology, and maths. I think that considering the mathematical case can bring to light a source of fishiness Enoch overlooks. I would say there is nothing fishy about me deferring to the mathematical community about Fermat’s last theorem. I’ve good evidence it’s true, I’ve no idea why it’s true, and there’s nothing particularly odd about this situation. But my deference in the forcing case is fishy. People believe ZFC doesn’t imply the continuum hypothesis on the basis of the proof, I’ve inspected the proof, and from where I’m sitting it transparently sucks. To repeat: I don’t think the proof is wrong. But I am in a strange situation.

Now consider two cases of moral deference, adapted from the ones Enoch talks about. First, consider an 18 year old taking Ethics 101. They read a bit of Mill and decide utilitarianism sounds pretty good. They can’t really see yet how the morality of an action could be affected by anything besides its consequences, although they haven’t thought about it much. Professor Williams assures them there’s more to it and things like integrity and people’s personal projects matter. I think the student should probably trust Williams on this, at least to an extent, even before they’ve heard his arguments. (Perhaps especially before hearing his arguments.) Williams knows more about this than the student does, and the student doesn’t have enough information to come to their own conclusion. They’ve just read a bit of Mill.

Now consider Enoch and Alon. Enoch looks at the available non-moral information about the war and comes to the conclusion that this one is right. This time is different, he thinks, and he points to non-moral differences grounding the moral difference. Alon looks at the same information and comes to the conclusion that the war’s wrong. Should Enoch defer to Alon? Well, yes. I think so. And so does Enoch. But how does he go about deferring to Alon? He basically has to reject his inference from the indisputable non-moral premises to the conclusion, even though that inference seemed to be compelling. He’s in the grip of a subjective paradox, just like me.

Williams’s student lacks moral understanding, the way children frequently do. But there’s nothing especially epistemically dodgy about what Williams’s student is doing. He doesn’t have all the information to understand why integrity and projects matter but he has it on good authority that they do. Enoch lacks moral understanding too, but he’s in a worse position. He has what seems to him to be a compelling argument that Alon is wrong this time, and another compelling argument that Alon is probably right. Worse, we might as well stipulate that he has examined Alon’s argument and it doesn’t seem to be much good.

I stipulated that Enoch’s inference from the non-moral facts to his conclusion that the war was right seemed to him compelling. That was deliberate. Maybe you don’t think it’s a plausible stipulation, and if you don’t, then I’m on your side. I think that’s the source of the problem. It’s tempting to think that armchair reasoning has to seem compelling if it seems to be any good at all. What would an intermediate case look like? Either an argument follows or it doesn’t, right? Well sure, but we’ve got no special access to whether arguments follow or not. No belief-formation method is completely reliable. We can go wrong at any step and our confidence levels should factor this in. Nothing should seem completely compelling. Of course, you’d expect me to say that, because I’m a sceptic. But it makes it a lot easier to deal with situations like the forcing thing. By the way, if you think you can explain to me why the Löwenheim-Skolem theorem guarantees that ZFC has countable standard transitive models, feel free to have a go in the comments.

Wednesday, October 26, 2016

Let's talk about national insurance

Most people think rich people should pay more tax than poor people. There are at least three reasons for this. First, governments have to get their money from somewhere, and if everyone paid what poor people are able to pay, there might not be enough to run the government. Second, a given tax bill will do more damage to a poor person’s standard of living than to a rich person’s, so you can minimize the damage by having poor people pay less. Third, some people think it’s fair for rich people to pay more.

There are several ways of having rich people pay more tax. We tend to do this by taxing people more if they have higher incomes, although income and wealth aren’t the same thing. Taxing income instead of wealth doesn’t really make sense to me, but maybe we do it because it’s easier for rich people to avoid wealth taxes than income taxes. But there are also different ways of taxing higher incomes more.

You can have a flat tax, so people pay in proportion to their income. 20% of a million pounds is more than 20% of ten thousand pounds. But that’s not what countries normally do. Normally they have what’s called a progressive tax. A progressive income tax is one where people with higher incomes pay a higher proportion of their incomes. The income tax in the UK is a progressive tax. Here are the rates:


UK income tax bands (2016/17):

Band                   Yearly income      Rate
Personal allowance     First £11,000      0%
Basic rate             Next £32,000       20%
Higher rate            Next £107,000      40%
Additional rate        Above £150,000     45%

You don’t pay any tax on the first £11,000 of yearly income, and then you pay 20% of the next £32,000, 40% of the next £107,000, and 45% of anything after that. People with higher incomes are paying higher rates. People making under £11,000 aren’t paying any income tax at all.

A progressive income tax has the “rich people pay more” effect to a greater extent than a flat tax does, and so leftwing people tend to like progressive taxes. Leftwing people these days also often call themselves “progressives”. As far as I know, this is a coincidence.

Now, as well as a flat tax or a progressive tax, there’s also the theoretical option of a regressive tax. You could just reverse the numbers in the table above: people pay 45% of the first £11,000, 40% of the next £32,000, 20% of the next £107,000, and nothing on income after that. Rich people are still paying more, just to a lesser extent. (Well, very rich people are all paying the same if the top rate is 0%. But very poor people are all paying the same, i.e. nothing, under the existing set-up. It still counts as rich people paying more than poor people.)
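
To make the contrast concrete, here’s a minimal sketch of how marginal bands work, using the figures above. The code and the sample incomes are mine, not anything official.

```python
# A minimal sketch of marginal tax bands, using the 2016/17 figures from
# the post. Each band taxes only the slice of income that falls inside it.
# The sample incomes are arbitrary.

PROGRESSIVE = [(11_000, 0.00), (32_000, 0.20), (107_000, 0.40), (float("inf"), 0.45)]
REGRESSIVE = [(11_000, 0.45), (32_000, 0.40), (107_000, 0.20), (float("inf"), 0.00)]

def band_tax(income, bands):
    """Total tax due on `income` under a marginal band schedule."""
    tax, remaining = 0.0, income
    for width, rate in bands:
        taxed = min(remaining, width)
        tax += taxed * rate
        remaining -= taxed
        if remaining <= 0:
            break
    return tax

for income in (10_000, 25_000, 60_000, 1_000_000):
    progressive = band_tax(income, PROGRESSIVE) / income
    regressive = band_tax(income, REGRESSIVE) / income
    print(f"£{income:>9,}: progressive {progressive:5.1%}, regressive {regressive:5.1%}")
```

Under the progressive schedule the effective rate climbs from 0% towards 45% as income rises; under the reversed schedule it falls from 45% towards 0%.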

A regressive tax would probably be pretty unpopular. People would think it wasn’t fair, and to raise enough money to run the government you’d have to collect a lot of money from poor people, with inevitable damage to their quality of life. So it might surprise you to hear that the UK - the very same country that has the rather progressive income tax we talked about earlier - also has a very regressive tax. It’s called National Insurance. It doesn’t have the word “tax” in its name, but it’s still a tax. Here are the rates for employed people:

National Insurance marginal rates for employed people (2016/17):

Weekly earnings      Marginal rate
First £155           0%
£155 to £827         12%
Above £827           2%

Eagle-eyed readers will have noticed that £155 a week (about £8,060 a year) is a lot less than £11,000 a year, so while the poorest workers aren’t paying any income tax, they’re still paying plenty of national insurance. £827 a week (about £43,000 a year) does also put you more or less on the cusp of the basic and higher rates of income tax, which I suppose counts for something. But the fact remains that the UK has a progressive income tax which politicians and journalists talk about a lot, but makes up for it with a regressive tax that they hardly talk about at all. I don’t know if other countries do this, but it certainly seems odd. It’s hard to see what ideology a government could have which would motivate levying both taxes.

Before the 2015 general election, David Cameron said he had raised the personal income tax allowance a lot. “That is three million people taken out of income tax altogether”, he said. And I suppose that technically they were, but they were still paying plenty of tax on their income, and their marginal rate was still 12%. So what Cameron said was a little bit misleading.
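
You can see the point in miniature by converting the weekly thresholds to yearly ones and stacking the two taxes. This is a rough sketch: the 2% national insurance rate above £827 a week is the 2016/17 figure, and everything else comes from the tables above.

```python
# A rough sketch of the combined marginal rate on employment income,
# converting the weekly NI thresholds to yearly figures (52 weeks).
# The 2% upper NI rate is the 2016/17 figure.

def income_tax_rate(income):
    if income < 11_000: return 0.00
    if income < 43_000: return 0.20
    if income < 150_000: return 0.40
    return 0.45

def national_insurance_rate(income):
    if income < 155 * 52: return 0.00   # below £8,060 a year
    if income < 827 * 52: return 0.12   # below £43,004 a year
    return 0.02

for income in (7_000, 10_000, 25_000, 60_000, 200_000):
    combined = income_tax_rate(income) + national_insurance_rate(income)
    print(f"£{income:>7,}: marginal rate {combined:.0%}")
```

Someone on £10,000 has been “taken out of income tax altogether” and still faces a 12% marginal rate.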

I think it’s very likely that we have a progressive income tax and a regressive employment-income tax because the government and the electorate don’t agree on how progressively they want income to be taxed, so the government divides the tax into two and mostly talks about the one that sounds less nakedly plutocratic. Maybe that’s not why they do it, but I’d at least like an explanation.

Tuesday, August 30, 2016

Scepticism as an engineering problem

Regular readers may remember that a while ago I was getting interested in scepticism. It was Peter Adamson’s fault, really. Well, I’m still interested in it. And I think basically that scepticism’s true. Here’s a picture of me defending it:


[Photo: “Nobody knows anything”]


That picture was taken over a year ago, and I still haven’t written the paper. But I thought I might write some blog posts about it. This is the first.


A while back I was reading Peter Unger’s book about scepticism, which is called Ignorance: A Case For Scepticism. You pretty much have to read it if you’re defending scepticism. This is what I thought of it at the time, followed by a response from Aidan McGlynn:

[Screenshot: my tweet about Unger’s Ignorance, with Aidan McGlynn replying that he found it a frustrating book]


And he’s right: it is a frustrating book. I don’t know why Aidan found it frustrating, but my problem with it is that the central argument is a bit silly. He argues that a lot of words are absolute terms, which means they only apply to cases at one end of a spectrum. If something could be flatter, it’s not flat. If something could be squarer, it’s not square. And if you could be justified in being more confident of something, or if you haven’t ruled out the possibility of some evidence making that thing seem less likely, then you don’t know it. In a lot of cases we’d ordinarily classify as knowledge, I guess you can’t completely rule out some new evidence coming in that’d make you more or less confident, so I guess if knowledge is one of these absolute terms, then a lot of everyday knowledge ascriptions are false, just like most flatness and squareness ascriptions. Unger hits you with this argument quite early on, and you think he’s just softening you up. You think he’s presenting a silly argument that shows his conclusion is technically true, before getting on to the real argument, the serious argument, the one that shows there’s something really problematic about our epistemic situation. You want him to argue that we’re never justified in being really confident about things. But if he ever does, I must have missed it. It was frustrating.


Now, you might wonder how else an analytic philosopher is supposed to argue for scepticism. Scepticism is the claim that nobody knows anything, or can know anything, or something like that. So as an analytic philosopher you analyse the concepts involved to get a more rigorous version of the everyday claim, and then argue for the rigorous version, either from premises people already agree with, or at least from premises they’ll agree with after going through your thought experiments. If the conclusion you come to from this method isn’t that big of a deal, that’s the method’s problem. And Unger pretty much agrees with this, which is why he also writes books about how most analytic philosophy isn’t that big of a deal.


I think there’s another way of framing the sceptical problem though, which doesn’t involve analysing the concept of knowledge, and doesn’t rest on any particular analysis of it. The sceptical problem is basically an engineering problem.

Think of all the opinions that right-thinking people have. They have strong opinions about what time it is, what happened a day ago, whether humans are causing climate change, what the capital of Sweden is, roughly how old the universe is, what stars are made of, and so on. Maybe they’re certain of some of this stuff; maybe they’re just pretty confident. Maybe there are also some things that a right-thinking person will think are between 60% and 70% likely, or whatever. In any case, there’s a credence distribution that sensible people will roughly have. Think for a bit about that credence distribution, in all its glorious and wide-ranging detail.


Now think about your evidence. You can perceive a few things around you. You’ve got some memories. You can do some tentative experiments: trying to move around, or rearranging the objects on a table, or looking behind you or underneath something. You can bring some memories to consciousness, you can ask yourself questions and test your dispositions to respond to them. You can type things into Google and get some results, and you can read the results. You can do some sums or concoct some philosophical arguments. You can get up and go somewhere else and be presented with the evidence over there instead. Maybe you can’t do all these things, and maybe you can do a few other things. The point is that when you really focus on it, your evidence can seem pretty limited. It can sometimes seem consistent with the solipsism of the present moment. In spite of this, you’re supposed to have all these strong opinions about remote things. So here’s the problem: how do you get from just this, to all that?


It’s not an argument, really. It’s a problem. You’ve got some tools and some raw materials, and you’re supposed to be able to do something with them. Your mission, should you choose to become an epistemologist, is to figure out how to get from the evidence in front of you to roughly the credence distribution of a right-thinking adult. It’s a kind of engineering problem. It’s Descartes’s engineering problem. Descartes claimed that he could solve it, that he could start from premises that couldn’t be doubted and end up with something firm and constant in the sciences. You can argue about how far he really believed he’d succeeded, but that is what he claimed. A simplified account of one version of his explanation is that you can argue fairly simply from pure reason and the contents of your own mind to the existence of a benevolent god who wouldn’t deceive you, and so if you do science as carefully as you can then you can be confident of the conclusions. Most people agree with the sceptics that this doesn’t work, but don’t agree with the sceptics that nothing works. I don’t think anything works, and that’s what puts me on the side of the sceptics. At least, I think it puts me on the side of the ancient sceptics, and Hume in some moods and maybe Montaigne and people like that. (I don’t know much about Montaigne and people like that.) I’m not sure Unger says anything much in his book to indicate that he and I are on the same side, though.


You can’t solve the engineering problem by analysing the concept of knowledge differently, because it doesn’t really use the concept of knowledge. It’s couched in some concepts, of course, and maybe you can try to undermine the problem with a careful analysis of those concepts. But what you really want is a story. Something like Descartes’s story, except plausible. You can tell a story about the raw materials, saying that perception has externalist content, or that we have acquaintance with Platonic forms, or that we have innate ideas. You can tell a story about the tools we have for constructing things out of those materials, bringing in externalist theories of justification, or inference to the best explanation, or talking about evolution. Maybe you can look hard at the maths of probability theory and see if there are any surprises there. And probably some of this storytelling can be and has been done under the auspices of conceptual analysis - analysing the concepts of perception, or justification, or whatever. But at the end of it, if it’s not a story about how we get from just this to all that, it’s not an answer to scepticism. That’s the story I don’t think can be plausibly told. And if you can’t tell that story, then you need to tell a different story about how an epistemically conscientious person will behave. I’ll sketch some of that story next time.

Saturday, July 30, 2016

Your defence of the Oxford comma sucks

The Oxford comma, also known as the serial comma, is the comma before the last item of a list. Some people usually put it in; some people usually leave it out. It’s the comma that appears in the first but not the second of these:


Tom, Dick, and Harry.
Tom, Dick and Harry.


I usually leave it out. When I was a little boy being taught how to use commas, I was taught the style that leaves it out. I got into the habit before I knew there was an alternative, and I haven’t seen a good reason to change. I’ve changed other writing habits I was taught as a child; for example, I was taught to type two spaces after a full stop, and to write “realise” and “jeopardise” instead of “realize” and “jeopardize”. Now I put one space after a full stop and write “realize” and “jeopardize”. I won’t bore you with the reasons for these changes, though if you’re keen then there are people to bore you with them here and here. Some people think there are also good reasons to buy into the Oxford comma. Their reasons are silly.


The standard Oxonian argument rests on the ambiguity of phrases like the following:


I’d like to thank my parents, Marilyn Monroe and God.
We invited the lion tamers, Stalin and Nelson Mandela.


These phrases are ambiguous between their intended readings and ones that imply that Marilyn Monroe and God are the speaker’s parents and that Stalin and Nelson Mandela are the lion tamers. The Oxford comma would remove this ambiguity. The argument doesn’t work, for four reasons.


First, the examples are made up. If you want to show that not using the Oxford comma causes problems, the best evidence would be instances of it causing problems. If people never used the anti-Oxonian style, the use of hypothetical examples would be understandable. But lots of people use that style, and if it’s a problem then their use should supply evidence of this.


Second, the examples are not ambiguous in context. We all know that context usually removes ambiguity, and context includes common knowledge among speakers and listeners. It also includes expectations about the sort of thing people are likely to be saying. We know that Marilyn Monroe and God are not your parents and you wouldn’t say they were, and this removes the ambiguity. Ambiguous examples would be ones like these:


I’d like to thank my parents, Steve and Michelle.
We invited the lion tamers, Steve and Michelle.


But those examples aren’t the ones people use to defend the Oxford comma, and that’s another reason their defence of it sucks.


Third, the fact that using the Oxford comma would sometimes remove an ambiguity does not mean you have to use it all the time. It is quite normal to use optional extra commas or other punctuation when leaving them out would make for problematic ambiguity. Every sensible writer does this, unless their punctuation style is already maximally comma-heavy and so there are no optional extra commas to add. It’d probably be sensible to use an Oxford comma in the examples above about Steve and Michelle, or to rephrase them if you intended the other readings. But the argument that you should always use the Oxford comma because sometimes it resolves ambiguity and you should punctuate all your lists the same way would lead to a maximally comma-heavy style in general, and nobody wants that. (Well, maybe the New Yorker wants that, but nobody else does. I’m a big fan of the New Yorker, but I disagree with their style guru Mary Norris about almost everything.) And even if you’re willing to bite the maximally comma-heavy bullet, it won’t do any good. When lists contain items which themselves include commas, you eventually have to start separating them with semi-colons or the writing will be incomprehensible. Not even the New Yorker advocates separating all items in all lists with semi-colons, just to maintain consistency with a practice that is occasionally necessary.


Fourth, all the made-up examples can easily be changed into ammunition against the Oxford comma.


I’d like to thank my parents, Marilyn Monroe and God.
I’d like to thank my mother, Marilyn Monroe, and God.
We invited the lion tamers, Stalin and Nelson Mandela.
We invited a lion tamer, Stalin, and Nelson Mandela.


Here’s an example of this ambiguity occurring in the wild:


[Image: an example from the wild in which an Oxford comma invites the reading that Barack Obama is a Republican]