
Friday, September 8, 2017

Reviewing Non-Fiction Is Hard

Sometimes I read non-fiction books, and really enjoy them. “What an awesome book,” I’ll think. But it’s actually quite hard to tell if non-fiction books are any good or not. At least, it’s hard to tell just by reading them. I guess you could read a review. But someone has to write the reviews.


The reason it’s hard is that you’ll usually be in one of two situations. Either you’ll be an expert in the topic the book is about, or you won’t be. Suppose you’re not, and so a lot of the stuff in the book is new to you. You don’t know if the book is any good or not, because you don’t know whether the stuff in the book is right or not. You can try factchecking it, but even if you can track down the sources they’ll often be buried in difficult academic writing of a sort you’re not really competent to understand. And if the book you’re reading is any good, a lot of what it’s telling you won’t be checkable facts, but rather a kind of expert insight and analysis that you wouldn’t be able to reconstruct yourself. And of course some books contain original research, in which case they can’t really be checked because in a way they are the source. These three all blur into each other, but they all make it very hard for a non-expert to tell if a non-fiction book is any good just by reading it. (And if we’re being strict about “just by reading it”, you’re not allowed to factcheck it anyway! But let’s not be strict. It's hard in any case.)


A good example of this danger is Daniel Kahneman’s Thinking, Fast and Slow. Kahneman is one of the world’s top psychologists: he did pioneering work on cognitive biases with Amos Tversky, and he won the economics Nobel "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty". The book is about cognitive biases and decision-making under uncertainty, and the title refers to two styles of thought: one fast, automatic and not very conscious, the other slower, more deliberate and more conscious. It tells you about lots of cool little findings from psychological research, like how people are more likely to believe something they read if it’s written more legibly (pp. 62-3), and it integrates the anecdotes into a general narrative about the different ways people think.


When that book came out, it was very well received by people who weren’t already experts on the topic. I vaguely remember experts being more divided on it, but I’m not an expert and I thought it was awesome. I was entertained by the anecdotes, and I really felt like I was learning something. I told my friends the anecdotes as if we could be confident that they were true, and I recommended the book to them. But since it came out, psychology has had a bit of an existential crisis because lots of its little findings don’t replicate. A lot of the effects people found may well have been flukes that only seemed representative of how people behave because, when psychologists ran experiments and didn’t find anything cool, they didn’t tell anyone about them. Or they did, but nobody listened, which makes them less likely to bother telling people next time. Kahneman himself is very concerned about the whole thing, and he thinks he was a bit too credulous about some of the stuff in the book.


Now, I don’t want to set Kahneman up as some kind of fall-guy here. It may still be a good book, and the issues with some of the anecdotes may be kind of minor. He’s still a great psychologist and communicator, he was writing in good faith, and his willingness to publicly address problems with his own work is impressive and an example to his colleagues. The point is that I wasn’t competent to judge how good his book was. And to be honest, I still couldn’t tell you.


So, it’s hard to tell if a non-fiction book is good if you’re not already very familiar with the subject. Well, duh! But what if you are an expert? What if you could have written the book yourself? In that case you have a different problem: The Curse Of Knowledge. In his book The Sense of Style, Steven Pinker argues that the reason people often write badly is they struggle to imagine what it’s like not to know things that they in fact do know. You know what you want to say, but your reader has to try to work it out from what you’ve written, and it’s hard to put yourself into their mind and work out whether what you’ve written would still be clear. And if you’re trying to explain something you already understand but they don’t, you have to put yourself into their mind and work out whether they can understand the thing you’re trying to explain on the basis of what you’ve written. That’s hard too. Now, imagine you’re an expert reading a book by another expert on the same thing, and you’re trying to work out whether a non-expert will be able to understand the thing the book is about on the basis of what the book says. It’s not easy to do.


So, who should be reviewing books, if both experts and novices face systematic obstacles? Three suggestions come to mind.
  • Someone who is neither an expert nor a novice. What you want is someone who doesn’t already know what the book says, and so doesn’t have the curse of knowledge, but who’s competent enough in the sort of thing the book is about to be able to check if it’s right, once they’ve been told the things the book says. In general it’s often easier to check an answer to a question than find the answer. While I wasn’t competent to check if Kahneman’s book was any good, maybe a psychologist in a different field could have done it.
  • A great teacher. The curse of knowledge essentially arises because a certain kind of imaginative exercise is difficult. But some people seem to be quite good at it. Being good at it is part of the skillset of a great teacher: they need to be able to get into the minds of the students and tell whether what they’re saying would communicate the material to someone who wasn’t already familiar with it.
  • A novice and an expert working together. The two problems are pretty separate, so in theory the expert ought to be able to read the book to check that what it says is sound, while the novice reads it to see if they feel like they’re learning something. And after they’ve both read it, the novice and the expert can talk to each other so the expert can check that the novice really did learn the things they felt like they were learning.

My favourite is the last one. While an expert in an adjacent field might be able to do a decent job of factchecking, they won’t do it as well as an expert in the book’s own field, and it’ll be harder for them. And because they’re researchers themselves, they’ll still have to do a bit of difficult imagining to get into the minds of the intended audience. A great teacher might be able to do the job, but we don’t even have enough great teachers to fill all the teaching jobs, let alone all the reviewing jobs as well. That leaves the last option. Unlike great teachers, experts are ten a penny, and novices are twenty a penny. Of course, you do need two people. But it’s still my favourite option, and it’s kind of odd that it practically never happens.

Wednesday, May 8, 2013

Risky business


I don’t know how many of the millions of people who’ve bought Daniel Kahneman’s Thinking, Fast and Slow have read it, but I have and I thought it was very interesting. One of the things he talks about in chapters 25-26 is risk aversion. Lots of people won’t take a bet to either gain $200 or lose $100 on a coin-toss, and that seems to mean they’re risk averse. They stand to gain more than they stand to lose, and the chances are equal, but they won’t take that chance. Regular readers may remember risk aversion coming up once before when I was talking about Deal or No Deal.

Kahneman says that for a long time economists thought that people were risk averse when it came to money but not when it came to utility (or at least idealized them that way). Your first million makes a bigger difference to you than your second, and maybe it even makes a bigger difference than your second and third put together. In view of that, maybe your last $100 makes more of a difference than your next $200. If that’s right, you’re not rejecting the bet out of risk aversion; you’ve just got a proper appreciation of the diminishing marginal utility of money.
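
To put a rough number on that thought, here’s a minimal sketch in Python (my own example, not anything from the book): assuming logarithmic utility of wealth, diminishing marginal utility alone does rationalize turning down the gain-$200/lose-$100 coin toss, but only for someone whose total wealth is around $200 or less.

```python
# A toy expected-utility check with u(w) = ln(w) (my choice of utility function).
import math

def accepts_bet(wealth, gain=200, loss=100):
    """True if expected log-utility of taking the 50-50 bet beats standing pat."""
    return 0.5 * math.log(wealth + gain) + 0.5 * math.log(wealth - loss) > math.log(wealth)

for w in (150, 300, 1_000):
    print(w, accepts_bet(w))
# -> False at $150, True at $300 and $1,000; the tipping point is a total wealth
#    of exactly $200, where 100 * 400 = 200**2 makes the agent indifferent.
```

So on this (admittedly hand-picked) utility function, anyone with more than a couple of hundred dollars to their name takes the bet, which already hints at the problem the next paragraph turns to.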

The problem with this line of thought is that while it can rationalize refusals that seem like sensible instances of monetary risk aversion, it can only do so by attributing to people utility functions which also rationalize insane-seeming pieces of (monetary) risk aversion. Matthew Rabin showed this in a technical paper, and he and Richard Thaler wrote an entertaining paper about it which references Monty Python’s dead parrot sketch. The idea is that if diminishing marginal value of money is all that is going on, then someone can’t rationally reject one fairly unattractive bet without rejecting another very attractive bet. Their first example is that if someone will always turn down a 50-50 shot at gaining $11 or losing $10, then there’s no amount of money they could stand to win which would induce them to take a 50% risk of losing $100. They have several other examples, including ones which remove the ‘always’ caveat, only demanding that they would still turn down the first bet even if they were quite a bit richer than they are now. The basic idea is that the utility of money has to tail off surprisingly quickly to rationalize rejecting the small bet, and if it tails off too quickly you’ll have to make odd decisions when the stakes are high. They’ve thought of the obvious objections, and the reasoning is hard (for me) to argue with.
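
Here’s a rough numerical illustration of the calibration point, not Rabin’s actual proof: I’ve picked a CRRA utility function and a wealth level of $20,000 (both my own assumptions), asked how much curvature it takes to reject the small bet, and then checked what that same curvature says about a 50-50 lose-$100/gain-$10,000,000 bet.

```python
# Toy calibration with CRRA utility (my construction; the numbers are illustrative).
W = 20_000  # assumed wealth level for the example

def u(wealth, r, base=W):
    """CRRA utility, normalized so that u(base) = 0; r is relative risk aversion."""
    return ((wealth / base) ** (1 - r) - 1) / (1 - r)

def rejects(gain, loss, r, wealth=W):
    """True if an expected-utility maximizer turns down a 50-50 gain/loss bet."""
    return 0.5 * u(wealth + gain, r) + 0.5 * u(wealth - loss, r) < u(wealth, r)

# Smallest integer curvature that rejects the 50-50 lose-$10/gain-$11 bet:
r = next(r for r in range(2, 1000) if rejects(gain=11, loss=10, r=r))
print(r)                                        # -> 182
print(rejects(gain=10_000_000, loss=100, r=r))  # -> True: the huge bet gets rejected too
```

Relative risk aversion is usually estimated to be somewhere in the low single digits, so needing a value near 180 just to explain the small refusal is the ‘tails off surprisingly quickly’ point in numbers.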

Now, what Thaler and Rabin reckon is going on is loss aversion. The reason you won’t take the $100-$200 bet is that you recoil in horror at the thought of losing $100. There’s plenty of behavioural economics research (I’m told) showing that people can’t stand losing even if they’re pretty chilled about not gaining, and that’s why Thaler, Rabin and Kahneman think that’s what’s going on. Thaler and Rabin say it’s not just loss aversion either, it’s myopic loss aversion. The reason it’s myopic is that you’d take a bunch of $100-$200 bets if you were offered them at the same time, because overall you’d probably win big and almost certainly wouldn’t lose. But if that’s your strategy then you should take the bets when they arise, and in the long run you’ll probably end up on top.
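
The ‘bundle of bets’ arithmetic is easy to check. This is my own back-of-the-envelope computation rather than Thaler and Rabin’s figures, and the choice of how many bets to bundle is arbitrary:

```python
# Exact chance of ending up behind after n independent 50-50 gain-$200/lose-$100 bets.
from math import comb

def prob_net_loss(n, gain=200, loss=100):
    """P(total winnings are negative) when each of n coin-flip bets is taken."""
    return sum(comb(n, k) for k in range(n + 1) if k * gain - (n - k) * loss < 0) / 2 ** n

for n in (1, 10, 20):
    print(n, prob_net_loss(n))
# -> 0.5 for a single bet, about 0.17 for ten bets, about 0.06 for twenty,
#    while the expected profit is $50 per bet taken.
```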

I agree that people are myopic, and they don’t always see individual decisions as part of a long-term strategy where losses today get offset by the same strategy’s gains tomorrow. I think Thaler and Rabin have missed something when they invoke loss aversion, though. This is because you can set up the “if you reject this bet then you’ve got to reject this attractive bet” argument without doing anything with losses. Suppose I offer people a choice of either $10 or a 50-50 shot at $21. Sure, some people will gamble, but aren’t lots of people going to take the $10? If they haven’t already, some behavioural economists should do that experiment, because if people reject the bet then Rabin’s theorem will kick in just the same as before and lead to crazy consequences. The difference is that this time you can’t explain the refusal as recoiling in horror at the prospect of losing $10, because the gamble doesn’t involve losing any money. It just involves not winning some money, and people are relatively OK with that. (Notice that choosing not to gamble also involves not winning some money.) If you object that the non-gamblers want to make sure they get something, then change the set-up (if your budget stretches that far) to either $20 guaranteed or a 50-50 gamble for $10 or $31. It still works, and I bet plenty of people will still take the $20.
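
The reason Rabin’s theorem kicks in just the same is that, under expected utility, choosing between a sure $10 and a 50-50 shot at $21 from wealth w is literally the same comparison as accepting or rejecting a 50-50 lose-$10/gain-$11 bet from wealth w + 10. A minimal sketch (the function names and wealth levels are my own, purely illustrative):

```python
# The two framings of the choice are one and the same expected-utility comparison.
import math

def takes_sure_10(u, w):
    """Prefers a sure $10 to a 50-50 shot at $21, starting from wealth w."""
    return u(w + 10) > 0.5 * u(w) + 0.5 * u(w + 21)

def rejects_small_bet(u, w):
    """Rejects a 50-50 lose-$10/gain-$11 bet, starting from wealth w."""
    return u(w) > 0.5 * u(w - 10) + 0.5 * u(w + 11)

u = math.log  # any increasing utility function would do here
for w in (100, 1_000, 20_000):
    # Whatever the utility function says, it says the same thing in both framings.
    assert takes_sure_10(u, w) == rejects_small_bet(u, w + 10)
```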

Now, what I think is going on is myopic risk aversion. I don’t see that there’s much wrong with risk aversion in itself. If you could choose either a life containing a million hedons or a 50-50 shot at either a thousand or two million, I’d understand if you took the million. Only a real daredevil would gamble. And when John Rawls is putting whole-life choices before people in the Original Position, he won’t assume they’re anything less than maximally risk averse. Maybe Rawls has gone too far the other way, but I’d definitely want to see a pretty good argument before believing that the cavalier attitude of the expected-something maximizer is rationally obligatory.

Now, mostly when we make decisions they’re small enough and numerous enough that a fairly cavalier strategy has a very low risk of working out badly overall. Applying original-position thinking to the minor bets offered by the behavioural economists in the pub is confused. It feels like you’ve got a 50% chance of getting the bad outcome, but seen in the context of a more general gambling habit the chances of the bad outcomes are actually very small even with the cavalier strategy, and since its potential payoffs are much higher, you’d have to be very risk averse overall to turn down the gamble. You’re very unlikely to be that risk averse all things considered, although perhaps Rawls was right that it’s cheeky to make assumptions.

So that’s what I think’s going on. Loss aversion is real, but it can’t do the work Thaler and Rabin want, either in straightforward form or myopic form. I think the real culprit is myopic risk aversion. Overall risk aversion is rationally permissible, but myopia isn’t and can result in individual decisions looking more risky than they really are. Unless the stakes are really high, like on Deal or No Deal.