The article's answer of 30% is only correct in a world where lawyers and engineers all have the same characteristics. That the sampled individual is a man will skew things all by itself.
Of the lawyers, approximately 40% will be women, versus only 11% of the engineers. So our sampled person could be one of ~27 male engineers or 42 male lawyers - we've already bumped P(eng) from .3 to .39! That he likes math puzzles easily takes P(eng) to over .4, meaning the answer can't be A).
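A quick sanity check of that arithmetic (the gender splits are the rough figures above; the math-puzzle rates are illustrative guesses, not data):

    # Base rates from the quiz: 30 engineers, 70 lawyers.
    # Assumed splits: 11% of engineers and 40% of lawyers are women.
    male_eng = 30 * (1 - 0.11)   # ~27 male engineers
    male_law = 70 * (1 - 0.40)   # 42 male lawyers
    print(round(male_eng / (male_eng + male_law), 2))  # 0.39

    # Illustrative guess (not data): 30% of engineers vs 10% of lawyers
    # enjoy math puzzles.
    math_eng, math_law = male_eng * 0.30, male_law * 0.10
    print(round(math_eng / (math_eng + math_law), 2))  # 0.66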
I'm just gonna quote what I said when this was on Metafilter a few months back:
"Anyway, yes, the fundamental problem in these questions is not how people think -- it's that the question they want the respondent to think they're asking, and the question they claim they're actually asking, are two different things.
The lawyer/engineer one is a classic example of this; what they hope is that you will read it as "how likely is it that these personality traits correlate to an engineer", so that they can then swoop in and say "what we were really asking is the mathematical definition of a percentage!"
Which ultimately tells us very little about the respondents and quite a bit about the people conducting the quiz..."
The author of this quiz seems to have completely misunderstood the relevant research. Either he has to assume that all individuals are identical (in which case the little story is irrelevant), or he needs to apply Bayes' rule according to the probabilities associated with the factors expressed in the personality exposition.
Either way, the article's explanation for that particular question is wrong.
Exactly. Here's the example used in Kahneman's book:
"Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.
This description was intended to convey no information relevant to the question of whether Dick is an engineer or a lawyer."
Yes, there's more to it. The experiment was done with 70/30 and 30/70 ratios for different subjects. The book doesn't say whether they specified they were all males, my guess would be that they did.
A frequentist would take issue with the "two sons, one born on a Tuesday" problem. You can actually count up the permutations.
Let's say we have 10 engineers, 9 of whom are male, and 10 lawyers, 6 of whom are male. Let's also say one in 10 people likes doing math on the weekend.
That gives 9 cells out of 100 for a male engineer who likes math, but only 6 out of 100 for a male lawyer who does. Furthermore, if we add in the four kids as another 1-in-10 thing, the situation gets even worse for the lawyer. This isn't Bayesian, this is just counting boxes on a permutation table.
> This isn't Bayesian, this is just counting boxes on a permutation table.
It's the same thing. Bayes' theorem allows you to shortcut straight to the answer without having to draw out a full probability tree / permutation table. But the underlying math is the same - in each case you have a different probability of B given A, versus B given (not A).
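A minimal sketch of that equivalence, using the toy numbers from the comment above (the 1-in-10 math rate is the same stand-in assumption):

    # Per profession, picture a 10 x 10 grid: 10 people wide, 10 equally
    # likely math-preference slots deep (exactly 1 slot = likes math).
    eng_boxes = 9 * 1  # male-engineer-who-likes-math cells out of 100
    law_boxes = 6 * 1  # male-lawyer-who-likes-math cells out of 100
    print(eng_boxes / (eng_boxes + law_boxes))  # 0.6

    # Bayes' theorem shortcuts to the same number (the 1-in-10 math factor
    # cancels because it's assumed equal for both groups):
    prior, p_male_eng, p_male_law = 0.5, 0.9, 0.6
    print(round(prior * p_male_eng / (prior * p_male_eng + prior * p_male_law), 2))  # 0.6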
Personally I wouldn't call it "Bayesian" so much as just "a conditional probability." The question doesn't ask what the probability is that a randomly selected participant is an engineer, it asks what the probability is that a participant is an engineer given that he has "typical" engineer-like traits.
But then yes, ideally you could use Bayes' Rule to find that probability.
A better way to explain the common response to question #2 would be the "quiz questions usually only give me information relevant to their answers" heuristic.
It's worse than that - the probability is biased as you say if the sample were chosen randomly, but they did not claim that.
Given that the only distinguishing characteristic mentioned is a boolean (engineer or lawyer), and that they only chose one sample, the probability might just as well be 50% - the probability of the person picking the sample deciding one way or the other.
30 engineers, 70 lawyers, the probability of being an engineer is 30%. What Jack likes to do is irrelevant. Why can't a lawyer like math and dislike politics (winning a case framed by certain rules is just a puzzle/game to hack)?
To the question "Are you an engineer?", Jack answered "Yes".
Would you still argue that the probability is 30% that he is an engineer? A lawyer can claim to be an engineer, after all. However, I think it is clear that if we actually did the experiment, it would be much more likely than 30% that Jack is an engineer. A way of testing what you really believe the probability to be is this: I bet you a dollar that Jack is an engineer. If you wouldn't take that bet, that means you really believe the probability to be larger than 50%.
This is because the probability that he answers yes to the question is much higher when he is in fact an engineer than when he is a lawyer. Bayes' law says:
P(E|Y) = P(E) * P(Y|E)/P(Y)
You should read P(A|B) as "the probability that A is true given that B is true". In this case E = "a person is an engineer" and Y = "a person answers yes to the question 'are you an engineer?'". As you can see the original P(E) = 30% gets multiplied by P(Y|E)/P(Y) given the information that the person answered yes. The probability that a person answers yes given that he is an engineer is higher than the general probability that a person answers yes. So P(Y|E)/P(Y) > 1. So P(E|Y) > 30%.
This same law applies to other characteristics, for example Y = "person likes mathematics".
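A small sketch of that update; the honesty rates are made-up placeholders, since the thought experiment doesn't pin them down:

    def posterior(prior_e, p_yes_given_e, p_yes_given_l):
        # P(E|Y) = P(E) * P(Y|E) / P(Y), with P(Y) expanded over both cases.
        p_yes = prior_e * p_yes_given_e + (1 - prior_e) * p_yes_given_l
        return prior_e * p_yes_given_e / p_yes

    # Placeholder rates: engineers almost always say yes; lawyers rarely
    # claim to be engineers.
    print(round(posterior(0.30, 0.95, 0.05), 2))  # 0.89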
What Uhrrr is saying is that prob(person is an engineer | named Jack, has Jack's description, and initial 30/70 ratio) > prob(person is an engineer | 30/70 ratio) = 30%. I think the article goofed on that question too. I'd rather have seen an illustration of the conjunction fallacy, which is similar. Given Jack's description, what's more probable: Jack is a lawyer, or Jack is a lawyer who likes classical music? (The first one, but people overwhelmingly pick the second one.)
It's not that a lawyer can't like maths or dislike politics. The issue is whether they are less likely to on average. From my personal experience of knowing several of both groups, I would say that on average the lawyers I know are less interested in maths than the engineers and more interested in politics than them. It doesn't apply universally (some of the engineers I know are obsessed by politics, just not all of them and not as many of them as lawyers). It's possible that there may be research that demonstrates this doesn't hold when you look at the sum total of lawyers and engineers, but that's not what the author is trying to rely on.
Assuming that there is a skew of preferences, this info isn't irrelevant: you can perfectly reasonably use it to help identify the likelihood of this person being in one group or the other. It doesn't guarantee that you're right, but it will improve your chances.
I agree. In many areas of pop-culture we seem to have a lot of people trying to convince us that "We don't know what we know", often with hilarious results. Like the EU officials who ruled that "drinking water has not been shown to reduce dehydration."
The more I see this trend, the more stubbornly I find myself clinging to "What I know"
I think the point that many are missing is that it is not known for certain that an engineer is more likely to enjoy certain hobbies over others. People use their personal experience to develop a heuristic, which this test is designed to reveal.
Getting hung up over the specificity of the hobbies and interests, and the likelihood of those hobbies and interests representing either a lawyer or an engineer, is irrelevant, because the only factual data provided by the questioner is that 30% of the participants were engineers and 70% were lawyers.
It's not "getting hung up" about the specifics. They are relevant, and clearly deliberately chosen. You're right that we don't know this likelihood with absolute 100% certainty, but that doesn't mean we should dismiss our personal experience, and a bit of logic (maths is typically more useful for, and a more practical path into, engineering than law, and there are far more politicians in my country with a background in law than with a background in engineering), out of hand.
What this article is trying to present is heuristic errors - like question 1, where ignoring the fact that sample size is relevant gets you the wrong answer. Ignoring the likelihood that there is a correlation between personal interests and career choice seems to me to be the equivalent heuristic error for this question.
Let me give you an alternate example. There are roughly 700 million Europeans and roughly 300 million Americans. If I randomly picked one person out from this, gave you no other information and asked you where they came from, you'd have a 70% chance of guessing correctly by saying "Europe". If I told you that their first language was English, that they loved American football and baseball and hated soccer, and that their favourite TV show was Conan, and then asked you to guess where they came from, it would be hugely naive to ignore that information and still assume that they were probably European. Yes, it's entirely possible that there are Europeans who fall into all of those things, and I've not done a survey to find the exact percentage of each group that answer this description, but I'd be prepared to put a fair amount of money on the fact that there's a larger overall number of Americans who answer it than Europeans, so the smart guess would now be that they are American.
I think it still depends on which school you belong to, Bayesian or frequentist. A real frequentist may not assign a probability to a single instance at all - he is either an engineer or not!
And the killer mistake in #2 is that "he shows no interest in political and social issues." Law is inherently political (both big and small P), even more so in the USA, where judges etc. are elected.
I would bet good money that he's more likely an engineer than a lawyer.
The questions involving "90% chance of $1000 or 100% chance of $900" always bother me. I never understand why economists think that a rational actor would consider them equivalent; they're not, unless you are making that choice many many times.
But if I'm given that chance once (which is presumably what most participants assume, since that's not a choice that comes up often in one's life), it's really a choice between "90% chance of a ton of free money" and "100% chance of a ton of free money". Unless the dollar amounts are radically different, who in their right mind would take the choice that could possibly leave them without a life-changing sum the next day?
In short, there are three types of people: risk-averse, risk-neutral, and risk-preferring. (In the general case, people can exhibit all three types of behavior at different income levels, but let's keep things simple).
Imagine a graph, with income on the x-axis and utility on the y-axis. A risk-averse person will have a concave utility function (like a square-root function), whereas a risk-neutral and a risk-preferring person would have a straight-line and a convex utility function, respectively.
You have two income levels: $0 and $1000. Now take the two points (0, U(0)) and (1000, U(1000)) and connect them with a straight line. Since we're dealing with a 90% chance = .9 probability, find the point on that line which is 90% of the way between the two points (closer to the second point). This point represents the utility received from the risky situation, i.e. the expected utility. This is not the same thing as the utility of the expected value! Call this point A, with coordinates (Ax, Ay); note that Ax = $900, the expected value of the gamble.
Compare Ay, the expected utility of the gamble, with U($900), the utility of the sure thing.
For a risk-neutral person, the two values will be exactly the same, as the utility function is a straight line. For a risk-averse person, the second value will be higher, as the utility function is concave with respect to the origin. For a risk-preferring person, the first value will be higher, as the utility function is convex with respect to the origin.
In practice, the utility function may not have a constant concavity, which explains why people buy insurance (which is only justified under risk-averse behavior) yet also buy lottery tickets or gamble at casinos (which is only justified under risk-preferring behavior).
> Who in their right mind would take the choice that could possibly leave them without a life-changing sum the next day?
As you can see, the answer to your question is, 'A person who is risk-preferring' (or operating under risk-preferring situations which are quite common in practice).
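A minimal sketch of the construction above, with square root, identity, and square as stand-in utility functions for the three types (assumed shapes, not fitted to anybody):

    import math

    def expected_utility(u, outcomes):
        # Probability-weighted utility over (amount, probability) pairs.
        return sum(p * u(x) for x, p in outcomes)

    gamble = [(1000, 0.9), (0, 0.1)]  # 90% chance of $1000
    sure = [(900, 1.0)]               # 100% chance of $900

    for name, u in [("risk-averse (sqrt)", math.sqrt),
                    ("risk-neutral (linear)", lambda x: x),
                    ("risk-preferring (square)", lambda x: x ** 2)]:
        g, s = expected_utility(u, gamble), expected_utility(u, sure)
        pick = "indifferent" if g == s else ("gamble" if g > s else "sure $900")
        print(f"{name}: {pick}")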
Distinguishing between risk-averse/neutral/preferring seems like begging the question to me. Couldn't there be an objective answer to which of three behaviors is the most rational in some situation?
The term "rational" has a very specific meaning for economists: an actor behaving "rationally" has a utility function that satisfies some set of properties and, when confronted with choices, chooses in a way that maximizes that utility function.
Right but which metric is reasonable can (and I believe should) change with scale. I'm risk-preferring when it comes to small amounts -- opportunities to make such decisions come up all the time, so I'll be likely to realize the mean. But I'm risk-averse when it comes to large amounts -- I may only get to make one such decision in my life. Better to minimize the variance here.
This not only explains why people play lottery and buy insurance (playing the lottery non-compulsively involves risking only small amounts of money; not having insurance involves risking large amounts of money), but it also explains why those close to retirement should have risk-averse portfolios, while the young should have risk-preferring portfolios (those close to retirement have few "samples" left to take as it were).
The feeling-stupid part got me here. After that, I understood a little bit more. So thanks for that.
But I really believe that it comes down to how often these chances come up in one's life.
You can see it in the game show "Who Wants to Be a Millionaire". In Germany we have a fourth lifeline: contestants can give up the €16,000 safety net in exchange for an additional lifeline. So far, none of the winners who got the million took this option.
€16,000 is a lot of money, considering that you started with 0. On the other hand, €500 (the second safety net) is really not much when you are sitting at €125,000 and have to take a shot at the million - and falling down to €500 feels a lot more stupid. So people become risk-averse (risk of losing a lot and feeling stupid) much faster and don't trust their answer when gambling for the million.
The interesting thing is not that people are risk averse (and thus choose 100% chance of $900), the interesting thing is that people become risk seeking when it comes to losses (and thus choose 90% chance of -$1000).
You're creating a bit of a straw man when it comes to behavioral economists and their view of "rational actors" - the whole field is built around the understanding that there is more to economic decisions than expected value.
That's because Kahneman gets the signs wrong in the OP.
Question 5a is: $900 @ 100%, or $1000 @ 90%
Most people choose $900 @ 100%
Question 5b is: -$1000 + $100 @ 100%, or -$1000 + $1000 @ 10%
Most people choose -$1000 + $1000 @ 10%
Written this way, the false symmetry vanishes, and we see that in both cases, people are risk averse when the payoff is low, and risk-seeking when the payoff is high. Which is to say, people value life-changing sums super-linearly as compared to insignificant sums.
But that's not what the research was about, if I'm understanding it correctly (the research has been mentioned a lot in popular economics literature, so I think I am understanding it correctly, at least at a high level). The conclusion of the research is that people are more risk-averse when it comes to losses - i.e., they'd rather forgo winning $100 than lose $100.
If I'm understanding you correctly, you're saying it's about the amounts; and I think I'd have to agree with you that for the examples to be equivalent, the amounts in 5b should be multiplied by 10. But then again, maybe that would introduce another comprehension hurdle or cognitive correction effect which would render the experiment invalid. Interesting question to ask the original researchers, although I presume that by now (after decades) they've addressed it somewhere already :)
Ya, my theory holds for negative sums too -- I'd rather have a chance of not losing [large amount of money] than lose [almost as large amount of money] for sure. A $900 loss could be enough to bankrupt me -- I need to take the 10% chance.
The problem with that question, to me, is that the 10% difference in outcome is nearly inconsequential. By the time I'm losing (or gaining) $900, what's another $100? If the proportions were different, I think the question would be more interesting.
Yes, however there is a slight chance of winning the odds and not paying anything. The risk is minimal. If the numbers were further apart I don't think the same logic would kick in.
A = $900 lost
B = $9000 lost with a 90% chance
Those are odds I wouldn't want to try. I would take the instant loss knowing that my odds are just not good enough to win if I were to choose B.
The reason that $1000 at 90% odds and $900 at 100% odds are used in this example is the expected value is the same in both cases, making the situations 'equivalent'.
A 90% chance of losing $9000 has an expected value of -$8100.
It seems odd to me that this disproves Bernoulli theory "that a person’s willingness to gamble a certain amount of money was a product of how that amount related to his overall wealth".
If I could pull $900-$1000 from my savings with no immediate consequences, I'd be more likely to spend the $900 at 100%. But if losing $900-$1000 means I'll have to tell my landlord I'll be late with the rent, then find someone to borrow it from and pay it back with interest, the extra $100 isn't significantly more crippling - it's the transaction cost of going through all this bother that's problematic - so I'll take the 10% chance.
Come to think of it, I actually did something like this: prior to moving abroad a while ago, I consulted a lawyer to make sure I did everything right to avoid double taxation. That was taking on a 100% chance of a rather big expense to avoid an unknown chance of an even larger expense.
It could also be that the utility of money is perceived to be logarithmic, and then it depends on the base the individual uses in the personal utility function (which probably depends on the person's net worth) which choice is rational. For example,
ln 900 > 0.9 * ln 1000.
But the point is that research has shown that almost all people have this bias that makes them much more risk-averse when avoiding losses than when they don't already have the money.
We can speculate a lot on the reasons, but that speculation isn't tested yet.
Economists do not all think that way. Just as there are many differing perspectives in other sciences (feel free to cringe if you are a mathematician, chemist, etc.), there are a range of schools in economics as well. Though I cannot fully delve into the topic at this time, variance is certainly taken into consideration by many economists. Depending on the situation, (number of betting cycles etc.) variance is very important. So, in a one-off bet, it makes sense that the utility between these choices would differ.
I tried to find a good resource that further explains this but wasn't able to quickly find one. If I am able, I will try and post one later today.
On a final note, all of the situations in this quiz are easily correctly answered (or in some cases, predicted) with a basic knowledge of statistics and economics.
By rescaling the numbers, the two questions can be turned into:
- "Would you pay $900 for a 90% chance to win $1000?"
- "Would you pay $100 for a 10% chance to win $1000?"
So the distribution of results really is different. It's not just a phrasing trick. I would still say no to the first and yes to the second. I like positive outliers more than negative ones. This doesn't seem irrational to me.
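That preference is measurable, incidentally: under the rescaling above the two bets have identical expected value and even identical variance; only the skew differs. A quick sketch:

    def moments(outcomes):
        # Mean, standard deviation, skewness of (net payoff, probability) pairs.
        ev = sum(p * x for x, p in outcomes)
        var = sum(p * (x - ev) ** 2 for x, p in outcomes)
        skew = sum(p * (x - ev) ** 3 for x, p in outcomes) / var ** 1.5
        return round(ev), round(var ** 0.5), round(skew, 2)

    # Bet 1: pay $900 for a 90% shot at $1000 -> net +$100 or -$900.
    # Bet 2: pay $100 for a 10% shot at $1000 -> net +$900 or -$100.
    print(moments([(100, 0.9), (-900, 0.1)]))  # (0, 300, -2.67)
    print(moments([(900, 0.1), (-100, 0.9)]))  # (0, 300, 2.67)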
There's also the fact that we usually cannot estimate the probabilities of events with that kind of precision.
I'd say that most people rarely have enough data to split probabilities into more than about five bins: ~0%, fairly unlikely, coin flip (~50%), fairly likely, and ~100%.
I agree completely with a minor tweak:
"Would you RISK $900 for a 90% chance to win $1000?"
"Would you RISK $100 for a 10% chance to win $1000?"
It's all about managing the risk: generally speaking, I can afford to risk $100, but not $900. Furthermore, I've already spent/borrowed the second $1000 (We're TAKING my existing money, which I presumably have already planned on having). So the first is a windfall, but the second is a real need. So I WON'T risk a large amount of money for a windfall, but I WILL risk a small amount in order to meet a real need. I'd say that's 100% rational.
I don't understand. Kahneman's whole thing is showing that it's wrong to assume that humans are rational actors.
Your second paragraph is exactly the point - the choices are not equivalent to humans.
Humans are rational, just not in the 'mathematically rational' sense, i.e., not just using statistics and probabilities to make decisions. As long as decision-making involves a process (even if it's unconscious) of trade-offs to optimize total utility, it's still rational (well, one can debate whether it should be called 'rational' or have some qualifier attached to it, but then it becomes a definitional question, and a rather boring one).
Agree 100%. There is so much wrong with all of these questions, and that's only one of the things wrong with this particular one.
In addition to that, there's very good reason for someone to act differently when it's a gain vs. loss at stake. For one thing, this difference is the whole reason that an insurance industry can exist! (And insurance, in turn, is the only reason many kinds of utility-enhancing ventures can exist at all!)
I used to wonder what the point of insurance was when you could just bear the risk yourself, but then I had an insight (that no one else arguing with me managed to bring up):
Utility as a function of how much of a good G that you have, usually increases at a decreasing rate. The first n units provide more of a utility gain than the 2nd n units, and so on. For much the same reason, losing your first n units isn't as bad as losing the 2nd n units, and so on. (It may help to visualize U(n) as a logarithmic curve.)
This is why people can rationally regard it as better to have a guaranteed loss of (at least) N rather than a (1/x) chance of losing x times N, while not also buying a lottery ticket for N that offers a (1/x) chance of gaining x times N. And that, in turn, shows the fundamental asymmetry between insurance and gambling.
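A rough sketch of that asymmetry under log utility; the wealth, premium, and jackpot figures are invented for illustration:

    import math

    def eu(outcomes):
        # Expected log utility over (final wealth, probability) pairs.
        return sum(p * math.log(w) for w, p in outcomes)

    W = 100_000                                  # assumed starting wealth
    loss, p_loss, premium = 50_000, 0.01, 600    # premium > expected loss of 500

    insured = [(W - premium, 1.0)]
    uninsured = [(W - loss, p_loss), (W, 1 - p_loss)]
    print("buys insurance:", eu(insured) > eu(uninsured))  # True

    ticket, jackpot, p_win = 600, 60_000, 0.01   # actuarially fair lottery
    plays = [(W - ticket + jackpot, p_win), (W - ticket, 1 - p_win)]
    print("plays lottery:", eu(plays) > eu([(W, 1.0)]))    # False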
Kahneman must be on the phone with VF right now demanding they correct this article in about ten places.
I think when the money you may gain or lose goes way beyond your possible wealth, you start to think in a really non-linear (non-rational) way.
But I agree that people at the same wealth level will weight the risk factor differently (in both gains and losses). In other words, a simple utility function is not enough!
The actual question is wrong. To illustrate the framing effect, which underlies prospect theory, the question should be:
1. Choose: a) 90% chance of $1000, or b) 100% chance of $900.
2. You just got $1000. Choose: a) 10% chance to lose $1000, or b) 100% chance to lose $100.
According to rational actor theory, people should choose the same thing both times. However, people often don't. That's the framing effect: decisions are different when framed as losses than as gains.
This builds the foundation for prospect theory. Rational actor theory says that your preferences are consistent. Prospect theory says that you have a reference point, and your utility function is inconsistent: it changes depending on your reference point.
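For the curious, the standard Tversky-Kahneman value function (with the parameter estimates from their 1992 paper) reproduces exactly this flip; a small sketch:

    def value(x, alpha=0.88, lam=2.25):
        # Tversky-Kahneman (1992) value function over gains and losses.
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha

    # Gain frame: a sure +$900 beats a 90% shot at +$1000.
    print(value(900) > 0.9 * value(1000))    # True: risk-averse for gains

    # Loss frame: a 90% shot at -$1000 beats a sure -$900.
    print(0.9 * value(-1000) > value(-900))  # True: risk-seeking for losses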
The book also discusses that. When the problem is _not_ about a life-changing amount, it is better to take the riskier choice when the expected utility is the same. The explanation is long-ish (and honestly, almost above my head - it took me a while to grok it) and involves the sum of all such incidents over a lifetime, and differences in accumulating utility vs. accumulating wealth.
Maybe someone who read the book more recently could take a stab at describing it.
As Kahneman says in the book, every experienced gambler and trader knows that "you win some, you lose some". Although we may not get multiple chances to repeat the same exact gamble, our intuition tends to lead us to minimize risk when the risk is negligible, and to maximize certainty when things are already pretty certain. This is clearly not the optimal approach. By relaxing our personal constraints a little and adjusting our strategy over the course of a lifetime and the many gambles we take (e.g., starting that web business), we may reap greater dividends than by taking our "default" human strategy.
I think in Kahneman's book (unless I'm recalling incorrectly), the situation wasn't "90% chance of $1000 or 100% chance of $900".
It was more like "90% chance of $1000, or 100% chance of $850" (i.e., a sure amount a little less than P(X)*X). That was the whole point: people are willing to pay a premium for certainty - and, conversely, are willing to pay a premium to turn a 0.01% chance into a 0% chance.
Same deal. $850 and $1000 pretty much both equate to "some large sum of money" in my mind.
Now if the sums were $8.50 and $10.00 instead, I'd likely make the more rational choice (90% of $10), because such choices with smaller amounts of money come up far more often in my life: the sample size will be large enough that the mean approaches the expected value.
That's the point - you're willing (we all are, usually) to pay a premium for that certainty. In the book he uses all sorts of figures or probabilities (I remember one case, when it was like 99% chance to win one million dollars, or 100% chance to win ${800,000, $600,000, $400,000} -- starts getting a little tricky there, right?)
The important thing is the order and context of the questions; both basically equate to a) would you gamble with $1000, and b) would you gamble with $100.
If given only question b and some time to think, you would probably answer no (depending on your risk preference). But because of question a, the amount and probability in question b seem insignificant, which decreases your risk-averseness.
A thousand dollars is not a life-changing sum. It's well within the range in which the utility of money is linear in the amount. (Even if it's large compared to what you have in your pocket, by the time you spend it, your life will be the same as it is now.) If we were talking about a million dollars, your answer would be right.
Such as? Even living below the poverty line in a poor country with soft currency (in which case you probably won't be reading HN or participating in psychology experiments) you're still likely to get through quite a bit more than that in a year.
"In their right mind" or "reasonable" is not what "rational" means in discussions of economics. There also isn't as much of an implication of rational=good and irrational=bad.
Kahneman's work is great, and deserves to have attention directed to it, but this Vanity Fair article is pretty bad. Why does the title say that Kahneman wants people to fail the quiz? That's wrong, and since it's attention-grabbing in a way that the truth isn't, I'd say it's dishonest. And the rest of it is riddled with errors and confusions.
So go read Daniel Kahneman's books, and don't read Vanity Fair.
Despite the article's imputation that heuristics are a quick-vs-accurate trade-off, Gerd Gigerenzer's work shows that they are in fact usually "quick AND more accurate" when used in the real, "large" world, versus the "small" world of games and logical puzzles where all of the rules are known, knowledge of the problem is perfect, and infinite time is allowed for optimization and calculation. The video here is well worth watching to counter-balance Kahneman's focus on edge cases where heuristics break down or the wrong heuristic was applied.
As someone who has had lots of economics/probability training, this quiz really doesn't do the research justice. It gets a point across, but could have been presented better. I was bothered by its misuse of terminology and the quiz not really being one.
Spoilers ahead!
1. The answer could be A or C, depending on how small and large the hospitals actually are. The small hospital could easily be expected to record fewer than 5% more such days than the large one (depending on the definition of "small" and on what "5% of each other" means); see the simulation sketch after point 3.
2. I don't see why the correct response must be 30%. Such attributes conditionally describe an engineer better than a lawyer. Assuming random sampling of lawyers and engineers, I'd be surprised if the answer isn't > 30%. I'm not sure what the numbers actually are, as I don't know how strongly this conditional applies to engineers or lawyers.
The point of the research is that people over-emphasize the conditional over the prior (indeed often ignore the prior), not that people should not use conditional information.
3. " it is likely that your answer to question (a) is positively correlated to your answer to question (b)"
I understand what they are trying to say, but the wording is quite off. Correlation is a property of data, not an individual point. If I'm the only person who ever takes this test, my answers have undefined correlation. The modifier "likely" is especially baffling (correlation is a constant property of data).
Proper (and less verbose!) terminology is "people's answers to (a) are positively correlated to (b)"
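To make point 1 concrete, here is a rough simulation of the hospital question; the 15- and 45-births-per-day sizes are assumptions, since the quiz never specifies them:

    import random

    def lopsided_share(births_per_day, days=20_000, threshold=0.6):
        # Fraction of simulated days where more than `threshold` of the
        # babies born are boys.
        hits = 0
        for _ in range(days):
            boys = sum(random.random() < 0.5 for _ in range(births_per_day))
            if boys / births_per_day > threshold:
                hits += 1
        return hits / days

    print(lopsided_share(15))  # ~0.15 at the small hospital
    print(lopsided_share(45))  # ~0.07 at the large hospital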
Not exactly the same, but my favorite visceral examples of imperfect heuristics are "garden-path sentences"[1]. These are sentences that trick you into having to backtrack when you're parsing them, which brings the whole parsing process into sharp relief.
To save everyone the trouble of reading a lot to learn a little: "man" is a verb in that sentence.
Though if I wanted to be clever, I could hammer that into a less garden-pathy sentence as follows:
"In my will, I left everything to the one who could use it best. I gave the boy the legos. I gave the girl the dollhouse. I gave the mother the kitchen set. The father the hunting rifle. The bachelor the suit. The old man the boat."
Dan Kahneman, Richard Thaler, and Dan Ariely have all published popular books on the subject. Thinking, Fast and Slow just came out last year. Predictably Irrational came out a few years ago. Nudge (Cass Sunstein and Richard Thaler) looks at the implications of behavioral economics for the law. There is a good summary article of that work here: http://www.law.harvard.edu/programs/olin_center/papers/pdf/2....
I personally think this work is pretty earth-shattering in the field, and that the above work is a must-read for anyone interested in economics. The engineer side of me is really attracted to the fact that behavioral economics uses legitimate experimental methodology, instead of mathematically-supported handwaving. And the implications of the work really turn some of our assumptions about the nature of the economic system on their head.
I'm afraid I don't quite understand the point of the second question. Does the answer mean to state authoritatively that engineers are no more likely to do carpentry and partake in recreational mathematics than lawyers, or is there another explanation?
If it were changed to something like
"2. A team of psychologists performed personality tests on
100 professionals, of which 50 were engineers and 50 were
nurses. Brief descriptions were written for each
subject. The following is a sample of one of the
resulting descriptions:"
"'Jack is a 45-year old man.'"
and if we were to assume that 80% of engineers are men, and that 80% of nurses are women, we'd expect 80% of the men to be engineers (and so expect Jack to be an engineer with 80% probability). Maybe?
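A one-line check of that guess, under exactly those assumed splits:

    # 50 engineers, 50 nurses; assumed: 80% of engineers are men and
    # 80% of nurses are women.
    male_engineers, male_nurses = 50 * 0.80, 50 * 0.20  # 40 vs 10
    print(male_engineers / (male_engineers + male_nurses))  # 0.8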
Question 4 - the theatre ticket - could be worded more clearly. I thought "as you enter the theatre" meant "as you enter the big room with the stage and lots of chairs". Who would buy a ticket after entering that room, no matter what the circumstance?
The question is really wrong. I've heard a similar question before and it goes more like this:
"A team of psychologists performed personality tests on 100 professionals, of which 5 were engineers and 95 were lawyers. Brief descriptions were written for each subject. The following is a randomly picked sample of one of the resulting descriptions:
'Jack is a 45-year old man who enjoys recreational Mathematics.'
Assume 90% of engineers enjoy recreational Mathematics and only 10% of lawyers do. What is the chance that Jack is an engineer?"
And the point is that the odds are still that Jack is a lawyer. The question as posed in the article, though, really doesn't make sense.
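Working that version through (straight Bayes, using the rates stated in the reworded question):

    engineers, lawyers = 5, 95
    p_math_eng, p_math_law = 0.90, 0.10

    p_eng = (engineers * p_math_eng) / (engineers * p_math_eng
                                        + lawyers * p_math_law)
    print(round(p_eng, 2))  # 0.32 -- still odds-on that Jack is a lawyer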
I recently passed (with high distinctions) three postgraduate course exams towards my master's degree (at a top-10 university). Ho hum, don't be impressed; that is not unusual. My strategy, though, is not textbook: I spent more time studying the conceits of the lecturer than I did the lecture material itself.
With this in mind:
The point we're supposed to accept is that when asked to evaluate the data sets presented, one heuristically makes assumptions based on social norms and crystallised intelligence. I'm pretty sure that this hypothesis was made in the article preceding the quiz, so no excuses for not realising the point of, well, any of the questions.
For the theatre question, I too started with your interpretation. However, it didn't make sense in the context of illustrating the author's hypothesis, so it couldn't have been the "right" answer.
Protip: when considering someone's line of argument in anything but a hard engineering discipline, I usually play the man before the ball - it's much more revealing.
I recently read Kahneman's 2011 book "Thinking, Fast and Slow" -- it should probably be required reading for everybody who's in charge of making decisions that affect a lot of people.
I have to second this advice, even though I'm halfway through it.
Kahneman has spent decades investigating biases and cognitive error. And it turns out he's not all that bad at popularising his and related research.
Of particular fascination (and frustration!) are the little examples he liberally sprinkles throughout the book. Small quizzes, questions and the like for the reader to try. Try as I might, I have consistently "failed" these in the way Kahneman goes on to explain.
He even explains why he does that: because it's hard to particularise from the general. It's easy to generalise from the particular.
I've found that many dimly held intuitions I've developed over the years about "how the world works" are starkly illustrated in the book. And many things I believed were not borne out by research.
Bottom line: it's a great book and very illuminating. Read it.
I'm halfway through it as well, but have a less positive impression. I came into it with a lot of respect for Kahneman and his research, but the first half of book is extremely flat. Much like the reaction here to this Vanity Fair teaser, I constantly find myself quibbling with the examples, and disagreeing with the explanations.
I'm not sure who to blame, though. I think the book was written over a considerable period of time. Perhaps Kahneman's standards have changed? Perhaps there were multiple editors involved, some of whom (like the Vanity Fair author) didn't really understand the material? Or perhaps the editor was great, but only worked on some of the chapters.
I'd offer a much less enthusiastic bottom line: It's a frustrating book, but you should read it anyway. Skip to Chapter 20 "The Illusion of Validity" if you get bogged down.
I agree that the writing style is flat, but it's also very thorough and methodical. More than once I've found myself thinking "but what about X?" and then there, on the very next page, he addresses X to a depth I'd never considered.
The biggest counterargument against his tests is that like a lot of sentences, they can be read in more than one way. Possibly for people with System 2 minds that like to dissect concepts, that's enormously annoying.
The Vanity Fair article says "Kahneman and Tversky debunked Bernoulli’s utility theory, a cornerstone of economic thought since the 18th century. (Bernoulli first proponed that a person’s willingness to gamble a certain amount of money was a product of how that amount related to his overall wealth—that is, $1 million means more to a millionaire than it does to a billionaire.)"
But in the lecture, Kahneman stated no qualms with utility theory. Rather, he pointed to the application of the theory, where Bernoulli had assumed a gain of $1 million is equivalent to $1 million appearing in their bank account, with the mechanism being irrelevant. That is what prospect theory (and the associated question) debunks.
I didn't much like this: "being swayed by the way in which questions are worded rather than responding just to their substance" (for the lost ticket being allegedly equivalent to $10).
Is a ticket, which costs $10, emotionally equivalent to $10? Once bought, the ticket is unique in my eyes, whereas I'm not even sure how many $10 bills I have in my wallet even now. So if I lose one, well, maybe it wasn't there in the first place.
I guess what I'm trying to say is that the impact of the ticket loss and the impact of loss of a $10 bill are not equivalent in terms of substance. Unless you're an economist.
That's kind of the whole point of the research. Neoclassical economics assumes certain things about peoples' utility functions that aren't true. For example, even though you can ascribe utility to going on a date, there should be no difference between not going on a date at all and having to cancel a date you had expected to go on. Obviously people don't view things that way. Or, to use another example from Kahneman's work, getting a $5,000 raise out of the blue is perceived as very different than getting a $5,000 raise after your boss had told you to expect a $10,000 raise. Or, for that matter, after your coworker got a $10,000 raise.
From the standpoint of neoclassical economics, these outcomes should be the same, but clearly they're not. Delving into the psychology of the rational actor in this way is a pretty fundamental change in economics.
I would dare say unless you're rational, not necessarily an economist. Most of us are not rational all the time, and the whole point of the question is to show an instance where we are not.
You would be correct if there were not other tickets available; in that case, the ticket truly is unique. But in the example given you could acquire another one. In that case the ticket is not unique and is equivalent to the cost of acquiring another one, which is $10.
But I believe it is the duty of the ticket vendor to keep track of who bought tickets. In protest, I would not replace a ticket if I lost it; but if I lost $10, that is completely my responsibility. Is this not rational, if I believe I can effect change through the action?
That's an entirely different issue than the one I was responding to. In that case, it would be rational if your actions could reasonably effect change, which in this case I don't think they would.
First, I don't see an obvious need for theaters to keep track of who purchased tickets--save the case of online transactions obviously. Granted, the whole example is contrived--but I don't see the utmost need for it.
And, even if they had a moral imperative to do so and didn't, this would mostly affect people--by definition--who purchased a ticket already, and either lost it or want to return it or something of that nature. It is safe to assume few people lose their tickets and of those that do, not all of them protest by not buying another one. Protests are usually only effective if they hurt the company in terms of reputation or money. Since that is unlikely (given the vague probabilities I mentioned and specially because you already paid for one ticket), your protest will most likely be in vain.
We could of course get into a philosophical debate over the worth of such protest, but this is neither the time or place. Protesting the theater record-keeping policies in the way you mention and for the reasons you mention would probably be, in my opinion and with all the information you provided, rationally irrational.
I actually find it very useful to emotionally detach from a commodity - that is treat all tickets to the movie theater as the same. I generally find it bad to fall into the sunk-cost effect and I am much happier when I avoid it.
A pet peeve of mine in this regard is about throwing away food that is left on a plate. Many people feel a moral duty (often instilled in them from a young age) to eat everything that is on a plate, even if eating that last 10% would make them feel uncomfortable for an hour and would have no utility from a dietary point of view (actually have negative utility, since it only increases their fat intake).
Yet I have a hard time convincing people of this, and some people sometimes feel the need to scold me for throwing away food. Despite that, I've been better off since I internalized the true nature of this (daily recurring) situation.
What I found weird about that case is that it did not consider the investment made by going to the theater. The time spent getting to the theater will be significant relative to that $10 ticket. And that is even ignoring travel and parking costs, and the fact that, by going to the theatre, you cut out other options for spending the night.
Because of that, I think most people would buy a new ticket.
Well I also didn't like that question but for a different reason.
"As you enter the theater, you discover that you have lost the ticket. The theater keeps no record of ticket purchasers, so the ticket cannot be recovered."
Considering how clever they were being in the first couple of questions, I assumed it was a trick. People typically give their tickets to the person at the booth outside the theater. Since I am in the theater already, I haven't lost my ticket - I simply gave it to the man as I walked inside and forgot that I did so.
Their interpretation of question 5 is just plain wrong. Aside from the fact that losses DO hurt more than gains (because you need to alter your plans to accommodate losses), the assumption of equivalence is fundamentally flawed.
The problem is that the two halves are inverted. Correcting:
5a: Your base position is $900, and you have the option of risking $900 for a possible gain of $100. You have a 90% chance of gaining $100 (a trivial amount) and a 10% chance of losing $900 (a big amount).
5b: Your base position is -$900, and you have the option of risking $100 for a possible gain of $900. You have a 90% chance of losing $100 (a trivial amount) and a 10% chance of gaining $900 (a big amount).
Since you're an individual, and not a statistical average, there's no middle ground in the values gained or lost.
So the rational person is right to choose A, then B. The two are NOT equivalent to an individual. The trivial amount makes little difference as a gain or a loss, but the big amount does.
What does "How many dates did you have last month?" even mean?
Maybe I have a different notion of "date" from the rest of the people... but I'd assume that if you have a stable relationship with a significant other, then your number could easily be higher than 5 in a month...
Otherwise, if it's counting all the different dates with all potential partners, 3 to 5 seems an exceptionally high number (in fact, I've had a grand total of 0 dates over the last 24 months, and I'm quite sad about that).
The thing is, question three is about dates and question three B is about how happy you are, so the two are supposed to be related. If it were question three and question four, maybe, but not when they put the two questions together.
"The framing effect is also used to explain the influence of positive and negative information on our decisions—for example, why consumers prefer to buy ground beef labeled 80 percent lean rather than 20 percent fat."
Then why don't politicians say "Employment rates are at 91%" instead of the depressing "Unemployment rates are at 9%"?
Sorry to say it, but this certainly smells like cargo-cult science. First, I'm not a scientist, and not even a Nobel-prize winner, but for example I cannot understand what question 3) has to do with "science". I did have 2 dates in the past month, but I'm at one of the lowest points of my life, because, guess what? the missing context really matters (I'm about to get divorced).
The same goes for question 4). I chose A, "Yes", assuming it was a good movie worth seeing, but lacking the above-mentioned "context" (is it "Forrest Gump" or "Delta Force IV"?), any answer you give is just to satisfy the interviewer. And yes, people DO go and see movies that they know are bad.
A little less reading on "behavioral economics" or "animal spirits" and a little more reading of Hume will do everyone a ton of good.
The questions aren't science - the statistics that predict that most people will get them wrong are.
Or rather - the theories which predict how people will (statistically) make sub-optimal decisions in certain circumstances are science: they make predictions that can be tested.
I believe paganel is referring exactly to the "sub-optimal decision" part, in the explanation (not the questions themselves).
Can you expand on that? The point of the article is that people will (statistically) get the questions wrong, or at least have biases in their responses.
For example, in the first question the grandparent poster may well have had good reasons to answer as they did, but that doesn't discount the fact that most people are vulnerable to the attribute substitution bias[1].
And yes, this quiz seems a load of underspecified cargo-cult horsecrap.
That's to be expected, isn't it? They are trying to show five different statistical effects in five questions. To do it properly they would have to ask tens of questions for each one, but I don't think that would work for a Vanity Fair sidebar article. Instead, they tried to find questions that would demonstrate the principles to most people.
What I meant (and what I believe paganel meant) is not that most people won't get such questions wrong, statistically. They probably will, and the original research probably shows that conclusively.
It's the part of this quiz/post that is supposed to explain why that irks scientifically inclined folks (as manifested many times in this thread :) Here the author missed crucial bits that made the question+explanation originally work, and all that's left is some well-meaning cargo cult nonsense.
Whether it appeared in a sidebar or not is no excuse for getting it wrong imo -- they could have stuck with the original questions if they didn't understand the implications of modifying their premises.
If you want the actual science instead of the quiz consider the book Judgment under Uncertainty: Heuristics and Biases. It's better than Hume. For something you can read right now that might get you to care: http://singinst.org/upload/cognitive-biases.pdf
After the first question: after being primed not to trust my instincts, I decided (SPOILER) it couldn't be (a), and thus guessed (c). Which I suppose reflects another important cognitive bias.
(Although if economics is Math then that is even better: economics is something that can make predictions, and isn't just the black magic that some seem to think)
The scientific method applies in deciding what statistical experiments to do.