A previously empty bucket. A man has filled it with exactly 50 red balls and 50 black balls (but you can't see inside as you pick). Choose a color, red or black; now reach in and pull out a ball. If the ball you pick matches the color you chose, you win $10000.
1. Which color should you choose?
2. How much would you pay to play this game exactly once?
New game: another bucket, which he again fills with 100 balls, each either red or black; but you do not know the proportion, only that the red and black balls total 100. Choose a color, red or black, and reach in and pull out a ball. If the ball you pick matches the color you chose, you win $10000.
1. Which color should you choose?
2. How much would you pay to play this game exactly once?
Notwithstanding a small, human preference for the color red, I don't know anyone who would pay the same price to play Game 2 as Game 1; yet, in Game 2, you know that the only choices are red or black balls. So knowing the distribution is 50/50 and blindly pulling one ball should be the same game as not knowing the distribution and blindly pulling one.
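That equivalence is easy to check with a quick simulation (my own sketch, not from the talk): if you model your ignorance in Game 2 as a uniform prior over the urn's composition, a blind pick wins half the time in both games.

```python
import random

def play(n_red, choose="red", trials=100_000):
    """Win rate when picking `choose` from an urn of n_red red and (100 - n_red) black balls."""
    wins = 0
    for _ in range(trials):
        ball = "red" if random.randrange(100) < n_red else "black"
        wins += (ball == choose)
    return wins / trials

# Game 1: known 50/50 urn.
p1 = play(50)

# Game 2: unknown urn; model ignorance as a uniform prior over 0..100 red balls.
trials = 100_000
wins = 0
for _ in range(trials):
    n_red = random.randrange(101)      # the host's hidden composition
    wins += play(n_red, trials=1)      # one blind pick from that urn
print(p1, wins / trials)               # both hover around 0.5
```

Any symmetric prior over the composition gives the same result; only an asymmetric belief about the host would change the odds.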
This modified version of Ellsberg's Paradox is supposed to show the difference between risk, which can be calculated, and uncertainty, which cannot. And what it shows is that human beings have an aversion to ambiguity.
But the other thing it may show is that people are susceptible to information bias.
The video comes from a talk, "WARNING: Physics Envy May Be Hazardous To Your Health," given at the MIT Sloan School of Management.
The paper runs 60 pages but is well worth reading.
In the video, Andrew Lo describes meetings with Mark Mueller, a physicist who became a portfolio manager, meetings that turned into a paper about a cause of financial crises: physics envy.
(6:47) Physics envy is this desire to be able to explain 99% of all economic phenomena with 3 laws. That's what physicists can do. In fact, we (economists) have 99 laws that explain maybe 3% of all phenomena.
But he asks whether the fault lies not with the "laws" themselves (i.e., the models) but with the way they are used.
Lo describes the history of economic analysis (8:12), starting with Paul Samuelson, who patterned his dissertation on physics. This set economics on the course of the efficient market hypothesis, in which "prices fully reflect all available information." Now that hypothesis is doubted; human beings have been discovered to be irrational.
He offers the above red/black ball example, and finds that the audience would pay $5000 to play Game 1, but much, much less to play Game 2. Game 2 just feels harder.
Tellingly, even after you explain this to people, they still don’t want to play the second game, which suggests that there’s money to be made in the arbitrage.
Everyone on ycombinator is going to weigh in that my example is badly worded, not logically rigorous, etc. And they will find reasons why Game 1 actually is the better game to play. This only reinforces my point (and Lo's): to the extent that life can be rigidly defined, people will find reasons to choose the way they want to choose– good reasons, real reasons– that still don't have any effect on the game. For example, someone might complain that you don't know whether the man made the black balls slightly sticky; or whether they become redder over time, such that he can delay or accelerate your picking based on what he hears you choose.
All of these possibilities may be correct; or none of them may be correct; and all of those possibilities eventually yield a probability of 50/50. That’s how it goes.
For example, in game 2, don’t you still know that red is slightly preferred by humans? And the game host is human, which means there’s a slight chance that he’ll stack the balls in favor of red… should you therefore choose red? But the game host knows this as well; and he knows you know it; and he knows you know he knows it… ad infinitum, back to 50/50.
There’s another factor: if you choose game 1, and lose, that’s the way it goes. But if you choose game 2, and lose, people will think you’re an idiot for having decided to play game 2. That pushes you towards choosing game 1. Which is how many hedge funds choose stocks– no one blames you if you lose money on Apple or Google.
An interesting passage from the paper concerns Level 5 uncertainty: Black Swan events and the limits of probability theory.
The language of probability and statistics is so well-developed and ingrained in the scientific method that we often forget the fact that many probabilistic mechanisms are, in fact, proxies for deterministic phenomena that are too complex to be modeled in any other fashion.
Coin tosses are random, but since they exist in the physical world and are governed by physics, they would be deterministic if we were able and motivated to control or know all the conditions (e.g., a coin-flipping machine).
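The same point applies to pseudo-randomness in software; a tiny sketch (my example, not from the paper) of a "random" coin that is deterministic once you know its full state:

```python
import random

def flips(seed, n=10):
    """Flip a 'fair coin' n times using a PRNG seeded with a known state."""
    rng = random.Random(seed)
    return ["H" if rng.random() < 0.5 else "T" for _ in range(n)]

# Same state in, same "coin tosses" out -- the randomness is a proxy
# for a deterministic process we usually can't (or don't) observe.
print(flips(42))
print(flips(42))  # identical sequence
```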
A black swan event in the market offers opportunity for the application of other models. For example, an unexpected crisis may precipitate a selloff, but primarily a selloff in underperforming stocks (which are typically sold off first), so a mean reversion strategy would be to buy those underperforming stocks.
An even larger source of trouble is simply the people in the field.
Among the multitude of advantages that physicists have over financial economists is one that is rarely noted: the practice of physics is largely left to physicists. When physics experiences a crisis, physicists are generally allowed to sort through the issues by themselves, without the distraction of outside interference. When a financial crisis occurs, it seems that everyone becomes a financial expert overnight, with surprisingly strong opinions on what caused the crisis and how to fix it.
He offers three examples.
Is the science the problem?
Mortgages were packaged into CDOs under a model that took mortgages of similar credit quality but from different parts of the country and pooled them to offset the risk. For example, if $300k mortgages in Maine and California have nearly independent default rates, then pooling them protects the overall CDO, similar to diversification in a stock fund.
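That diversification logic can be sketched with a toy simulation (my own illustration, not from Lo's paper, with made-up default rates): independent defaults make large pool-wide losses vanishingly rare, while a common shock (a nationwide downturn that raises every loan's default risk at once) brings them right back.

```python
import random

def pool_tail_risk(n_loans=100, p_default=0.05, p_downturn=0.0, trials=20_000):
    """Probability that more than 15% of the pool defaults in one period."""
    bad = 0
    for _ in range(trials):
        # In a nationwide downturn, every loan's default risk jumps together.
        p = 0.40 if random.random() < p_downturn else p_default
        defaults = sum(random.random() < p for _ in range(n_loans))
        bad += (defaults > n_loans * 0.15)
    return bad / trials

print(pool_tail_risk(p_downturn=0.0))   # independent defaults: tail risk near zero
print(pool_tail_risk(p_downturn=0.05))  # 5% chance of a common shock: tail risk near 5%
```

The pooling model isn't wrong within its assumptions; it simply has nothing to say about the regime where the independence assumption fails.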
Was that premise of independent default risks sound? Actually, yes: since 1933, there had never been a nationwide housing market downturn.
So it wasn’t that the model wasn’t useful; it was that the managers didn’t understand them and their limitations. If they had understood their limitations, even if they could not fix them they could prepare for them (e.g limit their exposure in specific ways.) But they didn’t– they followed the models blindly. Since the models had worked so far (and had been backtested) there was no reason to think they wouldn’t work in the future. (No housing collapse for 80 years, so…)
Lo’s final paragraph is as applicable to medicine as it is to finance; simply substitute the professions:
Quantitative illiteracy is not acceptable in science. Although financial economics [medicine] may never answer to the same standards as physics, nevertheless, managers [doctors] in positions of responsibility should no longer be allowed to take perverse anti-intellectual pride in being quantitatively illiterate in the models and methods on which their businesses depend.
Are too many quants the problem?
If the models were too complex, then the problem was too few quants running Wall Street, not too many.
In 1980 post-grads in engineering and finance made about the same money; since then, finance grads have made more and more.
By the logic of supply and demand, this suggests that the demand for finance grads is high. Fine.
But if it's true, e.g. #2, that the models are very complex and require considerable expertise, this graph should be troubling:
I probably don’t have to point out that a similar argument can be made about medicine.
Did the SEC allow too much leverage by its 2004 rule change permitting leverage requirements to go from 12:1 to 30:1, precipitating the crash?
In a January 2009 Vanity Fair article, Nobel-prize-winning economist Joseph Stiglitz (2009) listed five key "mistakes" that led to the financial crisis: "One was the decision in April 2004 by the Securities and Exchange Commission, at a meeting attended by virtually no one and largely overlooked at the time, to allow big investment banks to increase their debt-to-capital ratio (from 12:1 to 30:1, or higher) so that they could buy more mortgage-backed securities, inflating the housing bubble."
New York Times:
Over the following months and years, each of the firms would take advantage of the looser rules. At Bear Stearns, the leverage ratio–a measurement of how much the firm was borrowing compared to its total assets–rose sharply, [from 12:1] to 33 to 1. In other words, for every dollar in equity, it had $33 of debt. The ratios at the other firms also rose significantly.
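To make that ratio concrete, a back-of-envelope calculation (my arithmetic, not the Times's):

```python
# At 33:1 leverage, each $1 of equity supports $34 of assets
# ($1 equity + $33 debt), so a ~3% fall in asset value wipes
# out the equity entirely.
debt_to_equity = 33
assets = 1 + debt_to_equity      # per dollar of equity
wipeout_drop = 1 / assets        # fractional asset decline that erases the equity
print(f"{wipeout_drop:.1%}")     # prints 2.9%
```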
This paralleled the average citizen, who was similarly overleveraged.
But, in fact, that rule change didn't have any effect on these levels at all– they had already been higher for years:
The point isn’t that they weren’t perhaps overleveraged; the point is that that rule change didn’t permit it, and hence re-changing it isn’t the logical solution.
when new information is encountered, our cognitive faculties are hardwired to question first those pieces that are at odds with our mental model. When information confirms our preconceptions, we usually do not ask why.
The authors also note that the NYT has “yet to print a correction of its original stories about the rule, nor did the Times provide any coverage” of the speech by the SEC director who said, “First and most importantly, the Commission did not undo any leverage restrictions in 2004.”
If the media’s mistake is not checking the popular hypothesis against available data, our mistake is taking what we see in the media as data.
News is not data.