I think what the author is describing is simple overfitting.<p><a href="http://en.wikipedia.org/wiki/Overfitting" rel="nofollow">http://en.wikipedia.org/wiki/Overfitting</a><p>It is quite a newbie mistake for a scientist to be surprised by it. It affects every kind of modelling.<p>I thought maybe this article would talk about why economic models are worse than other kinds of models. There are issues that arise when applying scientific models to the economy: even when good models are used to predict markets, trading on those models itself distorts the markets. When multiple parties use good models to compete in a market, they distort it in a way that destroys the models' predictive power.<p>There is a great explanation by Glen Whitman of Agoraphilia that uses grocery-line wait-time predictions as a metaphor for this:<p><a href="http://agoraphilia.blogspot.com/2005/03/doing-lines.html" rel="nofollow">http://agoraphilia.blogspot.com/2005/03/doing-lines.html</a><p>See also:<p><a href="http://lesswrong.com/lw/yv/markets_are_antiinductive/" rel="nofollow">http://lesswrong.com/lw/yv/markets_are_antiinductive/</a><p><a href="http://en.wikipedia.org/wiki/Efficient-market_hypothesis" rel="nofollow">http://en.wikipedia.org/wiki/Efficient-market_hypothesis</a>
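A minimal sketch of that overfitting in Python with numpy (the straight-line "true process" and all numbers are invented for illustration): a degree-9 polynomial typically interpolates ten noisy points almost perfectly, then falls apart on fresh draws from the same process.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(0, 0.1, 10)    # true process: y = 2x + noise
    x_test = np.linspace(0, 1, 100)
    y_test = 2 * x_test + rng.normal(0, 0.1, 100)

    for degree in (1, 9):
        coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")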
This is known to anyone who's ever monkeyed with any type of machine learning: genetic algorithms, Bayesian filters, anything.<p>I agree with many of the commenters on the article: this should be common knowledge.<p>Like many commenters, I also couldn't help but think of model-based climate predictions.
This article is more about how multiple sets of parameters can fit the same data equally well. This is why economists draw a distinction between calibration and estimation. When economists say a parameter is "identified" in some estimation procedure, they mean they have an experiment or quasi-experiment that gives them a credible confidence interval for the true parameter.
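A toy illustration of that identification failure (an invented model, not economics): in y = a*b*x only the product a*b is pinned down by the data, so "calibration" can fit equally well with any pair whose product is right.

    import numpy as np

    x = np.linspace(0, 1, 50)
    y = 6 * x + np.random.default_rng(1).normal(0, 0.05, 50)   # true slope a*b = 6

    for a, b in [(2.0, 3.0), (1.0, 6.0), (0.5, 12.0)]:
        mse = np.mean((a * b * x - y) ** 2)
        print(f"a={a}, b={b}: MSE {mse:.5f}")   # identical fit for every pair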
I think that with economic models used for trading there is also another big problem: applying the model changes the system being modelled. Even if you had a perfect model of the market as it is without you trading on it, as soon as you start applying the model, the market changes... and the same holds for all the other quants doing the same with their models.<p>IMHO, it was much better when most stock-market decisions were based on "fundamentals", because that way the market was incentivising sound business decisions.
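A deliberately crude sketch of that feedback (all dynamics invented): a mean-reversion signal earns money while nobody else trades it, but once aggregate trading pushes the price back to its mean instantly, the edge it was built on is gone.

    import numpy as np

    rng = np.random.default_rng(2)

    def pnl(impact):
        """Trade a mean-reversion signal; `impact` is how strongly the
        aggregate trading itself moves the price toward the mean."""
        p, total = 0.0, 0.0
        for _ in range(10_000):
            signal = -p                    # buy below the mean, sell above it
            p += impact * signal           # our (and everyone's) trades move the price
            nxt = 0.5 * p + rng.normal()   # mean-reverting price next step
            total += signal * (nxt - p)    # executed at the already-moved price
            p = nxt
        return total

    print("model applied quietly:", round(pnl(0.0)))   # clearly positive
    print("model applied by all: ", round(pnl(1.0)))   # edge gone, roughly zero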
Great discussion! The author doesn't seem to introduce the concept of training/testing datasets, which is absolutely critical to obtaining any reasonable model. So I don't buy the author's thesis that economic models are always wrong.<p>The solution to the hypothetical problem posed in the article is to separate the historical dataset into training and testing groups. The models should be generated while only 'seeing' the training data. You will, as the author mentions, get many models that appear to fit the data. Most of these models will be garbage.<p>The fun part is when the testing data is introduced to the many models generated above. Most of the models will completely bomb, but a handful may actually predict the previously 'unseen' testing data with high accuracy. Those few models that pass the testing stage are the ones worth their salt.<p>Due to the self-aware nature of the markets, successful models probably will not stay true indefinitely, but they may well stay true long enough to be profitable. The less known your successful models are, the longer they will remain successful predictors of the market. That is why successful quant funds are notoriously secretive about their approaches. Open source would never work in finance.
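A sketch of that filtering step in Python (numpy only; the "history" here is just a random walk and the candidates are random moving-average crossover rules, both invented for illustration). Note that on a pure random walk even the survivors are luck, which is exactly why the testing stage has to be held out honestly:

    import numpy as np

    rng = np.random.default_rng(3)
    prices = np.cumsum(rng.normal(0, 1, 2000))   # random-walk stand-in for history
    train, test = prices[:1500], prices[1500:]

    def rule_pnl(series, fast, slow):
        """Crude PnL of 'long while the fast MA is above the slow MA'."""
        ma = lambda w: np.convolve(series, np.ones(w) / w, mode="valid")
        n = len(series) - slow
        pos = (ma(fast)[-n:] > ma(slow)[-n:]).astype(float)[:-1]
        return np.sum(pos * np.diff(series[-n:]))

    candidates = [(int(rng.integers(2, 20)), int(rng.integers(21, 200)))
                  for _ in range(500)]
    fit_train = [c for c in candidates if rule_pnl(train, *c) > 0]
    survivors = [c for c in fit_train if rule_pnl(test, *c) > 0]
    print(len(candidates), "candidates ->", len(fit_train),
          "fit the training data ->", len(survivors), "survive testing")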
Every model is "wrong", by definition of it being a "model" and not "reality". It's one of the few mind-opening things I learnt at university.<p>That's not a problem if you take it as an incentive to improve how much you know about the real world. It is a problem when you put the model before the people and say that "models got us in trouble because of calibration problems".<p>An economic crisis is not an unavoidable natural disaster; it's people screwing up other people.
This article avoids terminology, data, and any specifics of the problem, to the point that it is useless.<p>You might be fooled into thinking it says something useful if you don't know what a 'model' means in any science.<p>So what is the point of the article? The author is trying to sell his book, in which he most probably makes people who don't know anything about economics feel good, or pushes an ideological agenda.
The author is only partially right. The mistake is in defining a closed system that is in fact not closed, and then curve-fitting.<p>For instance, a great part of the growth of the last 100 years has come from man's ability to harness energy from fossil fuels. If your timeline is narrow enough, you can disregard the fact that fossil fuels are not unlimited and project a continued rise in extraction.<p>Another example is the baby boom, and the introduction of women into the paid workforce, which led to a continued rise in property prices.<p>One more is the introduction of laws that suddenly compel people to invest in the stock market. It leads to short-term asset inflation but generally makes for worse investment all round.<p>That said, it is fitting that an economy is well modelled using the principles of hydraulics. See <a href="http://en.wikipedia.org/wiki/MONIAC_Computer" rel="nofollow">http://en.wikipedia.org/wiki/MONIAC_Computer</a>
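A small illustration of that window-narrowing mistake (synthetic numbers): the early part of a capped, S-shaped "extraction" curve looks exponential, and an exponential fitted only to that part cheerfully projects right past the resource limit.

    import numpy as np

    t = np.arange(100)
    extraction = 100 / (1 + np.exp(-0.1 * (t - 50)))   # logistic curve, capped at 100

    # Fit an exponential (a straight line in log space) to the early, rising part only.
    coeffs = np.polyfit(t[:40], np.log(extraction[:40]), 1)
    forecast = np.exp(np.polyval(coeffs, t))

    print("actual at t=99:  ", round(float(extraction[-1]), 1))   # near the cap
    print("forecast at t=99:", round(float(forecast[-1]), 1))     # far beyond it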
The problem with economic models, or most modelling, is not the methods; it is usually the quality of the features or parameters. Even in a much simpler problem, no matter how good the method, if you don't have the right parameters your model will suck. And economic models deal with an open-world system whose parameters keep changing, so the challenge is not the methods but discovering quality parameters/features. That requires not just the skills of modellers but many other disciplines.
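A quick numerical version of that point (synthetic data): plain least squares handed the one feature that actually drives the outcome beats the same method handed twenty irrelevant ones, and no cleverer method can rescue the second case.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 500
    good = rng.normal(size=n)               # the feature that actually drives y
    junk = rng.normal(size=(n, 20))         # twenty irrelevant features
    y = 3 * good + rng.normal(0, 0.5, n)

    def r_squared(X, y):
        X = np.column_stack([np.ones(len(y)), X])     # add an intercept column
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
        return 1 - np.var(y - X @ beta) / np.var(y)

    print("R^2 with the right feature:", round(r_squared(good, y), 3))   # near 1
    print("R^2 with 20 junk features: ", round(r_squared(junk, y), 3))   # near 0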
<i>Financial-risk models got us in trouble before the 2008 crash</i><p>Is this accurate? I remember reading that all the alarms were going off, they were just ignored or the models were "adjusted".
Carter's papers on the subject:<p>Ballester, P. J., & Carter, J. N. (2006). Characterising the parameter space of a highly nonlinear inverse problem. Inverse Problems in Science and Engineering, 14(2), 171-191. doi:10.1080/17415970500258162.<p>Ballester, P., & Carter, J. (2007). A parallel real-coded genetic algorithm for history matching and its application to a real petroleum reservoir. Journal of Petroleum Science and Engineering, 59(3-4), 157-168. doi:10.1016/j.petrol.2007.03.012.
Actually, even a correctly parametrized predictive model suffers from the oracle paradox: if you have an "oracle" capable of anticipating an actor's decision, and that actor knows the prediction, the actor can make the prediction false.<p>In economics, some actors have an interest in falsifying the prediction, even when doing so is costly for them: it is often valuable to be unpredictable.
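A minimal sketch of that paradox (all behaviour invented): a predictor that has learned an actor's habit is right about 90% of the time while its prediction stays secret, and 0% once the actor gets to read it and do the opposite.

    import random

    random.seed(5)

    def accuracy(actor_sees_prediction, trials=1000):
        hits = 0
        for _ in range(trials):
            prediction = "buy"        # oracle has learned: actor buys 90% of the time
            habit = "buy" if random.random() < 0.9 else "sell"
            if actor_sees_prediction:
                action = "sell" if prediction == "buy" else "buy"   # defy the oracle
            else:
                action = habit
            hits += (action == prediction)
        return hits / trials

    print("prediction kept secret:", accuracy(False))   # ~0.9
    print("prediction published:  ", accuracy(True))    # 0.0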
Is this really surprising? I would have thought it self-evident, as these kinds of models would seem to be highly chaotic.<p>It's really no different from the meteorology simulations in the 1960s that first uncovered the butterfly effect.<p><a href="http://en.wikipedia.org/wiki/Butterfly_effect#Origin_of_the_concept_and_the_term" rel="nofollow">http://en.wikipedia.org/wiki/Butterfly_effect#Origin_of_the_...</a>
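The classic one-liner version of that sensitivity, using the logistic map in its chaotic regime: two starting points differing by 1e-10 agree for a few dozen steps and then have nothing to do with each other.

    x, y = 0.4, 0.4 + 1e-10
    for step in range(1, 61):
        x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)   # chaotic logistic map
        if step % 15 == 0:
            print(f"step {step}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.1e}")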
A "scientist" finds by cross-validation that his model is over fitting the data. Luckily it wasn't published by a reputable source of science journalism.<p><a href="http://en.wikipedia.org/wiki/Cross-validation_(statistics)" rel="nofollow">http://en.wikipedia.org/wiki/Cross-validation_(statistics)</a><p>Also who the heck is Wilmott? He just pops up in the last paragraph with no introduction.
Macroeconomics resembles a science in exactly two ways: it looks at history, and it makes predictions (or prescribes courses of action; these are equivalent).<p>Greek mythology resembled a science in those same two ways.
Isn't this just an instance of a chaotic system, in which parameter settings that almost match the historical data will inevitably diverge because of sensitive dependence on initial conditions?
From scientist to scientist, a little secret: all models are always wrong! If a model were correct, it would be as detailed as reality and thus just as useless.
Economic models are wrong because <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Economic_mobility" rel="nofollow">https://secure.wikimedia.org/wikipedia/en/wiki/Economic_mobi...</a> and <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Social_mobility" rel="nofollow">https://secure.wikimedia.org/wikipedia/en/wiki/Social_mobili...</a> are mutually exclusive