Stress tests, and risk models in general, are subject to the same garbage-in, garbage-out foibles as any computer program. They necessarily ignore, intentionally or otherwise, any number of variables. Economics, after all, has been described as the art of selecting which factors to ignore.
Another problem is confusion over the limitations of hard and soft sciences. Mathematical models for chemistry and physics work because particles have no choice but to obey strict laws. People, on the other hand, make it their business to make the best of a situation. As a result, a grand new scheme devised by policymakers will put the sheeple in place; but the innovators, the risk-takers, and those pinched hard enough to feel necessity, the mother of invention, will branch off in directions the master planners had not considered. Dowd describes this fallacy as “the naïve belief that markets are mathematizable.”
When it comes to prediction, experts fare about as well as games of chance, with odds on a par with the proverbial monkey throwing darts. Dowd quotes a number of crocks of mumbo-jumbo uttered by Fed chairmen just before things went the opposite way, then tempers the criticism: “I don’t wish to single out Bernanke for particular criticism; most other officials were saying similar things, but he was the most senior [during the Great Recession], and even he is no better able to predict the future than most of the rest of us.”
That said, people do themselves a disservice by putting their trust in models made by such experts. Experts usually compile their risk programs from available data, which for the most part describe normal conditions. Disasters typically involve rogue variables, or common factors taking extreme twists.
The mathematical problems are compounded when human elements, like greed, enter the picture. For example, Dowd points out that a number of people with Ph.D.’s in physics became quants because of the lure of money. These mathematical wizards knew they could turn a profit by looking at the Fed’s stress tests and conducting monkey business in areas outside of or undervalued by the assessment. The practice has been described as “stuffing risk into the tail” of bell-curve models. With an edge akin to that of card-counters in a casino, innovative quantitative artisans ensure that “the true risks faced by financial institutions will always be greater than the measured risks.”
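The tail-stuffing gambit can be made concrete with a toy calculation (my own hypothetical numbers, not from the book). A fixed-quantile measure such as 95% value-at-risk (VaR) looks only at one point on the loss distribution, so two portfolios can report identical VaR while one hides catastrophic losses just beyond the cutoff:

```python
def var_95(losses):
    """95% VaR: the loss at the 95th percentile of equally likely scenarios."""
    ordered = sorted(losses)
    idx = int(0.95 * len(ordered))  # index just past the 95th percentile
    return ordered[idx]

def expected_shortfall_95(losses):
    """Average loss over the worst 5% of scenarios (what VaR ignores)."""
    ordered = sorted(losses)
    idx = int(0.95 * len(ordered))
    tail = ordered[idx:]
    return sum(tail) / len(tail)

# 100 equally likely loss scenarios each. Portfolio B "stuffs risk into
# the tail": its worst outcomes are ruinous, yet its reported 95% VaR is
# identical to Portfolio A's.
portfolio_a = [1.0] * 95 + [10.0] * 5
portfolio_b = [1.0] * 95 + [10.0, 50.0, 100.0, 200.0, 400.0]

print(var_95(portfolio_a), var_95(portfolio_b))        # 10.0 10.0
print(expected_shortfall_95(portfolio_a))              # 10.0
print(expected_shortfall_95(portfolio_b))              # 152.0
```

A quant rewarded on a VaR-based test has every incentive to build Portfolio B: the measured risk is unchanged while the true risk balloons, which is precisely why measured risk understates actual exposure.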
Then there are psychological variables. Executives can be persuaded to bypass their better judgment if a model created by scientists and experts tells them to do otherwise. Dowd explains, “Most risk models, and regulatory risk models in particular, are textbook examples of the ritualistic fetishes usually associated with primitive tribes. A fetish can be described as irrational attachment to an object – in this case, a risk model.” At the very least, the risk model provides a false sense of security.
Models, like regulations, introduce market distortions. People under the influence of the model have inducements to act in ways that may not be consistent with their otherwise better judgment. Imposing penalties on one way of coping with a bad situation may push decision-makers toward alternatives that are worse for the environment, hungry children, and so on, simply because those alternatives carry lower fines. Then again, one can never tell what would have happened without the model. A good, albeit imperfect, model might appear bad if its predictive powers give people an incentive to change course en masse at the first signs of a crisis.
Dowd argues the Fed’s stress tests are guilty of all of the above shortcomings, with the added bonus that their regulatory powers come with a cost of compliance compounded by politics. Specifically, “The Fed’s stress-test scenarios fail to address a considerable number of major risks credibly identified by independent experts, in large part because they are dominated by a quantitative macroeconomic mindset that has only a very limited understanding of other relevant considerations, for example, accounting and banking issues.” Even worse, the “one that gets you that you just don’t see” is more likely than not to be the Fed itself. While many threats can be caught before they build into catastrophes, the Fed’s whimsical jerking of interest rates may, over the past few decades, have been the source of more domestic economic havoc than anything the stress tests are assessing.
Perhaps the worst part of the relationship between the Fed’s stress tests and the financial crises the public assumed they were designed to prevent is something with a nefarious scent. When any layman could see the US population was overextending itself as lenders took on bad risk, the models were giving the most poorly capitalized banks the best grades. Dowd suggests “a sense of history, judgment, or rules of thumb” would have given better results. But banks whose risk models were working better than the Fed’s were required to get with the program, use tools that gave faultier predictions, and waste money to keep the regulators happy.
Making everybody use the same stress test eliminates competition in risk management, stultifying innovation and removing the selection pressure that would allow banks with superior models to outlive those with botched algorithms. If the model is flawed, and the Fed’s is, then everybody has to make the same mistakes. According to Dowd, “Market stability requires that players have different strategies so that some are willing to buy when others wish to sell. Thus, the key to market stability is not some magic risk management strategy – those don’t exist – but the presence of those willing to take contrary positions.”