posted: May 7, 2022
tl;dr: As a pragmatic realist, I have good reasons to doubt all models of reality...
“No models can explain the types of patterns we are having.” - Federal Reserve Chair Alan Greenspan in 1992.
“I suspect that the channels that we’re using now, which principally are asset prices, may not be working as well as our models say.” - Federal Reserve Governor, now Chair, Jerome Powell in 2012.
“But there is layering, model upon model, to try to get these effects” - Federal Reserve economist Seth Carpenter in 2012. (All quotes from The Lords of Easy Money, by Christopher Leonard.)
It’s models (instead of turtles) all the way down at the Federal Reserve, apparently. No wonder they and many other academic economists (the primary source of hires into the Federal Reserve) were so wrong in forecasting the economic impact of the COVID-19 pandemic and of their own response to it, completely missing the resulting consumer price inflation. The Fed economists would have done better had they turned off their computers, ventured forth from the Eccles Building, and talked to people in the real world. They would have discovered a severe supply chain shock, exacerbated by lockdowns, at a time when the Fed and the U.S. government were working in tandem to stimulate demand.
The Federal Reserve is hardly the only institution to make bad policy decisions based on faulty models. The COVID-19 pandemic has been rife with pessimistic models that have driven politicians to implement measures that have been severely detrimental to schoolchildren, the economy, mental health, and arguably physical health. Then, of course, there are the many climate models, some of which are used to argue for tens of trillions of dollars of investment in the hopes of avoiding forecasted bad outcomes. In the financial world there are certainly ways to make money using quantitative analysis, but if someone had a highly accurate computer model of the entire stock market, that person or institution would be fabulously wealthy. Instead, just when someone thinks they’ve figured it out, we get situations like the blowup of Long Term Capital Management.
For my entire career as a computer professional I’ve been skeptical of attempts to use computers to construct models of complex, at times chaotic, systems. The power of computers to perform billions of mathematical calculations per second can lull their users into a false sense of omnipotence. The problem with the models is not the computers, it is the users. It’s the age-old problem of “garbage in, garbage out”. Again and again I see these major problems with computer models:
Faulty assumptions
The most interesting aspect of any computer model to me is the assumptions that went into building it. The assumptions, to a large degree, determine the result. If you assume that mask mandates cause a significant reduction in a virus’s transmissibility across an entire population of human beings, then that will be reflected in the code that implements your model and will show up in the results.
Complex systems have many variables that affect the outcome. Getting their initial values 100% correct, and then modeling their effects on each other 100% correctly, is an impossible task except for a godlike being. It’s hard enough to solve the three-body problem, let alone a hundreds-of-bodies problem. Simplifications will always be made, and the biases of the modeler will show up in their assumptions.
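To make this concrete, here is a minimal sketch (not any real epidemiological model; every parameter value here is an assumption I invented for illustration) of a simple SIR-style simulation in which a single assumed parameter, how much an intervention reduces transmission, largely determines the headline result:

```python
# Toy SIR-style outbreak simulation. The point is not the epidemiology:
# it is that the modeler's assumption (transmission_reduction) drives
# the output far more than anything measured from the real world.

def simulate_outbreak(r0, transmission_reduction, days=180,
                      population=1_000_000, initial_infected=100,
                      infectious_days=7):
    """Discrete-time SIR sketch; all parameter values are assumptions."""
    beta = (r0 / infectious_days) * (1.0 - transmission_reduction)
    gamma = 1.0 / infectious_days
    s = population - initial_infected
    i = float(initial_infected)
    r = 0.0
    for _ in range(days):
        new_infections = beta * i * s / population
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return r + i  # total ever infected

# Same model, same starting data; only the baked-in assumption differs.
pessimistic = simulate_outbreak(r0=2.5, transmission_reduction=0.0)
optimistic = simulate_outbreak(r0=2.5, transmission_reduction=0.5)
print(f"assume no effect:  {pessimistic:,.0f} infected")
print(f"assume 50% effect: {optimistic:,.0f} infected")
```

The two runs differ by hundreds of thousands of infections, yet nothing changed except a number the modeler chose before the simulation ran.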
Failure to account for the butterfly effect
If the COVID-19 pandemic started from an accidental lab leak that was not immediately noticed at the time, a scenario I think is entirely plausible (see my review of Viral: The Search for the Origin of COVID-19), then the pandemic itself arose from a real-world butterfly effect. The butterfly effect happens when a tiny change in input conditions cascades and gets amplified into a huge effect downstream, much later.
“Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” asked meteorologist Edward Lorenz. In the case of COVID-19, a lab worker at the Wuhan Institute of Virology might have failed to fully seal his or her laboratory suit on one occasion and been infected by a virus under study. That unknowing carrier may then have infected others in the general public, thereby starting a pandemic that has now killed millions of people worldwide. Had there been a tiny change in input conditions, i.e. a properly sealed lab suit, millions of people might still be alive today.
Models tend not to take the butterfly effect into account. In fact, because butterfly effects are highly non-linear and hard to predict, models can’t take the butterfly effect into account and yield consistent, repeatable results. Yet butterfly effects and black swan events do happen. Major volcanoes erupt, spewing ash into the atmosphere where it circulates for years, or meteors strike Earth, thereby influencing the climate. New technologies such as nuclear power are invented, which can dramatically alter decades-long trends. The timing of butterfly effects is hard to predict, but given a long enough time period, they can and do happen. When they do, they invalidate existing models.
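The divergence described above is easy to demonstrate. Here is a minimal sketch using the logistic map, a textbook one-line chaotic system (the starting value and perturbation size are arbitrary choices for illustration): two inputs differing by one part in a billion end up nowhere near each other after a few dozen iterations.

```python
# The logistic map x -> r * x * (1 - x) is chaotic at r = 4.0: tiny
# input differences grow exponentially until the trajectories are
# completely uncorrelated -- a one-line butterfly effect.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Return the full trajectory of the logistic map from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

base = logistic_trajectory(0.3)           # baseline input
perturbed = logistic_trajectory(0.3 + 1e-9)  # the "butterfly"
divergence = [abs(a - b) for a, b in zip(base, perturbed)]

print(f"initial difference: {divergence[0]:.1e}")
print(f"largest difference: {max(divergence):.3f}")
```

A model whose output swings this wildly on a one-part-in-a-billion input change cannot, even in principle, give consistent forecasts when the inputs are only approximately known.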
Math and programming errors
As I described in my Spreadsheets (Arch.) post, spreadsheets of even modest size are error-prone, and I’ve found errors in many spreadsheets in my professional career. Complex software programs are even more error-prone. I’ve spent my life debugging software, and would never claim that any program of moderate complexity is entirely bug free.
Anyone who uses software should agree with this statement: software has bugs. Those bugs can be errors of logic that are visible with a close code inspection, or they can be buried so deep in the underlying software layers or even hardware (such as floating-point math implementations, which cause imprecision to accrue in complex calculations) that they are impossible to detect unless you can compare the actual output to the expected output. Yet the point of a complex model performing many trillions of calculations is that you are relying upon the computer itself to tell you what the output is: there’s no way to check it via other means. You just have to believe that the model is implemented 100% correctly. I don’t believe it, because I’ve never seen it.
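The floating-point imprecision mentioned above is not hypothetical. Here is a small sketch: summing 0.1 ten million times with a naive running total drifts measurably from one million, while a compensated (Kahan) summation, which explicitly carries the rounding error forward, stays essentially exact. (The values and iteration count are arbitrary choices for illustration.)

```python
# 0.1 has no exact binary representation, and each addition near a
# large running total rounds away a little more; over ten million
# additions the errors accumulate into a visible drift.

def naive_sum(value, n):
    total = 0.0
    for _ in range(n):
        total += value
    return total

def kahan_sum(value, n):
    """Compensated summation: track the lost low-order bits explicitly."""
    total = 0.0
    compensation = 0.0
    for _ in range(n):
        y = value - compensation
        t = total + y
        compensation = (t - total) - y
        total = t
    return total

n = 10_000_000
naive = naive_sum(0.1, n)
kahan = kahan_sum(0.1, n)
print(f"naive: {naive!r}")   # drifts away from 1,000,000
print(f"kahan: {kahan!r}")   # essentially exact
```

If an error this easy to demonstrate hides inside a three-line loop, imagine what hides inside a model performing trillions of heterogeneous calculations.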
There are better solutions than modeling
One better solution is to not do any modeling, and to instead focus on measuring what is actually happening in the real world in segments of the population. The Oxford Centre for Evidence-Based Medicine did exactly this in the very early days of the COVID-19 pandemic by compiling studies of groups of people where everyone was exposed to the SARS-CoV-2 virus. The measured Infection Fatality Rate, and the population death rates projected from it, were much lower than what other epidemiologists were projecting based upon models and the use of much higher values for the Case Fatality Rate. This provided much reassurance to me and anyone else who paid attention to what the Oxford CEBM was measuring, instead of what other scientists were modeling.
Another better solution is to examine history for closely analogous situations, to determine as precisely as possible what happened back then, and how the situation resolved itself. The Spanish flu of the late 1910s was more deadly than COVID-19, yet impacted life far less for those Americans who did not succumb to the virus. What lessons could we have learned from that pandemic? We know from long-term historical climate studies that the Earth’s temperature has on multiple occasions been higher than it is now, when human behavior was not a major factor. What caused temperatures to cool again? What butterfly effects occurred in the past that had a major impact on temperature, and might those butterfly effects happen again in the future? It’s hard to model butterfly effects, but given enough time, history teaches us that certain butterfly effects will eventually happen. Now that humans are impacting climate, there will be human-originated butterfly effects as well.
It may seem strange for a “computer guy” such as myself to recommend not using computers to model highly complex systems. There’s no way to stop people from building models, of course; instead, I put my emphasis on critiquing those models, finding their flaws, and paying attention to what the historical record tells us.