Forecast models have been misused in public and environmental health for decades, but they have never before crested public awareness quite the way they have in the time of COVID-19. People accustomed to seeing forecasts of things that are remote and abstract in time, place, and consequence are suddenly being exposed to how the sausage is made in predictive forecasting, and many do not like what they’ve seen.

Many policy analysts (myself included) have been critical for decades of the way forecast models are incorporated into governmental decision making, arguing over the validity of forecasts projecting the impacts of tiny changes in air pollution, radiation, and chemical exposure; predicting species endangerment; predicting transit system ridership; predicting “peak oil;” predicting manmade climate change; and much more. COVID-19 has brought the problem out of the tall weeds of policy analysis and into everyone’s living room, where people a) have more time on their hands than usual, and b) have suddenly realized that putting faith in model projections is more than an abstract concern for policy wonks.

To be clear, computer modeling of complex systems has its place, which is mostly in the computer lab, where one can tinker with one or more variables and pit model against model to see which best explains something in the real world. That’s very valuable. The problem occurs when modeling escapes the lab and is misunderstood and abused by policymakers and the public. Unfortunately, space is limited, so here are a few things to understand about models.

The first point should be obvious: computer models are gross simplifications of reality (the technical term is abstraction). Consider a picture of a mouse. The picture tells you a lot of things, but really very little about the biology of mice. To understand that biology, you must reduce the mouse into ever tinier aspects of mousehood: its shape, its chemical composition, its biochemistry, behavior, capabilities, and so on, ad infinitum. Mickey Mouse, for example, is an abstraction of a mouse. When you see Mickey, you see a mouse, but in fact Mickey tells you remarkably little (and a lot that’s not realistic) about mice. As the great astrophysicist George O. Abell explained in my early science education, to truly model something as simple as a mouse, you would have to have the knowledge to create the mouse, and humanity is far from doing that even for as small a thing as a virus (we still are, and that was 40 years ago).
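A toy sketch makes the point concrete. The hypothetical MouseModel below is my own illustration, not anyone’s actual research code: it “models” a mouse with exactly three attributes and silently discards everything else.

```python
# A hypothetical illustration (invented for this post, not real research
# code): a "model" of a mouse that captures exactly three attributes and
# discards everything else that makes a mouse a mouse.
from dataclasses import dataclass

@dataclass
class MouseModel:
    mass_grams: float
    body_length_cm: float
    heart_rate_bpm: int
    # Biochemistry, behavior, genetics: all omitted. The model can only
    # answer the narrow questions it was built to answer.

mickey = MouseModel(mass_grams=20.0, body_length_cm=8.0, heart_rate_bpm=600)
print(mickey)
```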

The second point is that the farther a model moves from the tiniest of things, the less reliably it reflects the reality of what you are studying, because in modeling, errors accumulate. All measurement includes error, and every added variable brings its own. So the more complex the model, the greater the uncertainty becomes.
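To see how errors compound, consider this minimal sketch, with numbers invented purely for illustration: a quantity is projected forward by repeatedly applying a growth rate that is only known to within plus or minus five percent, and the spread of plausible outcomes widens with every step.

```python
import random

# Monte Carlo sketch of error accumulation (all numbers invented):
# each projection step applies a noisy estimate of the true growth rate,
# and the band of plausible outcomes widens as the chain gets longer.

def project(initial, true_rate, steps, rate_error=0.05, trials=10_000):
    """Project `initial` forward `steps` times, drawing a noisy rate each step."""
    results = []
    for _ in range(trials):
        value = initial
        for _ in range(steps):
            noisy_rate = true_rate * (1 + random.uniform(-rate_error, rate_error))
            value *= noisy_rate
        results.append(value)
    results.sort()
    # Report the middle 90% of outcomes as a rough uncertainty band.
    return results[int(0.05 * trials)], results[int(0.95 * trials)]

for steps in (5, 10, 20, 40):
    low, high = project(initial=100.0, true_rate=1.1, steps=steps)
    print(f"{steps:2d} steps: 90% of runs fall between {low:10.1f} and {high:10.1f}")
```

Each individual measurement error is small; it is the chaining of many error-bearing steps that blows up the uncertainty.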

The third thing to understand is that modeling complex things goes well beyond looking at variables we can actually measure, especially if we’re trying to forecast. Instead, we have to approximate those variables, which entails a variety of assumptions. (Indeed, even measuring the variables you can measure involves a host of assumptions about your ability to accurately measure what you’re looking at.) Assumptions are inherently subjective, which renders model outputs relatively useless as forecasting tools. To be fair, that’s why computer modelers talk about “projections” rather than “predictions,” a nuance that quickly gets lost in public policy discussion.
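The sensitivity to assumptions is easy to demonstrate. The following sketch, a deliberately naive branching model with invented numbers (nothing like the real epidemiological models), runs the same projection under three different assumed values of a parameter that cannot be measured directly, the basic reproduction number R0:

```python
# A deliberately naive branching projection (invented numbers, far cruder
# than real epidemiological models) run under three assumed values of R0,
# a parameter that cannot be measured directly and must be estimated.

def projected_infections(population, r0, generations=10, seed_cases=100):
    """Each generation multiplies current cases by R0, capped so the
    running total never exceeds the population."""
    cases = seed_cases
    total = seed_cases
    for _ in range(generations):
        cases = min(cases * r0, population - total)
        total += cases
        if total >= population:
            break
    return int(total)

for assumed_r0 in (1.5, 2.5, 3.5):
    total = projected_infections(population=10_000_000, r0=assumed_r0)
    print(f"assumed R0 = {assumed_r0}: ~{total:,} projected infections")
```

The machinery is identical in every run; only the assumption changes, and the projected totals span roughly three orders of magnitude.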

COVID-19 has brought these points home to people in a way nothing has before.

There is only space here for one example, though there have been many: models of COVID-19 mortality were produced almost daily, even as policymakers instituted wide-reaching restrictions on people’s daily lives.

The Washington Post has a good, accessible article on the evolution of modeled death estimates from COVID-19. The article is long, but well worth reading. The figure below, in particular, summarizes the evolution of one of the most relied-upon models, from the Institute for Health Metrics and Evaluation (IHME) at the University of Washington. As you can see, the estimated mortality from COVID-19 shifted massively over time as some of the variables discussed above were clarified by the incorporation of new data:

[Figure: Forecast models encounter reality]

As the figure shows, the plausible modeled range of fatalities from COVID-19 exceeded 150,000 deaths in the United States in early versions of the model, but the estimates were rapidly revised downward over a matter of days in April as the model incorporated newer and better information. Even now, as the Post notes, there are battling models that generate quite different estimates of COVID-19 mortality in the US.

All of this would be relatively uninteresting to most people if, instead of COVID-19, scientists were modeling the lethality of, say, a virus affecting a particular species of, well, mice. But with people craving any kind of certainty they can get their hands on, and policymakers crafting policy in the fog of war, forecast models have to be taken with more than a grain of salt; an entire salt lick would be more appropriate. Hopefully, this new public insight into the limitations of computer modeling of complex systems will stay with people as they evaluate future forecasts of everything from health to the environment to the economy. As superstar scientist Dr. Anthony Fauci, head of the United States National Institute of Allergy and Infectious Diseases (NIAID), put it recently, “Models are really only as good as the assumptions that you put into the model.”