Détecteur de rumeurs

No, epidemiological models are not used to predict the future

Détecteur de rumeurs articles are written by science journalists from the Agence Science-Presse. The Fonds de recherche du Québec and the Bureau de coopération interuniversitaire are partners of the Détecteur de rumeurs.

Author: Agence Science-Presse - Maxime Bilodeau

Epidemiological models have become an easy target for people who want to deny the validity of lockdowns. Some of these models, from Europe to North America, “predicted” more deaths than actually occurred. Should they be scrapped?

On March 16, British epidemiologist Neil Ferguson and his Imperial College London team released a frightening epidemiological model. It predicted that COVID-19 could kill about half a million people in the United Kingdom and over 2 million in the United States, unless strict measures were taken to restrict its spread. Ten days later, a similar study by the same College made equally gloomy forecasts for many other countries. It projected 326,000 deaths for Canada.

Worst-case scenarios

Four months later, it’s clear that Imperial College far overestimated the number of deaths. Even though the pandemic isn’t over, the current situation is light-years away from the numbers announced.

Except that these models were presented in mid-March as worst-case scenarios: “if no action were taken against the epidemic, it could reach…”. In other words, these were computer simulations of what would happen under certain assumptions: that the different countries kept their borders open, imposed little or no lockdown, did little screening, etc.
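To make this concrete, here is a minimal, hypothetical SIR-type simulation. It is not any of the models cited in this article, and every parameter value (contact rate, recovery rate, fatality rate, population) is an illustrative assumption. It simply shows how changing a single behavioural assumption, the contact rate, changes the projected toll.

```python
# Minimal SIR sketch (illustrative only; all parameters are assumptions).

def sir_deaths(beta, gamma=0.1, ifr=0.01, population=1_000_000, days=1500):
    """Run a basic SIR model with daily time steps and return the
    projected deaths (total infections * infection fatality rate)."""
    s, i, r = population - 1.0, 1.0, 0.0  # susceptible, infected, recovered
    for _ in range(days):
        new_infections = beta * s * i / population
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    total_infected = population - s
    return total_infected * ifr

# "Worst case": behaviour unchanged (reproduction number beta/gamma = 2.5)
unmitigated = sir_deaths(beta=0.25)
# Lockdown scenario: contact rate roughly halved (reproduction number = 1.3)
mitigated = sir_deaths(beta=0.13)

print(f"unmitigated scenario: {unmitigated:,.0f} projected deaths")
print(f"mitigated scenario:   {mitigated:,.0f} projected deaths")
```

The worst-case figure is several times higher than the mitigated one. Neither number is a prediction: as soon as real populations change their behaviour, the assumption baked into `beta` no longer holds, and the projection with it.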

The same week they were published, the assumptions on which the models were based were no longer valid. On March 18, the Canada-U.S. border was closed. On March 23, Québec ordered the closing of all non-essential businesses. As a result, two weeks later, the different simulations presented by Canada’s public health authorities ranged between 11,000 and 22,000 deaths by the end of the pandemic. In mid-July, Canada neared 9,000 deaths.

As for Great Britain, the simulation was released at a time when the country still seemed to be following a controversial policy of “herd immunity”: no lockdown and the normal continuation of activities. The Ferguson model is often cited as the cause of Prime Minister Boris Johnson’s change of direction a few days later.

Before the lockdown measures were imposed worldwide, Ferguson’s “pessimistic” forecast wasn’t the only one circulating. In the United States in mid-March, Dr. Anthony Fauci talked about “a few hundred thousand” deaths. On March 26, after two weeks of lockdown in several American states, a model by the University of Washington’s Institute for Health Metrics and Evaluation was released. This model, which would be cited by the White House, projected 100,000 to 240,000 deaths. That projection depended on a scenario in which a severe lockdown would be maintained throughout the country until the summer.

The oldest official forecast available on the Centers for Disease Control and Prevention (CDC) website is from April 13, after a one-month lockdown. The scenarios then fluctuated between 60,000 and 150,000 deaths by the end of May. The United States passed the 130,000-death mark on July 7, and the toll could reach 150,000 before September.

The criticisms of the Ferguson simulation’s computer code also received a response. On June 8, the British journal Nature published observations by experts who had tested the model and considered it reliable.

Models designed to be contradicted

Whether they’re pessimistic or optimistic, these models are always published in the hope they’ll be contradicted. They aren’t intended to quantify the precise number of cases, hospitalizations and deaths to come. A study published last year reviewed the epidemiological models published during the Ebola epidemic of 2014-2015. It concluded that actions taken by the populations always change the picture. That means these models can’t predict the future beyond a horizon of one to two weeks.

It’s too soon to tell, but it’s possible this conclusion is just as valid for the current reopening and second-wave scenarios. Everyone is trying to sketch the outlines of what comes next, but they can only do it imperfectly. The big unknown is how millions of people will behave.

“All models are wrong, but some are useful.” This adage is attributed to statistician George Box and mathematician Norman Draper, in their work Empirical Model-Building and Response Surfaces (1987).

That’s the paradox of prevention in a public health context, Benoît Mâsse of the Université de Montréal School of Public Health wrote on June 9. If measures “are effective in controlling the epidemic, they give the impression they weren’t necessary. We then are left vulnerable to another wave of infection”. He was responding to an “economic note” by the Montreal Economic Institute (MEI), which challenged the rationale of the lockdown and accused the government of relying on the “inaccurate mathematical model” of Imperial College London.

In any science that produces models, new models build on their predecessors’ successes and errors. Initial representations give way to more complex ones. For example, we now know much more about the circumstances that cause the risk of infection to vary.
