This morning I discovered that there is no scientific evidence to support the two-metre distancing rule. Imagine my surprise. Those Germans wearing pool noodles on their heads at the café were idiots after all.
But that’s not the flaw in the news. It assumes that scientific evidence is needed. It assumes scientific evidence has been correct about anything to do with Covid-19 so far…
The International Institute of Forecasters has a blog post out, reviewing how accurate our predictions about Covid-19 ended up being.
The headline is unequivocal: “Forecasting for COVID-19 has failed”. Having read the thing, I’d add “as miserably as usual”.
But it’s the chart below that’s fascinating. It shows the number of intensive care unit deaths for three American states, in red, and compares them with predictions made by models.
The black line is the prediction, with a 95% prediction interval shown as a shaded area around that line. The idea is that there’s a 95% chance the red line will fall within the shaded area, given what was observed before the prediction was made.
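To make that concrete, here’s a toy sketch of what a 95% prediction interval is supposed to deliver. The numbers are entirely made up for illustration, and this is a simple bell-curve model, not any real epidemic model: if the model’s assumptions actually hold, roughly 95% of future observations should land inside the shaded band.

```python
import random

random.seed(42)

# Hypothetical "true" process: outcomes are normally distributed.
MU, SIGMA = 100.0, 10.0

# A 95% prediction interval for a normal model: mean +/- 1.96 standard deviations.
lower, upper = MU - 1.96 * SIGMA, MU + 1.96 * SIGMA

# Simulate many future outcomes and count how often they fall inside the band.
trials = 100_000
inside = sum(lower <= random.gauss(MU, SIGMA) <= upper for _ in range(trials))
coverage = inside / trials

print(f"empirical coverage: {coverage:.3f}")  # close to 0.95 when the model is right
```

That’s the promise on the tin: when the model matches reality, the band catches the red line about 95 times in 100.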
Source: International Institute of Forecasters
As you can see, the red lines of actual data were wildly different from what models anticipated. The shape, the time period, the severity and everything else was wrong, wrong, wrong.
It’d be funny if these weren’t death statistics.
The blog post goes through the details of why the models were so wrong. It’s a long list. But let’s focus on a different point. Let’s question our own judgement instead of gloating at the mistakes of others.
How predictable was the impressive failure of the predictions? If you had tried to forecast whether the predictions would be correct, what would you have expected? What percentage chance would you have given the red line of falling within the grey shaded area?
I’ll give you a hint. It isn’t 95%…
Let’s start with the track record of similar predictions in the past. The blog post points out how similar Covid-19 predictions were to other pandemic predictions:
Failure in epidemic forecasting is an old problem. In fact, it is surprising that epidemic forecasting has retained much credibility among decision-makers, given its dubious track record.
Modeling for swine flu predicted 3,100-65,000 deaths in the UK. Eventually only 457 deaths occurred. The prediction for foot-and-mouth disease expected up to 150,000 deaths in the UK and led to slaughtering millions of animals. However, the lower bound of the prediction was as low as only 50 deaths, a figure close to the eventual fatalities.
Despite these obvious failures, epidemic forecasting continued to thrive, perhaps because vastly erroneous predictions typically lacked serious consequences.
In other words, it shouldn’t surprise you that the forecasts were wrong.
And even in real time, the wild divergence between the different models’ predictions for Covid-19 should’ve told us the game was up:
Predictions for number of US deaths during week 27 (only ~3 weeks downstream) with these 8 models ranged from 2419 to 11190, a 4.5-fold difference, and the spectrum of 95% confidence intervals ranged from fewer than 100 deaths to over 16,000 deaths, almost a 200-fold difference.
And people think these models are useful for policymaking?
Well, I suppose they are useful in that politicians can pick and choose the model which gives the results they like. Think about what incentive that provides for ambitious scientists.
It seems to me that the failure rate of those promising to be 95% sure about Covid-19’s impact is close to 100%. Yes, I’m suggesting that even those who get it right by luck end up being wrong. I’ll explain why shortly.
If policymaking is based on impressively bad forecasts that promise impressive levels of accuracy, we’re going to get impressively shambolic results.
The surprising thing is that we’re surprised by this any longer.
If a 95% chance of a model being correct is understood to really mean 0%, then that’s fine by me. Everyone would understand that the forecasts are rubbish, alongside a lot of other things coming out of universities these days. And, having kept this in mind for the future, we could move on.
But people actually believe that a 95% prediction interval means what it says on the tin. That’s the embarrassing part – the belief in the face of evidence to the contrary. And it’s embarrassing for us, not those making the models and the forecasts. They’re just responding to our demand to be fed such nonsense.
Let’s mention the consequences of our unwavering gullibility. Not that you need to hear them to understand why I’m irritated after a long lockdown. And I’m still employed, unlike 600,000 others in the UK, with the furloughed still to come.
There are still those who claim that the virus, not the lockdown, was what did so much damage to our economies. I’m not so sure, based on this Bloomberg story:
‘Striking’ Crisis Gap Exposed as Swedish Economy Stands Out
There’s one country whose economy looks set to fare better than others when it comes to the fallout of Covid-19: Sweden.
In a report on Monday, Capital Economics presented data that give Sweden an irrefutable edge. From peak to trough, Swedish GDP will shrink 8%; in the U.K. and Italy, the contraction is somewhere between 25% and 30%, according to estimates covering the fourth quarter of 2019 through to the second quarter of 2020. The U.S. is somewhere in the middle, it said.
If no nation had locked down, I bet the hit to Sweden would’ve been much smaller.
The ironic thing is that, just days ago, Bloomberg had this headline: “Sweden Says Covid Strategy Was Never About Shielding the Economy”; and the Telegraph had this headline: “’Prof Lockdown’ Neil Ferguson admits Sweden used same science as UK”.
In other words, the Swedes were following the same Covid-19 science to come to the opposite conclusion, thereby saving their economy, comparatively speaking, by mistake…
So, even those who got it right did it for the wrong reasons. Not that anyone can agree they did it right, even after the results are in.
Strangest of all, this week, the British press is busy scolding the government for not modelling the economic fallout of a pandemic. Just when models of the outbreak itself have spectacularly failed, there’s a scandal over the lack of modelling!
It’s failure all round, in all manner of ways. But the biggest failure of all is the steadfast belief in “science”, modelling and government policy.
In the face of overwhelming evidence, people still believe in the tripe coming out of the wrong end of statistical models. We believe that a 95% prediction interval means there’s a 95% chance of the prediction being correct. Talk about a confidence game…
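The gap between the promise and the reality can be illustrated with another toy simulation (again, all figures are hypothetical): an interval that is nominally “95%” under the model’s assumed spread covers far fewer outcomes when reality turns out wilder than the model assumed.

```python
import random

random.seed(0)

# What the model believes about the spread of outcomes...
ASSUMED_SIGMA = 10.0
# ...versus what reality delivers (three times wilder, in this made-up example).
ACTUAL_SIGMA = 30.0
MU = 100.0

# The model's "95%" prediction interval, built from its own assumptions.
lower, upper = MU - 1.96 * ASSUMED_SIGMA, MU + 1.96 * ASSUMED_SIGMA

# Count how often reality actually lands inside that band.
trials = 100_000
inside = sum(lower <= random.gauss(MU, ACTUAL_SIGMA) <= upper
             for _ in range(trials))
coverage = inside / trials

print(f"nominal 95% interval, actual coverage: {coverage:.2f}")
```

With the spread underestimated three-fold, the “95%” band catches the outcome only around half the time. A model that is wrong about the world keeps its confident label all the same.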
That’s a statistics joke.
Before I leave you, I have a crucial announcement. I’m handing over the reins of Exponential Investor to my friend Sam Volkering.
Don’t worry, he’s Australian too. And he’s much more of a tech-focused investor, which is where Exponential Investor is supposed to provide you with an edge.
It’s not just Exponential Investor that he’s taking over though. Check out his predictions for the small stock that could turn ordinary household items into thinking, feeling and autonomous robots, ready to do your bidding.
Before you go, I’d like to leave you with a simple rule of thumb to guide you while I’m gone. Whenever you see the number “95%”, expect whatever comes after it to be 100% wrong.
Editor, Southbank Investment Research