Criticism of forecasting accuracy

Adam Creighton has criticised the accuracy of modelling used in forecasting, conflating the performance of pandemic, climate, and economic forecasting. He is economics editor of The Australian newspaper, and his article of 17 June 2020 was headed “Coronavirus: Inflated pandemic estimates weaken climate forecasts”.

Here are some excerpts from his article:

“so spectacularly bad was expert modelling of the spread and lethality of the coronavirus, faith in all modelling must surely suffer”.

“Why trust the experts to forecast the climate decades into the future when they were so wrong about a disease related to the common cold?”

“Climate modelling was struggling even before the pandemic, given the planet has warmed about half as much as forecast by the first Intergovernmental Panel on Climate Change report back in 1990”.

“It’s remarkable we put so much faith in expert models, given their history of failure. The Club of Rome in 1972 notoriously forecast that growth would collapse as the world’s resources ran out, ignoring human ingenuity and the shale revolution”.

“Financial models failed to account for — indeed they probably facilitated — the global financial crisis. And as almost every utterance by a central bank governor since has reminded us, economists struggle to know what happened last month, let alone forecast the impact of a policy change tomorrow”.

He has cited modelling from the 1970s, the 1990s, a failure to predict the global financial crisis of 2008–09, and modelling of the pandemic in 2020 – which is still spreading and claiming lives. He identifies only four “failures” in the last 50 years!

There have been many forecasting errors in recent decades, which leads one to question why he selected these particular four to demonise.

Many other forecasting failures are documented elsewhere in my book Forecasting: the essential skills. One standout is political leaders in the USA, the UK, Australia, and Spain predicting that Iraq had weapons of mass destruction, despite UN arms inspectors on the ground never having found any. The experts got it right and the politicians got it wrong (perhaps wantonly). The cost of this failed prediction was thousands of US armed forces lives, tens of thousands of civilian lives, and arguably a power vacuum filled by ISIS. The financial cost was in the trillions of dollars, as documented by Joseph Stiglitz and Linda Bilmes in their 2008 book The Three Trillion Dollar War.

The Australian newspaper’s proprietor, Rupert Murdoch, predicted that the invasion of Iraq would result in the price of oil falling from over $US30 a barrel to $US20 a barrel. He said that the whole world would benefit from cheaper oil. This was presumably a judgemental forecast, and it was at least as inaccurate as any modelling. In the event, the price never fell below $28 and was over $34 in February 2004. The price soared to $74 by mid-2006 and to $134 by mid-2008.

In fact the end of cheap oil was predicted, by modelling. In 1956, M. King Hubbert, a geologist with Shell Oil, correctly predicted that the rate of oil production from the lower 48 American states would peak in 1969. His prediction was based on the observation that oil production from any large region starts to fall when about half the crude is depleted, so that output from a region as a whole follows a bell-shaped curve. Hubbert extrapolated this to output from whole countries. Campbell and Laherrere extrapolated the pattern to world production (“The End of Cheap Oil”, Scientific American, March 1998). Their modelling was very accurate! More recently, shale oil has increased supply, which has contributed to a fall in prices – but not sustainably to $20 a barrel.
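
As a rough illustration of the kind of curve Hubbert used, here is a minimal sketch (with illustrative parameters only, not Hubbert's actual 1956 figures) of a logistic production model: cumulative output follows an S-shaped curve that levels off at the ultimately recoverable resource, and annual output, its derivative, is the bell-shaped curve described above, peaking when half the resource has been produced.

```python
import math

# Minimal sketch of a Hubbert-style (logistic) production curve.
# Parameters are illustrative only, not Hubbert's actual 1956 figures.
URR = 200.0        # assumed ultimately recoverable resource (billion barrels)
PEAK_YEAR = 1969   # year at which half the resource has been produced
K = 0.07           # steepness of the logistic curve (per year)

def cumulative(year):
    """Cumulative production: an S-shaped logistic curve levelling off at URR."""
    return URR / (1.0 + math.exp(-K * (year - PEAK_YEAR)))

def annual_production(year):
    """Annual production: the derivative of the logistic, a bell-shaped curve."""
    q = cumulative(year)
    return K * q * (1.0 - q / URR)

# Production peaks in the year when cumulative output reaches half of URR.
peak = max(range(1900, 2051), key=annual_production)
print(f"Peak production year: {peak}")
print(f"Cumulative output at peak: {cumulative(peak):.0f} of {URR:.0f} billion barrels")
```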

There is a common element to three of the four cases cited by Creighton – the Club of Rome’s Limits to Growth, climate change, and the COVID-19 pandemic: perceived damage to economic growth if the forecasts are acted on. The failure of economists to predict the GFC may be included because the recession in the USA was not averted by appropriate policy, which could have been implemented had the recession been predicted. That would have been a self-defeating prophecy!

In the case of the Club of Rome’s limits to growth, the dire predictions have not come to pass (yet) because of action taken in response to them, such as the green revolution. A self-defeating prophecy.

Similarly with climate change predictions: some action has been taken in response – not enough to slow the rate of growth of carbon dioxide in the atmosphere, but enough to stop it accelerating.

At least Creighton acknowledges there has been global warming – half as much as predicted, by his claim – so the climate model predictions were directionally accurate. The models used by climate scientists have improved since the 1990s and will continue to improve as more is learned about feedback mechanisms and other climate effects.

The predictions of epidemiologists concerning COVID-19 also resulted in action to reduce the rate of spread, which did have severe economic consequences. This pandemic has a long way to run yet, with global cases and deaths still increasing rapidly. Hopefully their predictions, too, will prove to be a self-defeating prophecy.

The failure of the vast majority of economic forecasters to predict the global financial crisis was consistent with their track record – they fail to predict turning points.

In the case of the impact of the COVID-19 pandemic in Australia, I was shocked at the scenarios. Australia’s Deputy Chief Medical Officer, Paul Kelly, said that the number of infections would be in the range 20% to 60% of the population, and that deaths would range from 50,000 to 150,000 (The Age, 17 March 2020). Australian economist Warwick McKibbin estimated that almost 100,000 Australians could die from COVID-19 (the range was 21,000 to 96,000). This was based on modelling seven different scenarios, building on the experience of the SARS outbreak in 2003 and the Spanish flu in 1918 (Australian Financial Review, 3 March 2020). The media reporting did not mention the specific assumptions used by the experts.
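
To see what such headline figures imply, a rough back-of-the-envelope reconstruction helps. The calculation below is mine, not the modellers’: it assumes an Australian population of roughly 25.7 million and an infection fatality rate of about 1%, a figure inferred from the published ranges rather than stated in either article.

```python
# Back-of-the-envelope reconstruction of the reported scenario ranges.
# Assumptions (mine, not the modellers'): population of about 25.7 million,
# infection fatality rate of about 1%, inferred from the published death ranges.
POPULATION = 25_700_000
IFR = 0.01

for attack_rate in (0.20, 0.60):
    infections = attack_rate * POPULATION
    deaths = infections * IFR
    print(f"Attack rate {attack_rate:.0%}: "
          f"{infections / 1e6:.1f} million infections, ~{deaths:,.0f} deaths")
```

The output spans roughly 51,000 to 154,000 deaths, consistent with the 50,000 to 150,000 range reported.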

As at 4 August 2020, the number of confirmed infections in Australia was 18,730 and the number of deaths was 232. The accuracy of the forecasts was abysmal. To some degree, this is attributable to the policy response – a shutdown of travel, workplaces, and schools combined with social distancing. It might be argued that without the dire forecasts, politicians might never have agreed to policies which were effective but which were known to cause an economic recession. A self-defeating prophecy.

There is, however, a second wave occurring, concentrated in Victoria. The number of active cases has surged from 129 on 14 June to 6,755 on 4 August.
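
The pace of that surge can be summarised as a doubling time. The short calculation below uses only the two active-case counts quoted above and assumes, as a simplification, constant exponential growth between the two dates.

```python
import math
from datetime import date

# Doubling time implied by the two active-case counts quoted above,
# assuming (simplistically) constant exponential growth between the dates.
cases_start, cases_end = 129, 6_755
days = (date(2020, 8, 4) - date(2020, 6, 14)).days   # 51 days

growth_rate = math.log(cases_end / cases_start) / days
doubling_time = math.log(2) / growth_rate
print(f"Active cases doubled roughly every {doubling_time:.1f} days over {days} days")
```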

The actual number of infections is likely to be much higher than the confirmed number – one estimate, made on the basis of analysis of blood tests from pregnant women and pathology tests for other purposes, puts the figure at 500,000 (Australian Financial Review, 22 July 2020). If this figure, representing about 2% of the population, is accurate, it is far lower than the 20% to 60% predicted.

Had Australia experienced the same per-capita death rate as Sweden, for example, there would have been around 15,000 deaths instead of 232. Future analysis will ascertain the relative impacts of government policies, population density, inherent population characteristics such as vitamin D levels, and other factors on death rates by country, but Australia does appear to have fared exceptionally well so far.
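
That comparison is a simple scaling of per-capita deaths. The sketch below uses approximate early-August 2020 inputs (around 5,700 cumulative Swedish deaths and populations of roughly 10.3 million and 25.7 million); because the inputs are approximations, the result is close to, rather than exactly, 15,000.

```python
# Scaling Sweden's per-capita COVID-19 death rate to Australia's population.
# Input figures are approximate early-August 2020 values.
SWEDEN_DEATHS = 5_700          # approximate cumulative deaths
SWEDEN_POPULATION = 10_300_000
AUSTRALIA_POPULATION = 25_700_000
AUSTRALIA_ACTUAL_DEATHS = 232

deaths_per_million = SWEDEN_DEATHS / (SWEDEN_POPULATION / 1e6)
hypothetical_deaths = deaths_per_million * (AUSTRALIA_POPULATION / 1e6)
print(f"Sweden: about {deaths_per_million:.0f} deaths per million")
print(f"Australia at Sweden's rate: ~{hypothetical_deaths:,.0f} deaths "
      f"(actual: {AUSTRALIA_ACTUAL_DEATHS})")
```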

Suggesting that the poor forecasting performance of pandemic experts automatically casts doubt on the forecasting performance of climate experts is misguided. The forces at play are very different.

In the case of the pandemic, the characteristics of the virus, its interactions with the human body, and the policy response are the key drivers of the outcome. Early in the pandemic these were all unknown, and so scenarios were needed to crystallise the range of outcomes. In the case of Australia, a fourth, lower scenario was not provided initially. Later, more detailed modelling guided the policy response of social distancing, which has proven to be effective.
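
The reliance on scenarios can be illustrated with a toy epidemic model. The sketch below is a minimal SIR (susceptible-infected-recovered) simulation, not the model actually used by Australian health authorities; it simply shows how the assumed reproduction number, with lower values standing in for stronger social distancing, drives the share of the population ultimately infected, which is why early modellers presented ranges rather than single numbers.

```python
# Minimal SIR simulation -- a toy model, not the one used by Australian
# health authorities -- showing how the assumed reproduction number R0
# (lower values representing stronger social distancing) drives the
# share of the population ultimately infected.
POPULATION = 25_700_000
INFECTIOUS_DAYS = 10        # assumed average infectious period
DT = 0.5                    # simulation time step (days)

def final_attack_rate(r0, days=730):
    s, i, r = POPULATION - 100.0, 100.0, 0.0       # seed with 100 infections
    beta, gamma = r0 / INFECTIOUS_DAYS, 1.0 / INFECTIOUS_DAYS
    for _ in range(int(days / DT)):
        new_infections = beta * s * i / POPULATION * DT
        new_recoveries = gamma * i * DT
        s = s - new_infections
        i = i + new_infections - new_recoveries
        r = r + new_recoveries
    return (POPULATION - s) / POPULATION

for r0 in (2.5, 1.5, 0.9):   # unmitigated, partial distancing, strong distancing
    print(f"R0 = {r0}: ~{final_attack_rate(r0):.0%} of the population infected")
```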

In the case of climate change, it is the behaviour of carbon dioxide molecules and the responses of the atmosphere and oceans that are modelled. The behaviour of carbon dioxide molecules, and of other greenhouse gases, is well known and proven empirically. There are uncertainties in the responses of the atmosphere and oceans – for example, some types of cloud reflect incoming sunlight back into space, while other types prevent heat radiated from the Earth from escaping into space. To handle these uncertainties, and the range of possible responses in terms of reducing emissions, scenarios are constructed to span the range of likely outcomes.
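
The “well known” part can be illustrated with the widely used simplified expression for the radiative forcing from a change in CO2 concentration: roughly 5.35 times the natural logarithm of the concentration ratio, in watts per square metre (Myhre et al., 1998). The climate sensitivity parameter in the sketch below is an assumed mid-range value; it is precisely the feedbacks mentioned above, such as clouds and oceans, that make that parameter uncertain.

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C / C0) W/m^2.
# The sensitivity parameter (~0.8 K per W/m^2) is an assumed mid-range value;
# feedbacks such as clouds and ocean heat uptake are what make it uncertain.
C0 = 280.0           # pre-industrial CO2 concentration (ppm)
SENSITIVITY = 0.8    # assumed equilibrium warming per unit forcing (K per W/m^2)

for c in (415, 560):     # roughly the 2020 level, and a doubling of pre-industrial
    forcing = 5.35 * math.log(c / C0)
    print(f"CO2 at {c} ppm: forcing {forcing:.2f} W/m^2, "
          f"~{forcing * SENSITIVITY:.1f} K equilibrium warming")
```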

I am no staunch defender of current practices in modelling. Throughout my book I have criticised forecasts based on modelling. My aim, however, is to contribute to improved modelling and forecasting accuracy. Creighton, on the other hand, demonises modelling without offering any solution to the problem of forecasting inaccuracy.

Creighton did not question why the pandemic itself was not predicted. Pandemics are rare, but do they occur randomly? Or is there something about our interaction with nature that is making coronavirus outbreaks more common?

Previous coronavirus outbreaks have included MERS (Middle East respiratory syndrome) in 2012 and SARS (severe acute respiratory syndrome) in 2003. While it has been known since the 1960s that some common colds are caused by coronaviruses, there were no recorded deadly coronavirus outbreaks until SARS.

While influenza pandemics are rare, are deadly coronavirus outbreaks becoming more frequent, given three within 20 years? If so, why? Could it be due to the clearing of forests, for example?

We need more experts analysing this problem, not a pandemic of mistrust in experts and modelling based on misunderstanding.

Creighton’s article, together with similar opinions emanating from some prominent politicians and conservative commentators, presents a challenge to expert professionals involved in modelling and forecasting. The challenge is twofold: improve accuracy and build trust in the court of public opinion.

Charlie Nelson