Forecasting accuracy called into question (again)

Forecasting accuracy has been called into question again in 2020.  Economists, climate scientists, and epidemiologists (and probably others) have been tarred with the same brush and their expert modelling described as having a history of failure.


The period since mid-2018 has been a challenging one for forecasting accuracy and a review is timely.  Volume 1 of my book “Forecasting: the essential skills” was written over the period 2013 to 2018.  I found many flaws in the practice of economic forecasting and some holes in the models for forecasting weather and climate.  I found that the accuracy of political polling was in decline.  There are, of course, some success stories and also improvements in weather forecasting.

Through case studies and reviews of performance I have identified a set of skills which are essential to improving forecasting accuracy.

Volume 2 is now in preparation.  Two chapters are now available: a review of recent criticism and an evaluation of forecasting accuracy as perceived by the court of public opinion.  A September 2020 survey of the Australian general public measured perceptions of accuracy in the fields of economic, weather, and climate forecasting.  Perceptions are reasonably consistent with the reviews I have conducted and not as bad as suggested by the most strident critics – whose views are reflected in a small minority of the general public.

Future chapters will review the poor performance of forecasts over the period 2018 to 2020 and identify reasons for the shortcomings in economic and political forecasts in Australia.  There will also be a chapter on the performance of demographic forecasting upon which economic forecasts and infrastructure plans depend.

Volume 1 and the first two chapters of Volume 2 are available at  This is important information for both decision makers who rely on forecasts and forecasters who wish to improve their skills.
Charlie Nelson


Criticism of forecasting accuracy

Adam Creighton has criticised the accuracy of modelling used in forecasting, conflating the performance of pandemic, climate, and economic forecasting. He is economics editor of The Australian newspaper, and his article of 17 June 2020 was headed “Coronavirus: Inflated pandemic estimates weaken climate forecasts”.

Here are some excerpts from his article.

“so spectacularly bad was expert modelling of the spread and lethality of the coronavirus, faith in all modelling must surely suffer”.

“Why trust the experts to forecast the climate decades into the future when they were so wrong about a disease related to the common cold?”

“Climate modelling was struggling even before the pandemic, given the planet has warmed about half as much as forecast by the first Intergovernmental Panel on Climate Change report back in 1990”.

“It’s remarkable we put so much faith in expert models, given their history of failure. The Club of Rome in 1972 notoriously forecast that growth would collapse as the world’s resources ran out, ignoring human ingenuity and the shale revolution”.

“Financial models failed to account for — indeed they probably facilitated — the global financial crisis. And as almost every utterance by a central bank governor since has reminded us, economists struggle to know what happened last month, let alone forecast the impact of a policy change tomorrow”.

He has cited modelling from the 1970s, the 1990s, a failure to predict the global financial crisis of 2008-09, and modelling of the pandemic in 2020 – which is still spreading and claiming lives. He identifies only four “failures” in the last 50 years!

There have been many forecasting errors in recent decades, which leads one to question why he selected these particular four to demonise.

Many other forecasting failures are documented in my book Forecasting: the essential skills. One standout is political leaders in the USA, UK, Australia, and Spain predicting that Iraq had weapons of mass destruction, despite UN arms inspectors on the ground never having found any. The experts got it right and the politicians got it wrong (perhaps wantonly). The cost of this failed prediction was thousands of US armed forces lives, tens of thousands of civilian lives, and arguably a power vacuum filled by ISIS. The financial cost was in the trillions of dollars, as documented by Joseph Stiglitz and Linda Bilmes in their 2008 book “The Three Trillion Dollar War”.

The Australian newspaper’s proprietor, Rupert Murdoch, predicted that the invasion of Iraq would result in the price of oil falling from over $US30 a barrel to $US20 a barrel. He said that the whole world would benefit from cheaper oil. This was presumably a judgemental forecast, and it was at least as inaccurate as any modelling. In the event, the price never fell below $US28 and was over $US34 in February 2004. The price soared to $US74 by mid-2006 and to $US134 by mid-2008.

In fact the end of cheap oil was predicted, by modelling. In 1956, M. King Hubbert, a geologist with Shell Oil, correctly predicted that the rate of oil production from the lower 48 American states would peak in 1969. His prediction was based on the observation that oil production from any large region starts to fall when about half the crude is depleted. The output from a region as a whole follows a bell-shaped curve. Hubbert extrapolated this to output from whole countries. Campbell and Laherrere extrapolated this pattern to world production (“The End of Cheap Oil”, Scientific American, March 1998). Their modelling was very accurate! More recently, shale oil has increased supply, which has contributed to a fall in prices – but not sustainably to $US20 a barrel.
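
Hubbert’s bell-shaped production curve is commonly modelled as the derivative of a logistic function. The short Python sketch below is purely illustrative – the parameters (peak year, ultimate recoverable resource, steepness) are hypothetical round numbers, not Hubbert’s actual fitted values.

```python
import math

def hubbert_production(year, peak_year, urr, steepness):
    """Annual production under a logistic-derivative (Hubbert) model.

    urr is the ultimate recoverable resource; output follows a
    bell-shaped curve peaking at peak_year.
    """
    x = math.exp(-steepness * (year - peak_year))
    return urr * steepness * x / (1.0 + x) ** 2

# With these toy parameters, production peaks in 1969 and is lower
# on either side of the peak (the bell shape described above).
peak = hubbert_production(1969, peak_year=1969, urr=200.0, steepness=0.07)
before = hubbert_production(1950, peak_year=1969, urr=200.0, steepness=0.07)
after = hubbert_production(1990, peak_year=1969, urr=200.0, steepness=0.07)
assert peak > before and peak > after

# Cumulative production (the logistic itself) reaches exactly half of
# urr at the peak year – the "half the crude depleted" observation.
cumulative_at_peak = 200.0 / (1.0 + math.exp(0.0))
assert cumulative_at_peak == 100.0
```

In practice the curve is fitted to historical production data, which is how Hubbert, and later Campbell and Laherrere, arrived at their peak dates.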

There is a common element to the four cases cited by Creighton – perceived damage to economic growth if the forecasts are acted on, in the cases of the Club of Rome’s Limits to Growth, climate change, and the COVID-19 pandemic. The failure of economists to predict the GFC may be included because the economic recession in the USA was not averted by appropriate policy, which could have been implemented had the recession been predicted. That would have been a self-defeating prophecy!

In the case of the Club of Rome limits to growth, the dire predictions did not occur (yet) because of action, such as the green revolution, in response to the predictions. A self-defeating prophecy.

Similarly in the case of climate change predictions: some actions in response have been taken. Not enough action to slow the rate of growth of carbon dioxide in the atmosphere, but enough to stop an acceleration of it.

At least Creighton acknowledges there has been global warming – half as much as predicted is his claim – so the climate model predictions were directionally accurate. The models used by climate scientists have improved since the 1990s and will continue to improve as more is learned about feedback mechanisms and other climate effects.

The predictions of epidemiologists concerning COVID-19 also resulted in action to reduce the rate of spread, which did have severe economic consequences. This pandemic has a long way to run yet, with global cases and deaths still increasing rapidly. Hopefully their predictions were also a self-defeating prophecy.

The failure of the vast majority of economic forecasters to predict the global financial crisis was consistent with their track record – they fail to predict turning points.

In the case of the impact of the COVID-19 pandemic in Australia, I was shocked at the scenarios. Australia’s Deputy Chief Medical Officer Paul Kelly said that the number of infections would be in the range 20% to 60% of the population. Deaths would range from 50,000 to 150,000 (The Age, 17 March 2020). Australian economist Warwick McKibbin estimated that almost 100,000 Australians could die from COVID-19 (the range was 21,000 to 96,000). This was based on modelling of seven different scenarios, building on the experience of the SARS outbreak in 2003 and the Spanish Flu in 1918 (Australian Financial Review, 3 March 2020). The media reporting did not mention the specific assumptions used by the experts.

As at 4 August 2020, the number of confirmed infections in Australia was 18,730 and the number of deaths was 232. The accuracy of the forecasts was abysmal. To some degree, this is attributable to the policy response – a shutdown of travel, workplaces, and schools combined with social distancing. It might be argued that without the dire forecasts, politicians may never have agreed to the policies which were effective but which were known to cause an economic recession. Self-defeating prophecy.

There is, however, a second wave occurring concentrated in Victoria. The number of active cases has surged from 129 on 14 June to 6,755 on 4 August.

The actual number of infections is likely to be much higher than the confirmed number – one estimate, made on the basis of analysis of blood tests from pregnant women and pathology tests for other purposes, puts the figure at 500,000 (Australian Financial Review, 22 July 2020). If this figure, representing 2% of the population, is accurate, it is still far lower than the 20% to 60% predicted.

Had Australia experienced the same death rate as Sweden, for example, there would have been 15,000 deaths instead of 232.  Future analysis will ascertain the relative impacts of government policies, population density, inherent population characteristics such as vitamin D levels, and other factors on death rates by country, but Australia does appear to have fared exceptionally well so far.
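
The Sweden comparison can be checked with back-of-envelope arithmetic. The figures below are rounded assumptions (Swedish deaths and both populations as of roughly early August 2020), used only to illustrate the calculation, not exact official statistics.

```python
# Rounded, assumed inputs – approximate figures for early August 2020.
sweden_deaths = 5_700
sweden_population = 10_200_000
australia_population = 25_600_000

# Scale Sweden's per-capita death rate to Australia's population.
deaths_per_capita = sweden_deaths / sweden_population
implied_australian_deaths = deaths_per_capita * australia_population
# Roughly 14,000 with these rounded inputs – the same order as the
# 15,000 cited above, versus the 232 deaths actually recorded.
```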

Suggesting that the poor forecasting performance of the pandemic experts automatically casts doubt on the forecasting performance of climate experts is misguided. The forces at play are very different.

In the case of the pandemic, the characteristics of the virus, interactions with the human body, and the policy response are the key drivers of the outcome.  Early on in the pandemic these were all unknown and so scenarios were needed to crystallise the range of outcomes.  In the case of Australia, a fourth, lower, scenario was not provided initially.  Later, more detailed, modelling guided the policy response of social distancing which has proven to be effective.

In the case of climate change, it is the behaviour of carbon dioxide molecules and the responses of the atmosphere and oceans which are modelled. The behaviour of carbon dioxide molecules, and those of other greenhouse gases, is well known and proven empirically. There are uncertainties in the responses of the atmosphere and oceans – for example, some types of cloud reflect incoming sunlight back into space, while other types prevent heat radiated from Earth from escaping into space. To handle these uncertainties, and the range of responses in terms of reducing emissions, scenarios are constructed to span the range of likely outcomes.

I am no staunch defender of current practices in modelling. Throughout my book I have criticised forecasts based on modelling. My aim, however, is to contribute to improved modelling and forecasting accuracy. Creighton, on the other hand, demonises modelling without offering any solution to the problem of forecasting inaccuracy.

Creighton did not question why the pandemic itself was not predicted.  Pandemics are rare, but do they occur randomly?  Or is there something about our interaction with nature which is making coronavirus outbreaks more common?

Previous coronavirus outbreaks have included MERS (Middle East respiratory syndrome) in 2012 and SARS (severe acute respiratory syndrome) in 2003. While it has been known since the 1960s that the common cold is caused by a coronavirus, there were no recorded deadly coronavirus outbreaks before SARS.

While influenza pandemics are rare, are coronavirus pandemics becoming more frequent, given three within 20 years? If so, why? Could it be due to the clearing of forests, for example?

We need more experts analysing this problem, not a pandemic of mistrust in experts and modelling based on misunderstanding.

Creighton’s article, and similar opinions emanating from some prominent politicians and conservative commentators, present a challenge to expert professionals involved in modelling and forecasting.  The challenge is twofold: improve accuracy and build trust in the court of public opinion.

Charlie Nelson


A change in the expectations and concerns of the Australian general public


There has been an abrupt shift in consumer psychology which has implications for government policy, the Reserve Bank, and business decision makers.  Economic pessimism has increased and the level of belief in climate change has lifted.

These changes seem to be at odds with the federal government’s “sticking to our policy” mantra.  The heightened expectation of a rise in unemployment is inconsistent with the Reserve Bank’s hope for a decline in the unemployment rate to 4.5%.  Other shifts in consumer psychology are more positive and provide an opportunity to boost consumer spending growth.

The level of belief in imminent climate change in late 2019 is the second-highest recorded and is slightly higher than in 2007, when John Howard lost his seat in parliament and his government lost office.

The federal government and many businesses need to take more decisive action on climate change to satisfy voter (and customer) expectations.

For several years, prominent Australian economist Ross Garnaut has warned of “the great Australian complacency” which has significantly slowed Australia’s economic growth rate.  That description clearly also applies to the issue of climate change.

As the new decade dawns, more Australians are experiencing the costs of these policy complacencies.

Our tracking survey update was in field in November and early December 2019.

A summary report is available at

Charlie Nelson


Australia’s Bureau of Meteorology needs a new model

On 8 May 2018 the Bureau’s forecast for cumulative rainfall in Melbourne over the following five days was between 35mm and 144mm. The actual amount received was 39mm. This was, of course, within that wide band but towards the lower end. The mean of the range was 89.5mm, so on that basis the forecast error was quite large. That may have been disappointing for farmers but a relief for emergency services planners. The largest forecast error for any of those days was for the Friday: the forecast was for 25mm to 80mm, but only 8.4mm was received.

It would be helpful for planners to have a probability distribution. For example, was the upper limit of the five day forecast (144mm) a one in 100 chance or a one in ten chance?
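
One way to provide such a distribution is to publish exceedance probabilities derived from an ensemble of model runs. The Python sketch below is hypothetical – the ensemble values are invented to span the 35mm to 144mm range in the example, and are not actual Bureau output.

```python
import statistics

# Hypothetical five-day rainfall totals (mm) from 12 ensemble runs,
# invented to span the 35mm-144mm range in the example above.
ensemble_mm = [35, 42, 55, 60, 68, 75, 82, 90, 101, 110, 125, 144]

def exceedance_probability(ensemble, threshold_mm):
    """Fraction of ensemble members at or above a rainfall threshold."""
    return sum(1 for x in ensemble if x >= threshold_mm) / len(ensemble)

# A planner can now ask how likely the upper limit actually is:
p_upper = exceedance_probability(ensemble_mm, 144)  # 1 of 12, about 8%
central = statistics.median(ensemble_mm)            # a central estimate
```

A real system would derive these probabilities from physics-based ensemble members rather than a hand-written list, but the principle – report a distribution, not just a range – is the same.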

This forecasting error might be understandable if it was an isolated incident. Melbourne is a difficult location to forecast due to high variability. But this was not an isolated incident.

Only a few months earlier, in December 2017, an unprecedented rainfall event was predicted but did not materialise. In January 2015, another such prediction was made and did not materialise.

These seem to be systematic failures rather than random errors, suggesting that the Bureau’s models are not sensitive to an important driving force.

What is that driving force? Find out in my book “Forecasting: the essential skills”. It describes these incidents in more detail and contains my suggestion as to what that Factor X is. The book also reviews forecasting skill in economic and political forecasting as well as weather and climate forecasting.

Charlie Nelson