J. Scott Armstrong is a founder of the International Journal of Forecasting, the Journal of Forecasting, the International Institute of Forecasters, and the International Symposium on Forecasting. He is the author of Long-Range Forecasting (1978, 1985) and the Principles of Forecasting handbook, as well as more than 70 papers on forecasting. Today he tabled a statement declaring that the forecasting process used by the Intergovernmental Panel on Climate Change (IPCC) lacks a scientific basis.
His eight reasons are:
1 No scientific forecasts of the changes in the Earth’s climate.
2 Improper peer review process.
3 Complexity and uncertainty of climate render expert opinions invalid for forecasting.
4 Forecasts are needed for the effects of climate change.
5 Forecasts are needed of the costs and benefits of alternative actions that might be taken to combat climate change.
6 To justify using a climate forecasting model, one would need to test it against a relevant naïve model.
7 The climate system is stable.
8 Be conservative and avoid the precautionary principle.
No scientific forecasts of the changes in the Earth’s climate.
Currently, the only forecasts are those based on the opinions of some scientists. Computer modeling was used to create scenarios (i.e., stories) to represent the scientists’ opinions about what might happen. The models were not intended as forecasting models (Trenberth 2007) and they have not been validated for that purpose. Since the publication of our paper, no one has provided evidence to refute our claim that there are no scientific forecasts to support global warming.
We conducted an audit of the procedures described in the IPCC report and found that they clearly violated 72 scientific principles of forecasting (Green and Armstrong 2008); no justification was provided for any of these violations. For important forecasts, we can see no reason why any principle should be violated. We draw analogies to flying an aircraft, building a bridge, or performing heart surgery: given the potential cost of errors, it is not permissible to violate principles.
Improper peer review process.
To our knowledge, papers claiming to forecast global warming have not been subject to peer review by experts in scientific forecasting.
Complexity and uncertainty of climate render expert opinions invalid for forecasting.
Expert opinions are an inappropriate forecasting method in situations that involve high complexity and high uncertainty. This conclusion is based on over eight decades of research. Armstrong (1978) provided a review of the evidence and this was supported by Tetlock’s (2005) study that involved 82,361 forecasts by 284 experts over two decades.
Long-term climate changes are highly complex due to the many factors that affect climate and to their interactions. Uncertainty about long-term climate changes is high due to a lack of good knowledge about such things as:
a) causes of climate change,
b) direction, lag time, and effect size of causal factors related to climate change,
c) effects of changing temperatures, and
d) costs and benefits of alternative actions to deal with climate changes (e.g., CO2 markets).
Given these conditions, expert opinions are not appropriate for long-term climate predictions.
Forecasts are needed for the effects of climate change.
Even if it were possible to forecast climate changes, it would still be necessary to forecast the effects of those changes. In other words, in what ways might the effects be beneficial or harmful? Here again, we have been unable to find any scientific forecasts, as opposed to speculation, despite our appeals for such studies.
We addressed this issue with respect to studies involving the possible classification of polar bears as threatened or endangered (Armstrong, Green, and Soon 2008). In our audits of two key papers to support the polar bear listing, 41 principles were clearly violated by the authors of one paper and 61 by the authors of the other. It is not proper from a scientific or from a practical viewpoint to violate any principles. Again, there was no sign that the forecasters realized that they were making mistakes.
Forecasts are needed of the costs and benefits of alternative actions that might be taken to combat climate change.
Assuming that climate change could be accurately forecast, it would be necessary to forecast the costs and benefits of actions taken to reduce harmful effects, and to compare the net benefit with other feasible policies including taking no action. Here again we have been unable to find any scientific forecasts despite our appeals for such studies.
To justify using a climate forecasting model, one would need to test it against a relevant naïve model.
We used the Forecasting Method Selection Tree to help determine which method is most appropriate for forecasting long-term climate change. A copy of the Tree is attached as Appendix 1. It is drawn from comparative empirical studies from all areas of forecasting. It suggests that extrapolation is appropriate, and we chose a naïve (no change) model as an appropriate benchmark. A forecasting model should not be used unless it can be shown to provide forecasts that are more accurate than those from this naïve model, as it would otherwise increase error. In Green, Armstrong and Soon (2008), we show that the mean absolute error of 108 naïve forecasts for 50 years in the future was 0.24°C.
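The benchmark test described above can be sketched in a few lines. This is a minimal illustration using made-up anomaly values, not the Hadley Centre data or the authors' actual calculations: a candidate model's forecasts are compared with a naïve no-change forecast by mean absolute error, and the candidate is only justified if it beats the naïve benchmark.

```python
# Sketch: benchmark a candidate forecasting model against a naive
# (no-change) model using mean absolute error (MAE).
# All numbers below are synthetic, purely for illustration.

def mean_absolute_error(forecasts, actuals):
    """Average of the absolute forecast errors."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical observed anomalies (deg C) and a candidate model's forecasts.
actuals = [0.10, 0.12, 0.05, 0.20, 0.15, 0.25, 0.18, 0.30]
candidate = [0.20, 0.25, 0.15, 0.35, 0.30, 0.40, 0.35, 0.45]

# Naive model: forecast every future year as the last observed value.
last_observed = 0.08
naive = [last_observed] * len(actuals)

mae_candidate = mean_absolute_error(candidate, actuals)
mae_naive = mean_absolute_error(naive, actuals)

# A forecasting model should not be used unless it beats this benchmark,
# since otherwise it would increase forecast error.
print(f"candidate MAE: {mae_candidate:.4f}")
print(f"naive MAE:     {mae_naive:.4f}")
print("candidate justified" if mae_candidate < mae_naive else "use naive model")
```

In this toy example the naïve model wins, which is exactly the situation the authors argue holds for long-term climate forecasts.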
The climate system is stable.
To assess stability, we examined the errors from naïve forecasts for up to 100 years into the future. Using the U.K. Met Office Hadley Centre’s data, we started with 1850 and used that year’s average temperature as our forecast for the next 100 years. We then calculated the errors for each forecast horizon from 1 to 100. We repeated the process using the average temperature in 1851 as our naïve forecast for the next 100 years, and so on. This “successive updating” continued until year 2006, when we forecasted a single year ahead. This provided 157 one-year-ahead forecasts, 156 two-year-ahead and so on to 58 100-year-ahead forecasts.
We then examined how many forecasts were further than 0.5°C from the observed value. Fewer than 13% of the forecasts for horizons up to 65 years ahead had absolute errors larger than 0.5°C; for longer horizons, fewer than 33% did. Given the remarkable stability of global mean temperature, it is unlikely that there would be any practical benefit from a forecasting method that provided more accurate forecasts.
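The successive-updating procedure described above can be sketched as follows. A short synthetic series stands in for the Hadley Centre record, and the function name is illustrative only: each year's observed value becomes the naïve forecast for every later year, and absolute errors are collected by forecast horizon.

```python
# Sketch of "successive updating": each year's observed temperature is the
# naive forecast for all subsequent years; errors are grouped by horizon.
# The series below is synthetic; the authors used the Hadley Centre annual
# record from 1850 to 2006.

def successive_updating_errors(series, max_horizon):
    """Return {horizon: [absolute errors]} for naive no-change forecasts."""
    errors = {h: [] for h in range(1, max_horizon + 1)}
    for start in range(len(series) - 1):
        forecast = series[start]  # this year's value forecasts later years
        for h in range(1, max_horizon + 1):
            target = start + h
            if target < len(series):
                errors[h].append(abs(series[target] - forecast))
    return errors

# Illustrative 10-value series of anomalies (deg C).
series = [0.0, 0.1, -0.1, 0.2, 0.1, 0.3, 0.2, 0.4, 0.3, 0.5]
errors = successive_updating_errors(series, max_horizon=3)

# With 10 observations there are 9 one-year-ahead, 8 two-year-ahead, and
# 7 three-year-ahead forecasts -- mirroring the 157/156/.../58 counts
# obtained from the 1850-2006 record in the text.
for h, errs in errors.items():
    big = sum(1 for e in errs if e > 0.5)
    print(f"horizon {h}: {len(errs)} forecasts, {big} with |error| > 0.5")
```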
Be conservative and avoid the precautionary principle.
One of the primary scientific principles of forecasting is to be conservative in the face of uncertainty. This principle also argues for the use of the naïve no-change extrapolation. Some have argued for the precautionary principle as a way to be conservative, but it is a political principle, not a scientific one. As we explain in our essay in Appendix 2, it is actually an anti-scientific principle in that it attempts to make decisions without rational analysis. Cost/benefit analyses are appropriate instead, given the available evidence, which suggests that temperature is as likely to go up as down. However, such analyses should be supported by scientific forecasts.
More here: http://jennifermarohasy.com/blog/2009/01/no-scientific-forecasts-to-support-global-warming/
And if the long URL is mangled, here is a tinyurl: http://tinyurl.com/dbpxlu
Jennifer Marohasy BSc PhD
Senior Fellow, Institute of Public Affairs, Melbourne
Chair, Australian Environment Foundation
Columnist, The Land, Rural Press
Blogger, www.jennifermarohasy.com/blog
M: 0418 873 222 E: firstname.lastname@example.org W: jennifermarohasy.com