
11 IMSC Session Program

Uncertain climate forecasts: When to use them, when to ignore them, and when and how to adjust them

Wednesday - Parallel Session 11

Steve Jewson (1) and Ed Hawkins (2)

(1) Risk Management Solutions
(2) Reading University, UK

Suppose that we have a forecast for a change in a climate variable (perhaps derived from an ensemble mean), and that we also have a reasonable estimate of the uncertainty around that forecast (perhaps derived from an ensemble spread). If the ratio of the size of the change to the size of the uncertainty is large, then it makes sense to use the forecast. If, however, that ratio is small, then, when minimizing expected mean squared error is the goal, the forecast would be better ignored. In between these two extremes there is a grey area where it may make sense to compromise between using and ignoring the forecast by reducing it towards zero.
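The grey area can be made precise with a standard bias-variance calculation (a sketch consistent with the setup above, not necessarily the exact formulation used in the talk). If the true change is \(\mu\) and the forecast \(\hat{\mu}\) is unbiased with uncertainty \(\sigma\), a damped forecast \(k\hat{\mu}\) has expected squared error

\[
\mathbb{E}\big[(k\hat{\mu} - \mu)^2\big] = k^2\sigma^2 + (1-k)^2\mu^2,
\]

which is minimized at

\[
k^{*} = \frac{\mu^2}{\mu^2 + \sigma^2} = \frac{s}{1+s}, \qquad s = \frac{\mu^2}{\sigma^2}.
\]

When the signal-to-noise ratio \(s\) is large, \(k^{*} \approx 1\) and the forecast should be used as is; when \(s\) is small, \(k^{*} \approx 0\) and the forecast is best ignored; in between, damping is optimal.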

First, we discuss this problem and its relation to the standard statistical ideas of overfitting, model selection, the bias-variance tradeoff, shrinkage, and biased estimation. We then discuss the application of standard model selection rules (including BIC and Bayes factors) to decide between using a forecast and ignoring it.
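To illustrate how such a rule might look in practice, here is a minimal Python sketch of a BIC comparison between "ignore the forecast" (predict zero change) and "use the forecast" (fit one scaling coefficient). The function and variable names are illustrative assumptions, not the authors' code, and the noise level sigma is taken as known:

    import numpy as np

    def bic_use_or_ignore(obs, forecast, sigma):
        """Choose between using and ignoring a forecast via BIC (illustrative sketch)."""
        n = len(obs)
        # Gaussian log-likelihoods with known sigma; the constant term
        # -n/2 * log(2*pi*sigma**2) is dropped since it cancels in the comparison.
        ll_ignore = -0.5 * np.sum(obs**2) / sigma**2
        beta = (obs @ forecast) / (forecast @ forecast)  # no-intercept least squares
        ll_use = -0.5 * np.sum((obs - beta * forecast)**2) / sigma**2
        bic_ignore = -2.0 * ll_ignore                # zero fitted parameters
        bic_use = -2.0 * ll_use + np.log(n)          # one fitted parameter (beta)
        return ("use", beta) if bic_use < bic_ignore else ("ignore", 0.0)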

Secondly, we discuss the development of new minimum mean squared error shrinkage estimators that attempt to do the best we possibly can in the grey area (see [3]). We call the new adjusted forecasts that result "damped" forecasts.
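A minimal sketch of the damping step, under the assumption that the plug-in shrinkage factor k = m^2 / (m^2 + sigma^2) is used with the forecast itself substituted for the unknown true change m (the estimator of [3] treats this substitution more carefully):

    def damp_forecast(forecast, sigma):
        """Shrink a forecast towards zero using a plug-in minimum-MSE factor.

        forecast : ensemble-mean change (float)
        sigma    : uncertainty of that change (float)
        Illustrative only; see [3] for the authors' estimator.
        """
        k = forecast**2 / (forecast**2 + sigma**2)
        return k * forecast

For example, a change of 0.5 with uncertainty 1.0 gives k = 0.2 and a damped forecast of 0.1, while a change of 5.0 with the same uncertainty gives k of about 0.96 and is passed through almost unchanged.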

Thirdly, we apply these ideas to AR4 rainfall projections. We inflate the spread of the AR4 ensemble to account for positive correlations due to common errors between different models, using the method described in [1]. We then test whether it would be best to use, ignore, or "damp" the predictions, as a function of lead time. We find that at short lead times it would be best to ignore the forecasts, at intermediate lead times it would be best to damp them, and at long lead times it would be best to use them as is (see [2]).
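The spread inflation can be sketched under the simplifying assumption that the model errors share a single pairwise correlation rho; the method of [1] estimates inter-model similarity rather than taking rho as a given input:

    import numpy as np

    def inflated_mean_uncertainty(ensemble, rho):
        """Standard deviation of the ensemble-mean error when the n model
        errors have common variance and pairwise correlation rho.

        With rho = 0 this reduces to the naive sigma / sqrt(n); rho > 0
        inflates it by sqrt(1 + (n - 1) * rho). A sketch only, not the
        method of [1].
        """
        n = len(ensemble)
        sigma2 = np.var(ensemble, ddof=1)  # sample variance across members
        return np.sqrt(sigma2 / n * (1.0 + (n - 1) * rho))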

Finally, we discuss the relevance of these ideas to other kinds of forecasts, such as seasonal and decadal forecasts.

[1] "CMIP3 Ensemble Spread, Model Similarity, and Climate Prediction Uncertainty" (2009), Jewson S. and Hawkins E., http://arxiv.org/abs/0909.1890

[2] "Improving the Expected Accuracy of Forecasts of Future Climate Using a Simple Bias-Variance Tradeoff" (2009), Jewson S. and Hawkins E., http://arxiv.org/abs/0911.1904

[3] "Improving Uncertain Climate Forecasts Using a New Minimum Mean Square Error Estimator for the Mean of the Normal Distribution" (2009), Jewson S. and Hawkins E., http://arxiv.org/abs/0912.4395

