
11 IMSC Session Program

The CMIP multi-model ensemble and IPCC: Lessons learned and questions arising

Thursday - Plenary Session 1

Reto Knutti
Institute for Atmospheric and Climate Science, ETH Zurich, Switzerland

Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. These multi-model ensembles sample initial-condition, parameter, and structural uncertainties in model design, and they have prompted a variety of approaches to quantifying uncertainty in future regional climate change. International climate change assessments such as the IPCC rely heavily on these models and often provide model ranges as uncertainties and equal-weighted averages as best-guess results; the latter assumes that individual model biases will at least partly cancel and that a model-average prediction is more likely to be correct than a prediction from a single model. This is based on the result that a multi-model average of present-day climate generally outperforms any individual model. However, there are several challenges in averaging models and interpreting spread from such ensembles of opportunity.
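As a minimal sketch of the equal-weighted approach described above (the projection values and variable names are illustrative assumptions, not CMIP data):

import numpy as np

# Hypothetical end-of-century warming projections (deg C) from an ensemble
# of models for one scenario; the numbers are illustrative only.
projections = np.array([2.1, 2.8, 3.4, 2.6, 3.9, 2.3, 3.1])

# Equal-weighted multi-model average used as the "best guess".
best_guess = projections.mean()

# Model range reported as the uncertainty band.
low, high = projections.min(), projections.max()

print(f"multi-model mean: {best_guess:.2f} deg C, range: {low:.1f}-{high:.1f} deg C")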

Among these challenges are the usually small number of models in such ensembles, their unclear distribution in model or parameter space, and the fact that extreme behavior is often not sampled when each institution develops only one or two model versions. The multi-model ensemble should probably be interpreted as a set of ‘best guess’ models from different institutions, all carefully tuned to the same datasets, rather than a set of models representing the uncertainties that are known to exist or trying to push the extremes of plausible model response.

Model skill in simulating present-day climate conditions is often weakly related to the magnitude of predicted change. It is thus unclear how the skill of these models should be evaluated, i.e., what metric should be used to define whether a model is ‘good’ or ‘bad’, and by how much our confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of inter-model spread, or a larger number of models. Metrics of skill are also likely to depend on the question and quantity of interest.

In many probabilistic methods, the models are assumed to be independent and distributed around the truth, which implies that the uncertainty of the central tendency of the ensemble decreases as the number of models increases. Because all models are based on similar assumptions and share common limitations, this behavior is unlikely to be meaningful, at least for a large number of models. Indeed, the averaging of models and the correlation structure suggest that the effective number of independent models is much smaller than the number of models in the ensemble, and that model biases are often correlated.
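A back-of-the-envelope sketch of why this matters, under the purely illustrative assumption of a common pairwise error correlation between models: if models were independent and centered on the truth, the standard error of the ensemble mean would shrink like sigma/sqrt(N), whereas correlated biases cap the benefit at an effective ensemble size of roughly N / (1 + (N - 1) * rho).

import numpy as np

N = 20        # number of models in the ensemble (illustrative)
sigma = 1.0   # spread of individual-model errors (arbitrary units)
rho = 0.4     # assumed mean pairwise error correlation (illustrative)

# Standard error of the ensemble mean if models were independent and
# distributed around the truth: shrinks as 1/sqrt(N).
se_independent = sigma / np.sqrt(N)

# Variance of the mean of N equally correlated errors is
# sigma^2 / N * (1 + (N - 1) * rho), so the benefit of adding models saturates.
se_correlated = sigma * np.sqrt((1 + (N - 1) * rho) / N)

# Effective number of independent models implied by that correlation.
n_eff = N / (1 + (N - 1) * rho)

print(f"standard error if independent: {se_independent:.2f}")
print(f"standard error with correlated biases: {se_correlated:.2f}")
print(f"effective number of independent models: {n_eff:.1f}")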

The bottom line is that despite a massive increase in computational capacity and despite (or maybe because of) an increase in model complexity, the model spread in future projections is often not decreasing. Even on the largest scale, e.g. for climate sensitivity, the range covered by models has remained virtually unchanged for three decades. Probabilistic projections based on Bayesian methods that determine weights
