Chapter 3: Changes in Climate Extremes and their Impacts on the Natural Physical Environment

In the case of statistical downscaling, uncertainties are induced by, inter alia, the definition and choice of predictors (Benestad, 2001; Hewitson and Crane, 2006; Timbal et al., 2008) and the underlying assumption of stationarity (Raje and Mujumdar, 2010). In general, both approaches to downscaling are maturing and being more widely applied, but they are still restricted in terms of geographical coverage (Maraun et al., 2010). For many regions of the world, no downscaled information exists at all, and regional projections rely only on information from GCMs (see Table 3-3).

For many user-driven applications, impact models (e.g., hydrological or ecosystem models) need to be included as an additional step for projections. Because of the previously mentioned issues of scale discrepancies and overall biases, it is necessary to bias-correct RCM data before input to some impact models, that is, to bring the statistical properties of present-day simulations in line with observations and to use this information to correct projections. A number of bias correction methods, including quantile mapping and the gamma transform, have recently been developed and exhibit promising skill for extremes of daily precipitation (Piani et al., 2010; Themeßl et al., 2011); a schematic sketch of quantile mapping is given at the end of this passage.

3.2.3.3. Ways of Exploring and Quantifying Uncertainties

Uncertainties can be explored, and quantified to some extent, through the combined use of observations and reanalyses, process understanding, a hierarchy of climate models, and ensemble simulations. Ensembles of model simulations represent a fundamental resource for studying the range of plausible climate responses to a given forcing (Meehl et al., 2007b; Randall et al., 2007). Such ensembles can be generated by (i) collecting results from a range of models from different modelling centers (multi-model ensembles), to include the impact of structural model differences; (ii) generating simulations with different initial conditions (intra-model ensembles), to characterize the uncertainties due to internal climate variability; or (iii) varying multiple internal model parameters within plausible ranges (perturbed- and stochastic-physics ensembles), with both (ii) and (iii) aiming to produce a more systematic estimate of single-model uncertainty (Knutti et al., 2010b).

Many of the global models utilized for the AR4 were integrated as ensembles, permitting more robust statistical analysis than is possible if a model is only integrated to produce a single projection. Thus the available CMIP3 Multi-Model Ensemble (MME) GCM simulations reflect both inter- and intra-model variability. In advance of AR4, coordinated climate change experiments were undertaken that provided information from 23 models from around the world (Meehl et al., 2007a). The CMIP3 simulations were made available at the Program for Climate Model Diagnosis and Intercomparison (www-pcmdi.llnl.gov/ipcc/about_ipcc.php).
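Returning to the bias-correction step mentioned above, the idea behind quantile mapping can be illustrated compactly. The following is a minimal Python/NumPy sketch, assuming empirical quantile mapping with linear interpolation between estimated quantiles; the function name, the number of quantiles, and the clamping behaviour are illustrative assumptions, not the specific implementations of Piani et al. (2010) or Themeßl et al. (2011).

```python
import numpy as np

def quantile_map(obs, mod_hist, mod_proj, n_quantiles=100):
    """Empirical quantile mapping (illustrative sketch).

    Builds a transfer function from the quantiles of the historical
    simulation (mod_hist) to the quantiles of the observations (obs),
    then applies it to projected values (mod_proj), so that the
    corrected present-day statistics match the observed ones.
    """
    probs = np.linspace(0.01, 0.99, n_quantiles)
    q_obs = np.quantile(obs, probs)       # inverse CDF of observations
    q_mod = np.quantile(mod_hist, probs)  # inverse CDF of the model

    # Each projected value is mapped through the modelled quantiles and
    # back through the observed ones; np.interp clamps values beyond
    # the outermost estimated quantiles, one crude way of handling
    # projected values more extreme than anything in the calibration data.
    return np.interp(mod_proj, q_mod, q_obs)
```

A real application would additionally handle, for example, the wet-day frequency of daily precipitation and extrapolation beyond the calibration range, which is precisely where skill for extremes is hardest to achieve.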
However, the higher temporal resolution (i.e., daily) data necessary to analyze most extreme events were quite incomplete in the CMIP3 archive, with only four models providing daily averaged output with ensemble sizes greater than three realizations, and many models not included at all. GCMs are expensive to run, so a compromise is needed between the number of models, the number of simulations per model, and the complexity of the models (Knutti, 2010).

Besides the uncertainty due to randomness itself, which is the canonical statistical definition, it is important to distinguish between the uncertainty due to insufficient agreement in the model projections; the uncertainty due to insufficient evidence (insufficient observational data to constrain the model projections, an insufficient number of simulations from different models, or insufficient understanding of the physical processes); and the uncertainty induced by insufficient literature, that is, the lack of published analyses of projections. For instance, models may agree on a projected change, but if this change is controlled by processes that are not well understood and validated in the present climate, then there is an inherent uncertainty in the projections, no matter how good the model agreement may be. Similarly, available model projections may agree on a given change, but the number of available simulations may limit the reliability of the inferred agreement (e.g., because the analyses need to be based on daily data that may not be available from all modelling groups). All of these issues have been taken into account in assessing the confidence and likelihood of projected changes in extremes for this report (see Section 3.1.5).

Uncertainty analysis of the CMIP3 MME in the AR4 focused essentially on the seasonal mean and inter-model standard deviation values (Christensen et al., 2007; Meehl et al., 2007b; Randall et al., 2007). In addition, confidence was assessed in the AR4 through simple quantification of the number of models that agree on the sign of a specific climate change (e.g., the sign of the change in the frequency of extremes), under the assumption that the greater the number of models in agreement, the greater the robustness. However, the shortcoming of this definition of model agreement is that it does not take account of possible common biases among models. Indeed, the ensemble was strictly an 'ensemble of opportunity', without a sampling protocol, and the possible dependence of different models on one another (e.g., due to shared parameterizations) was not assessed (Knutti et al., 2010a). Furthermore, this particular metric, which assesses sign agreement only, can provide misleading conclusions in cases where, for example, the projected changes are near zero.
For this reason, in our assessments of projected changes in extreme indices we consider model agreement a necessary but not a sufficient condition for likelihood statements [e.g., agreement of 66% of the models, as indicated with shading in several of the figures (Figures 3-3, 3-4, 3-6, 3-8, and 3-10), is a minimum but not a sufficient condition for a change being considered 'likely']. A minimal sketch of this agreement metric is given below.

Post-AR4 studies have concentrated more on the use of the MME in order to better characterize uncertainty in climate change projections, including those of extremes (Kharin et al., 2007; Gutowski et al., 2008a; Perkins et al., 2009). New techniques have been developed for exploiting the full ensemble information, in some cases using observational constraints to construct probability distributions (Tebaldi and Knutti, 2007; Tebaldi and Sansó, 2009), although issues such as determining appropriate metrics for weighting models remain challenging (Knutti et al., 2010a). Perturbed-physics ensembles have also become available (e.g.,
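To make the sign-agreement criterion concrete, here is a minimal sketch, assuming each model contributes a single projected change in some extremes index; the function name and the example values are hypothetical, and the threshold check illustrates the shading rule described above rather than the report's actual processing chain.

```python
import numpy as np

def sign_agreement_fraction(changes):
    """Fraction of models whose projected change has the same sign as
    the ensemble-median change (illustrative sketch)."""
    changes = np.asarray(changes, dtype=float)
    median_sign = np.sign(np.median(changes))
    return float(np.mean(np.sign(changes) == median_sign))

# Hypothetical projected changes from nine models: 7 of 9 (~78%) share
# the median sign, clearing a 66% threshold. As argued above, this is a
# necessary but not a sufficient condition for a change to be labelled
# 'likely', and the metric can mislead when changes cluster near zero.
frac = sign_agreement_fraction([0.8, 1.1, 0.3, -0.2, 0.9, 0.5, 1.4, -0.1, 0.6])
print(f"agreement: {frac:.0%}")  # prints: agreement: 78%
```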
