
University of Cantabria

Carmen Sordo
University of Cantabria

Antonio S. Cofiño
University of Cantabria

In this work we describe a simple statistical method to validate seasonal forecasts by comparing them with random predictions. This method provides an estimate of the statistical significance of the skill and hence allows us to identify predictable situations in which the seasonal system significantly outperforms a random forecast. We also analyze the advantages of post-processing the predictions with an appropriate statistical downscaling method. The technique is applied to precipitation and temperature forecasts for two regions with different seasonal behavior: Peru (in the tropics) and Spain (mid-latitudes). Results show high predictability over Peru during El Niño periods, where the use of a downscaling method clearly improves the forecast skill. Over Spain the forecast signal is much weaker, but some predictability related to El Niño and La Niña events is found.

Finally, some sensitivity studies are presented. On the one hand, we compare raw station data with high-resolution gridded interpolated data. On the other hand, different temporal aggregation periods (daily, weekly and monthly) are used for the analog downscaling method, and the resulting forecasts are compared.

For this work we have used data from the DEMETER multimodel project.
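As a rough illustration of the validation idea described above (comparing a forecast's skill against random predictions), the following Python sketch estimates a Monte Carlo p-value for a skill score. It is a minimal example under assumed choices: the correlation skill score, the permutation-based random forecasts and the function names are illustrative, not the authors' actual procedure.

import numpy as np

def skill_significance(forecast, observed, score, n_random=1000, seed=0):
    """Return the forecast's skill score and the fraction of random
    predictions that score at least as well (a Monte Carlo p-value)."""
    rng = np.random.default_rng(seed)
    actual = score(forecast, observed)
    random_scores = np.empty(n_random)
    for i in range(n_random):
        # Random prediction: a permutation of the observations, which keeps
        # the climatological distribution but destroys any real skill.
        random_scores[i] = score(rng.permutation(observed), observed)
    p_value = np.mean(random_scores >= actual)
    return actual, p_value

if __name__ == "__main__":
    # Synthetic example: 40 seasonal values and a forecast with some skill.
    rng = np.random.default_rng(1)
    obs = rng.normal(size=40)
    fcst = obs + rng.normal(scale=1.0, size=40)
    corr = lambda f, o: np.corrcoef(f, o)[0, 1]
    s, p = skill_significance(fcst, obs, corr)
    print(f"skill = {s:.2f}, Monte Carlo p-value = {p:.3f}")

A small p-value indicates a situation where the forecast significantly outperforms random predictions, which is the criterion used above to identify predictable situations.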

Tests for evaluating rank histograms from ensemble forecasts

Speaker: Ian Jolliffe

Ian Jolliffe
University of Exeter
ian@sandloch.fsnet.co.uk

Cristina Primo
ECMWF

Rank histograms are often plotted in order to evaluate the forecasts produced by an ensemble forecasting system; an ideal rank histogram is 'flat'. It has been noted previously that the obvious test of 'flatness', the well-known χ² goodness-of-fit test, spreads its power thinly and hence is not good at detecting specific alternatives to flatness, such as bias or over/under-dispersion. Other tests, which focus their power and are therefore more successful in detecting such alternatives, will be discussed and illustrated.

The talk will emphasise new tests, which decompose the overall χ² statistic, but the Cramér-von Mises family of tests will also be described.
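For readers who want to experiment with such tests, the sketch below implements the standard χ² flatness test for a rank histogram together with two single-degree-of-freedom components, a linear contrast sensitive to bias and a U-shaped contrast sensitive to over/under-dispersion, in the spirit of the decomposition mentioned above. The particular contrast vectors and function names are illustrative assumptions, not necessarily those used in the talk.

import numpy as np
from scipy.stats import chi2

def flatness_tests(counts):
    """Chi-square flatness test for a rank histogram, plus two
    one-degree-of-freedom components (bias and dispersion contrasts)."""
    counts = np.asarray(counts, dtype=float)
    k, n = counts.size, counts.sum()
    expected = n / k
    resid = counts - expected

    # Overall goodness-of-fit statistic: chi-square with k-1 df under flatness.
    chi2_total = np.sum(resid**2) / expected
    p_total = chi2.sf(chi2_total, df=k - 1)

    # Orthonormal contrasts: a linear one (detects bias) and a centred
    # quadratic, U-shaped one (detects over- or under-dispersion).
    x = np.arange(k) - (k - 1) / 2.0
    linear = x / np.linalg.norm(x)
    quad = x**2 - np.mean(x**2)
    quad = quad / np.linalg.norm(quad)

    # Each component is asymptotically chi-square with 1 df under flatness.
    comp_bias = (linear @ resid) ** 2 / expected
    comp_disp = (quad @ resid) ** 2 / expected
    return {
        "chi2_total": (chi2_total, p_total),
        "bias_component": (comp_bias, chi2.sf(comp_bias, df=1)),
        "dispersion_component": (comp_disp, chi2.sf(comp_disp, df=1)),
    }

# Example: a U-shaped histogram typical of an under-dispersive ensemble.
print(flatness_tests([30, 14, 10, 9, 11, 15, 31]))

Because the contrasts focus the available power on specific departures from flatness, the component tests can detect bias or dispersion errors that the overall χ² test misses.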
