
11 IMSC Session Program

Bridging the gap from indirect to direct climate data – experience with homogenizing long climate time series in the early instrumental period

Tuesday - Plenary Session 8

Reinhard Böhm
Central Institute for Meteorology and Geodynamics, Climate Research Department, Vienna, Austria

Climate data from weather services are usually regarded as a kind of “official” data of great quality and are used as the “ground truth” against which the skill of models and/or paleo-proxies is tested. Working in a weather service, I am glad about this and I can approve of it. We spend much time and invest much money, manpower and expertise in our quality controls. But the aim is to produce data of internal and spatial physical consistency according to the current state of the respective measuring site. It is these data which are stored in the databanks, exchanged all over the globe, and published in yearbooks. It does not belong to the principal canon of duties of weather services to monitor the long-term stability of their data. But we have to be aware that “original climate time series” never contain climate information exclusively. In fact there is much random noise in them and (even worse) also systematic breaks or (worst of all) trends or other features representing not climate but growing cities, growing trees, technological progress in measuring instruments, data processing, quality-control mechanisms and a number of other non-climatic factors.

Some basic findings from the experience of our group:

• No single long-term climate time series is a priori homogeneous (free from non-climatic noise).

• On average, a break is produced every 20 to 30 years which significantly modifies the series at an order comparable to, or exceeding, the real climate signal.

• Many, but not all, of these single breaks are random when the regional (global) sample is analyzed – even regionally or globally averaged series contain biases of the order of the real climate signal.

• There are a number of mathematical procedures which – preferably combined with metadata information from station history files – are able to detect and remove (or at least reduce) the non-climatic information.

• This is much work, so it should preferably be done by specialized regional groups close to the metadata – this produces the best results, is more effective and saves the time of research groups wanting to analyze the data.
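The abstract names mathematical detection procedures without specifying one. As an illustration only (not the author's method), the following sketch shows the core idea of relative homogenization with a simplified SNHT-style test statistic: the difference series between a candidate and a well-correlated reference station cancels the shared climate signal, so a step change in its mean flags a non-climatic break. All function names and test data here are hypothetical.

```python
import numpy as np

def detect_break(candidate, reference):
    """Locate the most likely breakpoint in `candidate` relative to `reference`.

    Works on the difference series q = candidate - reference: the common
    climate signal cancels, so a shift in the mean of q suggests a
    non-climatic break. Uses a simplified SNHT-type statistic
    T(k) = k*z1^2 + (n-k)*z2^2 on the standardized difference series.
    """
    q = np.asarray(candidate, dtype=float) - np.asarray(reference, dtype=float)
    q = (q - q.mean()) / q.std()            # standardize the difference series
    n = len(q)
    best_k, best_t = 0, -np.inf
    for k in range(1, n):                   # candidate break between k-1 and k
        z1 = q[:k].mean()                   # mean of pre-break segment
        z2 = q[k:].mean()                   # mean of post-break segment
        t = k * z1**2 + (n - k) * z2**2
        if t > best_t:
            best_t, best_k = t, k
    return best_k, best_t

def adjust_before_break(candidate, reference, k):
    """Shift the pre-break segment so its offset to the reference matches
    the post-break segment (a single additive adjustment)."""
    q = np.asarray(candidate, dtype=float) - np.asarray(reference, dtype=float)
    shift = q[k:].mean() - q[:k].mean()
    adjusted = np.asarray(candidate, dtype=float).copy()
    adjusted[:k] += shift
    return adjusted
```

In practice a single candidate–reference pair is rarely enough; operational homogenization compares each series against several neighbours and, as the abstract stresses, combines the statistical evidence with station-history metadata.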

The instrumental period in climatology is usually regarded to have started shortly after the mid-19th century. Respective benchmarks are the starting point of the global mean temperature time series in the 1850s and the founding of many of the national meteorological services in the following two to three decades. But there is a considerable and valuable amount of measured climate data from decades to a century earlier. The demands on these early instrumental data in terms of their comparability with modern data, however, become progressively more difficult to fulfil back in time. Decreasing network density makes mathematical homogeneity testing and

