
GPANet-Report-2012


Birger Heldt

Mass Atrocities Early Warning Systems:
Data Gathering, Data Verification, and Other Challenges

manner, independently of each other, and across any case, and that conclusions/predictions can be replicated given the use of identical data. Unless this is achieved, early warning falls back to being an art form, without transparency or a rigorous, systematic character. To exemplify: with the help of Harff's (2003) transparent model and findings, any policy planner can calculate new genocide risk scores, and replicate old ones, for any country. For early warning models to become useful for practitioners, it is important to raise the bar to this level.
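The replicability point can be illustrated with a toy transparent model. Everything below is hypothetical: the indicator names and coefficients are invented for illustration and are not Harff's (2003) actual estimates. The only claim is that a published model plus identical data always yields identical scores, so any analyst can independently reproduce them.

```python
import math

# Hypothetical coefficients for illustration only -- NOT Harff's (2003)
# actual estimates. A published, transparent model lets anyone recompute
# a risk score from the same inputs.
COEFFICIENTS = {
    "intercept": -2.5,
    "prior_upheaval": 1.2,   # hypothetical indicator weights
    "autocracy": 0.9,
    "trade_openness": -0.8,
}

def risk_score(indicators: dict) -> float:
    """Logistic risk score: identical inputs always yield identical output."""
    z = COEFFICIENTS["intercept"]
    for name, value in indicators.items():
        z += COEFFICIENTS[name] * value
    return 1.0 / (1.0 + math.exp(-z))

country = {"prior_upheaval": 1, "autocracy": 1, "trade_openness": 0}
# Two analysts with the same data replicate the same score exactly.
print(f"risk score: {risk_score(country):.2f}")  # → risk score: 0.40
```

Because the model is fully specified, replication requires no access to the original analyst's judgment, only to the same data.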

The detailed knowledge of country and case experts is impressive and valuable, but it has in general been found to be an imperfect basis for predictions. Citing findings from a study that covered 27,000 judgements/predictions across 55 countries over 20 years, Tetlock (2011) reports that one-third of the experts did not fare better than a random guess generator, whereas 60% of the experts did not outperform the simple decision rule "tomorrow will be just like today" (i.e., if there is peace this year, then predict there will be peace next year too). Corroborating findings are reported by Green & Armstrong (2007): expert forecasts built on the assumption that the case was unique were no more accurate than forecasts made by novices, and their accuracy was virtually identical to what could have been expected by chance alone (ibid.: 12). Interestingly, experts' success rate increased sharply (39%) when they were asked to base forecasts on outcomes from similar cases in the past, that is, on retrospectively based predictions, instead of treating the cases as unique.13
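The "tomorrow will be just like today" decision rule can be sketched in a few lines: each year's observed status simply becomes the prediction for the following year. The yearly peace/conflict series below is invented purely for illustration.

```python
# Persistence baseline: predict that next year's status equals this year's.
# Hypothetical yearly statuses for one country; 1 = peace, 0 = conflict.
history = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1]

predictions = history[:-1]   # each year predicts the following year
outcomes = history[1:]

hits = sum(p == o for p, o in zip(predictions, outcomes))
accuracy = hits / len(outcomes)
print(f"persistence-rule accuracy: {accuracy:.0%}")  # → 78%
```

Because peace and conflict are highly persistent year to year, this naive rule sets a surprisingly high bar, which is what makes the reported expert results so striking.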

These findings do not imply that "unique case assumption" experts never provide accurate predictions, or that no case experts have a very good track record of accurate predictions. What can be inferred is instead that, in general, retrospectively based predictions, and hence predictions that rest on the assumption that there are empirical patterns and general causes, outperform the unique case assumption approach.

Given that retrospectively based predictions grounded in general patterns are overall superior to the unique case assumption approach, should predictions be based on an inventory of findings from case studies, or on findings from large comparative studies (i.e., statistical studies)? Large comparative studies offer broad retrospective lessons and empirical scope, but at the cost of the detailed, case-specific insights and texture that are the hallmark of case studies. Their predictions are also far from perfect. But scope and broad, sweeping lessons are valuable in that the "big picture" helps us think more

13 An analogy is medical doctors who regard every patient as unique, yet base symptom analysis, treatments, and predictions on the retrospectively based knowledge and experience of the medical field.
