Practical Mechanism Design for Information Elicitation and Aggregation

Blake Riley

University of Illinois Urbana-Champaign

Abstract

Good decisions depend on good information. Since decision-makers rarely possess all data relevant to their situation, experts are called on to assess costs and benefits, fill gaps in knowledge, provide forecasts, and develop alternate courses of action. Expert judgment ranges from the personal, with doctors advising patients, to the global, with climatologists assessing the impact of carbon emission policies. However, purported experts might be misinformed, biased, or have conflicts of interest, leaving decision-makers with the new problem of deciding whom to trust. Formal tools to sift accurate judgments from pools of advisers could substantially ease this burden and improve decision quality.

Formal procedures for eliciting judgments have a long history in economics, statistics, meteorology, and artificial intelligence, among other fields. Unfortunately, existing procedures tend to be applicable in only a narrow range of situations, are too abstract for realistic implementation, or address only one aspect of the entire process of elicitation, aggregation, and decision-making. New tools are needed that:

• Encourage honesty from participants even when the truth is never publicly observed,
• Acknowledge that participants are not perfect Bayesians and might have systematic cognitive biases,
• Anticipate participants strategically misrepresenting their judgments to favor personal interests,
• Are flexible about the space of possible hypotheses, how participants form their opinions, and how much participants know about the beliefs of others,
• Can operate without payments to participants and have low transaction and communication costs, and
• Can elicit and incorporate qualitative information.

In this dissertation, I develop mechanisms for making inferences and decisions that address the above considerations, drawing on recent advancements in the theory of robust mechanism design. I emphasize simplicity and robustness over optimality so the resulting mechanisms can be put into practice with minimal barriers. Numerical simulations and experiments supplement theoretical results.

Proposal<br />

Chapter 1: Eliciting and Aggregating Probabilities from Experts

I first survey results on proper scoring rules, which form the foundation of incentive-compatible elicitation. Proper scoring rules assign rewards conditional on an eventually observed outcome so that agents maximize their expected scores by reporting their true beliefs about the likelihood of each event. If probabilities are solicited from multiple experts, these must then be combined into a single distribution to be useful. Aggregation methods might average expert beliefs, looking for consensus, or attempt to combine the information implicit in each probability, treating each opinion as data in a larger inference procedure or mimicking a process of Aumann agreement through discussion. Prediction markets based on market scoring rules are one means of simultaneously eliciting and aggregating probabilities.
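
As a concrete illustration (my own sketch, not code from the dissertation), the logarithmic scoring rule is one standard proper scoring rule: the forecaster receives the log of the probability placed on the realized outcome. The short Python snippet below checks numerically that reporting the true belief maximizes the expected score; the distributions used are hypothetical.

```python
import numpy as np

# Logarithmic scoring rule: the reward is the log of the probability
# the agent assigned to the outcome that actually occurred.
def log_score(report, outcome):
    return np.log(report[outcome])

# Expected score under the agent's true belief, for a given reported distribution.
def expected_score(report, belief):
    return sum(p * log_score(report, i) for i, p in enumerate(belief))

belief = np.array([0.7, 0.3])                           # agent's true belief over two outcomes
truthful = expected_score(belief, belief)               # report the belief itself
shaded = expected_score(np.array([0.9, 0.1]), belief)   # exaggerate toward the likelier outcome
print(truthful, shaded)  # the truthful report yields the strictly higher expected score
```

Market scoring rules build on the same idea sequentially: each trader who moves the market probability is paid the difference between the score of the new distribution and that of the previous one.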

Chapter 2: Truthful Revelation without External Verification

I next survey work on peer-prediction mechanisms, including Drazen Prelec’s Bayesian truth serum. As “peer-prediction” suggests, participants predict the answers of other agents in addition to stating their own opinion. Comparing predictions against each other rather than against an exogenously revealed outcome generates truthfulness internally, enabling the elicitation of judgments on vague, hypothetical, or very long-term questions. As long as one is willing to assume agents share a common prior, I show large classes of these mechanisms exist. Mechanisms can be easily tailored to the question at hand, taking advantage of natural orderings or hierarchies in possible answers.
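
For concreteness, here is a minimal sketch (my own illustration, not a specific mechanism from the chapter) of the prediction-scoring component that peer-prediction mechanisms such as the Bayesian truth serum build on: each participant’s prediction of how peers will answer is scored against the empirical distribution of the peers’ actual reports, here with a quadratic rule for simplicity. The answer set and reports are hypothetical.

```python
from collections import Counter

# Quadratic (Brier-style) score of a participant's prediction of peer answers:
# the negative squared distance between predicted and observed frequencies.
def prediction_score(prediction, peer_reports, answers):
    n = len(peer_reports)
    counts = Counter(peer_reports)
    freq = {a: counts[a] / n for a in answers}
    return -sum((prediction.get(a, 0.0) - freq[a]) ** 2 for a in answers)

answers = ["yes", "no"]
peer_reports = ["yes", "yes", "no", "yes"]   # the other agents' stated answers
score = prediction_score({"yes": 0.7, "no": 0.3}, peer_reports, answers)
print(round(score, 3))  # -0.005: the prediction is close to the observed 75/25 split
```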

Chapter 3: Accounting for Preferences

Scoring rules, prediction markets, and peer-prediction mechanisms assume participants care only about maximizing their score from the mechanism, often given to the expert as a payment. However, agents are also likely to care about how their responses are used. The principal-agent and information-transmission literature in economics describes how conflicts in preferences can limit communication, but yields little practical advice. Although preferences constrain what can be implemented, this comes with a silver lining. If agents want the final outcome to reflect their opinion (though are still willing to misrepresent their evidence in order to persuade), honest revelation can be achieved without relying on payments. Since agents care about the conclusions drawn from reports, inference and decision procedures must also satisfy incentive-compatibility constraints. I investigate maximally accurate inference procedures in this setting.

Chapter 4: Robust Mechanism Design for Elicitation

A mechanism is robust if it doesn’t depend on a precise specification of agent preferences or beliefs. In particular, robust mechanism design typically attempts to relax assumptions about the common knowledge of participants. In common economic settings, higher-order beliefs of participants have no relevance for how a good should be allocated, so a robust mechanism should not be sensitive to beliefs about the beliefs of other agents. In an information elicitation setting, higher-order beliefs could contain useful information and can be exploited with peer-prediction mechanisms. I argue for appropriate notions of strategic robustness in elicitation, centering around the notion of rationalizability.

Chapter 5: Assessing Qualitative Information

Although tools for collecting and aggregating probabilistic beliefs are vital, participants might be unwilling or unable to translate qualitative knowledge into quantitative judgments. In situations when probabilities can’t be naturally interpreted as frequencies, mechanisms should be capable of handling qualitative input. This design challenge is tied to developing practical mechanisms for infinite hypothesis or type spaces. One initial solution, proposed by Vincent Conitzer, separates agents into two groups: one providing information and the other translating statements into probabilities. I investigate this proposal further.
