
programmer-specified states, the (modes of) behaviors of the autonomous agent are limited to what has been authored. Again, probabilistic modeling enables one to recognize unspecified states as soft mixtures of known states. For instance, with S the state, instead of having either P(S = 0) = 1.0 or P(S = 1) = 1.0, one can have P(S = 0) = 0.2 and P(S = 1) = 0.8, which allows behaviors to be composed from different states. This form of continuum allows one to deal with an unknown state as an incomplete version of a known state which subsumes it. Autonomy can also be reached through learning, be it online or offline. Offline learning is used to learn parameters that do not have to be specified by the programmers and/or game designers. One can use data or experiments with known/wanted situations (supervised learning, reinforcement learning), explore data (unsupervised learning), or explore game states (evolutionary algorithms). Online learning can provide adaptability of the AI to the player and/or to its own competency at playing the game.
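The soft state mixture described above can be sketched in a few lines. This is a hypothetical illustration, not code from the thesis: the state names, behavior functions, and weighted-average combination rule are all assumptions made for the example.

```python
# Hypothetical sketch: blending behavior outputs by state probabilities,
# instead of committing to a single programmer-specified state.

def flee(pos):
    """Behavior for state S = 0: move away from the origin."""
    return (pos[0] + 1.0, pos[1] + 1.0)

def attack(pos):
    """Behavior for state S = 1: move toward the origin."""
    return (pos[0] - 1.0, pos[1] - 1.0)

def blended_move(pos, p_state):
    """Mix the behaviors' outputs according to the soft distribution P(S)."""
    moves = [flee(pos), attack(pos)]
    x = sum(p * m[0] for p, m in zip(p_state, moves))
    y = sum(p * m[1] for p, m in zip(p_state, moves))
    return (x, y)

# With P(S = 0) = 0.2 and P(S = 1) = 0.8, as in the example above,
# the resulting move is dominated by, but not identical to, attack().
print(blended_move((0.0, 0.0), [0.2, 0.8]))
```

A hard-state agent would execute exactly one of the two behaviors; here the intermediate output is itself a behavior that was never explicitly authored.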

3.2 The Bayesian programming methodology

3.2.1 The hitchhiker’s guide to Bayesian probability

Figure 3.1: An advisor introducing his student to Bayesian modeling, adapted from Land of Lisp with the kind permission of Barski [2010].

The reverend Thomas Bayes is the first credited to have worked on inverting probabilities: knowing something about the probability of A given B, how can one draw conclusions about the probability of B given A? Bayes’ theorem states:

P(A|B) = P(B|A)P(A) / P(B)

Laplace [1814] independently rediscovered Bayes’ theorem and published work on inductive reasoning using probabilities. He presented “inverse probabilities” (inferential statistics), allowing the study of causes through the observation of effects: the first use of learning the probabilities of observations given states, in order to then infer states from observations alone.
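This inversion can be shown with a small numeric sketch. The scenario and all numbers below are illustrative assumptions, not taken from the thesis: a hidden state S with a known prior, a likelihood P(O | S) learned from observations, and the posterior P(S | O) obtained by Bayes’ theorem.

```python
# Minimal sketch of "inverse probability": knowing P(observation | state),
# infer P(state | observation) via Bayes' theorem. All numbers are illustrative.

p_state = {"hidden": 0.3, "visible": 0.7}            # prior P(S)
p_obs_given_state = {"hidden": 0.1, "visible": 0.9}  # likelihood P(O = seen | S)

# Evidence P(O = seen), by marginalizing over the states
p_obs = sum(p_state[s] * p_obs_given_state[s] for s in p_state)

# Posterior P(S | O = seen) = P(O = seen | S) P(S) / P(O = seen)
posterior = {s: p_state[s] * p_obs_given_state[s] / p_obs for s in p_state}
print(posterior)
```

The denominator P(O = seen) is exactly the P(B) of the theorem above, computed by summing P(B|A)P(A) over all values of A.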

Later, by extending logic to plausible reasoning, Jaynes [2003] arrived at the same properties as the theory of probability of Kolmogorov [1933]. Plausible reasoning originates from logic, whose statements have degrees of plausibility represented by real numbers. By turning rules of inference into their probabilistic counterparts, the links between logic and plausible reasoning are direct. With C = [A ⇒ B], we have:

