• Probability depends on our state of knowledge, which is different for different people. Therefore, probability is necessarily subjective.

Consider two events A and B with probabilities P(A) and P(B), respectively. For any probability P(A), it holds that 0 ≤ P(A) ≤ 1. Furthermore, for two events A and B

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (1.75)

P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A), (1.76)

where ∪ denotes the logical OR and ∩ the logical AND. The probability P(A ∪ B) is that of the logical sum and P(A ∩ B) that of the logical product of the two events; the latter is also often called the joint probability. The term P(A|B) describes the probability of A under the condition that B is true, which is usually shortened to the probability of A given B.
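
To make the sum and product rules concrete, the following short Python sketch checks Eqs. (1.75) and (1.76) numerically. The fair six-sided die and the particular events A (an even roll) and B (a roll greater than 3) are illustrative assumptions only.

```python
# Numerical check of the sum rule (1.75) and the product rule (1.76) for a fair
# six-sided die; the events A and B are chosen purely for illustration.
from fractions import Fraction

omega = range(1, 7)                      # sample space of a fair die
A = {x for x in omega if x % 2 == 0}     # event A: the roll is even
B = {x for x in omega if x > 3}          # event B: the roll is greater than 3

def prob(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

P_A, P_B = prob(A), prob(B)
P_and = prob(A & B)                      # joint probability P(A ∩ B)
P_or = prob(A | B)                       # P(A ∪ B)

# Sum rule, Eq. (1.75)
assert P_or == P_A + P_B - P_and

# Product rule, Eq. (1.76): P(A ∩ B) = P(A|B) P(B) = P(B|A) P(A)
P_A_given_B = Fraction(len(A & B), len(B))   # P(A|B): counting within B
P_B_given_A = Fraction(len(A & B), len(A))   # P(B|A): counting within A
assert P_and == P_A_given_B * P_B == P_B_given_A * P_A
```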

Another important property is the probabilistic independence of events. If the probability of A does not depend on whether B is true, the events A and B are said to be independent. In that case P(A|B) = P(A) and P(B|A) = P(B). Inserting this into Eq. (1.76) yields

P(A ∩ B) = P(A)P(B). (1.77)
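
A similarly minimal sketch illustrates the independence criterion of Eq. (1.77); the die and the chosen events are again assumptions made only for illustration.

```python
# Independence check, Eq. (1.77): for a fair die, "the roll is even" and
# "the roll is at most 2" are independent events.
from fractions import Fraction

omega = range(1, 7)
A = {2, 4, 6}      # the roll is even
B = {1, 2}         # the roll is at most 2

def prob(event):
    return Fraction(len(event), len(omega))

# P(A ∩ B) = 1/6 equals P(A) P(B) = 1/2 * 1/3, so A and B are independent.
assert prob(A & B) == prob(A) * prob(B)
```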

From Eq. (1.76), Bayes' theorem is easily derived:

P(B|A) = P(A|B)P(B) / P(A), (1.78)

where P(B) is called the prior probability, P(B|A) the posterior probability, and P(A|B) the likelihood. If one identifies the event A with an observation and the event B with some set of model parameters, the likelihood can literally be described as the probability of the observation A given the specific hypothesis B. In the same context, the probability of the observation P(A) is a constant, although usually unknown, which leaves the proportionality

P(B|A) ∝ P(A|B)P(B). (1.79)

The prior probability P(B) is a statement about our knowledge of the hypotheses and is mostly assumed to be uniform when nothing is known about their probability; indeed, Bayes' postulate states that in the absence of such knowledge all hypotheses should be assigned equal prior probability.
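
The roles of the prior, the likelihood, the posterior and the normalization P(A) in Eqs. (1.78) and (1.79) can be made explicit with a toy update over two discrete hypotheses; the coin hypotheses and the observed outcome are purely illustrative assumptions.

```python
# Toy Bayesian update: after observing heads once, which coin is in hand?
prior = {"fair coin": 0.5, "two-headed coin": 0.5}        # P(B), uniform prior
likelihood = {"fair coin": 0.5, "two-headed coin": 1.0}   # P(A|B), A = "heads observed"

# Unnormalized posterior, Eq. (1.79): P(B|A) ∝ P(A|B) P(B)
unnormalized = {b: likelihood[b] * prior[b] for b in prior}

# The probability of the observation P(A) acts as the normalization in Eq. (1.78)
evidence = sum(unnormalized.values())
posterior = {b: u / evidence for b, u in unnormalized.items()}

print(posterior)   # {'fair coin': 0.333..., 'two-headed coin': 0.666...}
```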

So far, it was implicitly assumed that the variable x is discrete and that the probability function p(x) is interpreted as the probability P(A) of a proposition A, where A is true when the value of the variable is equal to x. In most cases, however, continuous variables x have to be considered, and the probability is then described by a probability density function p(x), such that p(x)dx is the probability P(A) of the proposition A, where A is true when the value of the variable lies in the interval [x, x + dx]. In the further discussion, the latter perspective is assumed.
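
As a brief numerical illustration of the continuous case, the sketch below compares p(x)dx with the exact probability of the interval [x, x + dx], assuming a standard normal density purely for illustration.

```python
# Continuous case: p(x) is a density, and p(x) dx approximates the probability
# that the variable falls in [x, x + dx]; a standard normal density is assumed.
import numpy as np
from scipy.stats import norm

x, dx = 0.5, 1e-4
p_interval = norm.cdf(x + dx) - norm.cdf(x)   # exact probability of [x, x + dx]
p_density = norm.pdf(x) * dx                  # density approximation p(x) dx
assert np.isclose(p_interval, p_density, rtol=1e-3)

# The density itself integrates to unity (simple Riemann sum as a sanity check)
grid = np.linspace(-10.0, 10.0, 200001)
assert np.isclose(np.sum(norm.pdf(grid)) * (grid[1] - grid[0]), 1.0)
```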

If one identifies the observations with a set of data d and the hypotheses with a set of model parameters θ describing our expectations, Eq. (1.79) becomes

p(θ|d) ∝ p(d|θ)p(θ). (1.80)
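
In practice, Eq. (1.80) is often evaluated by computing the product of likelihood and prior on a grid of parameter values and normalizing numerically. The sketch below does this for a one-parameter toy model; the Gaussian likelihood, the flat prior and the simulated data are illustrative assumptions only.

```python
# Grid evaluation of an unnormalized posterior p(theta|d) ∝ p(d|theta) p(theta)
# for a toy model: Gaussian data with unknown mean theta and known unit variance.
import numpy as np

rng = np.random.default_rng(0)
d = rng.normal(loc=2.0, scale=1.0, size=50)      # simulated observations

theta = np.linspace(0.0, 4.0, 401)               # grid of model parameter values
log_prior = np.zeros_like(theta)                 # flat prior p(theta) = const

# Gaussian log-likelihood log p(d|theta), summed over the data points
log_like = np.array([-0.5 * np.sum((d - t) ** 2) for t in theta])

log_post = log_like + log_prior                  # Eq. (1.80), up to a constant
post = np.exp(log_post - log_post.max())         # subtract the maximum for stability
post /= post.sum() * (theta[1] - theta[0])       # normalize to unit integral

print(theta[np.argmax(post)])                    # posterior mode, close to the true mean 2.0
```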
