
Artificial Intelligence and Soft Computing: Behavioral ... - Arteimi.info


The main steps [8], [12] of Pearl's belief-propagation algorithm are outlined below.

1. During initialization, set all λ and Π messages to 1, set the Π_B(A) message from the root to the prior probability vector [P(A1), P(A2), …, P(Am)]^T, and define the conditional probability matrices. Then estimate the prior probabilities at all nodes, starting from the children of the root, by taking the product of the transpose of the conditional probability matrix at the link and the prior probability vector of the parent. Repeat this for all nodes down to the leaves.

2. Generally, the variables at the leaves of the tree are instantiated. Suppose the variable E = E2 is instantiated. In that case, we set [7]

λE(B) = [0 1 0 0 0 0 … 0],

where the second element corresponds to the instantiation E = E2.

3. When a node variable is not instantiated, we calculate its λ values by the formula outlined in fig. 9.6.

4. The λ and Π messages are sent to the parents and the children of the instantiated node. A leaf node need not send a Π message; similarly, the root node need not send a λ message.

5. Propagation continues from the leaf to its parent, then from the parent to the grandparent, until the root is reached. Downstream propagation then starts from the root to its children, then from the children to the grandchildren of the root, and so on until the leaves are reached. Equilibrium is reached when the λ and Π messages no longer change (unless a further instantiation is made). The belief value at each node now reflects that node's belief for 'the car does not start' (in our example tree).

6. To fuse the beliefs of more than one piece of evidence, submit the corresponding λ messages at the respective leaves one after another and repeat from step 3; otherwise stop.

The resulting beliefs at each node then represent the fused joint effect of two or more observed pieces of evidence.
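The steps above can be sketched numerically for a minimal chain-structured tree A → B → E. All node names, priors, and conditional probability matrices below are invented for illustration; the sketch assumes the tree reduces to a single path, so each λ and Π message passes over exactly one link.

```python
import numpy as np

# Hypothetical three-node chain A -> B -> E (numbers invented for illustration).
P_A = np.array([0.8, 0.2])            # prior at the root A
M_BA = np.array([[0.9, 0.1],          # P(B|A): rows index A, columns index B
                 [0.3, 0.7]])
M_EB = np.array([[0.8, 0.2],          # P(E|B): rows index B, columns index E
                 [0.1, 0.9]])

# Step 1: downward Pi messages give the prior at every node
# (prior of child = transpose of link matrix times prior of parent).
pi_B = P_A @ M_BA                     # P(B) = [0.78, 0.22]
pi_E = pi_B @ M_EB

# Step 2: instantiate the leaf E = E2 -> lambda vector with a 1 in position 2.
lam_E = np.array([0.0, 1.0])

# Steps 4-5: propagate lambda upward link by link until the root is reached.
lam_B = M_EB @ lam_E                  # lambda message arriving at B
lam_A = M_BA @ lam_B                  # lambda message arriving at A

# Belief at a node is proportional to the product of its Pi and lambda values.
def belief(pi, lam):
    b = pi * lam
    return b / b.sum()

bel_A = belief(P_A, lam_A)            # posterior P(A | E = E2)
bel_B = belief(pi_B, lam_B)           # posterior P(B | E = E2)
```

After the upward pass, normalizing the element-wise product of each node's Π and λ vectors yields the same posterior that direct enumeration of P(A, B, E = E2) would give, which is the equilibrium condition described in step 5.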

We presented Pearl's scheme for evidential reasoning for a tree structure only. However, the belief propagation scheme of Pearl can also be extended
