A Hybrid Factored Frontier Algorithm for Dynamic Bayesian Networks

The overall error at time $t$, denoted $\Delta_t$, is given by $\Delta_t = \max_{u \in V^n} |P(X_t = u) - B^t(u)|$. Using a reasoning similar to [1], this error can be bounded by $\epsilon_0 \cdot \sum_{j=0}^{t} \beta^j$, where $0 \le \beta \le 1$ is a constant determined by the stochastic transition matrix associated with the DBN. Further, $\beta < 1$ under fairly mild restrictions placed on the underlying Markov chain, and in this case we have $\sum_{j=0}^{t} \beta^j < 1/(1-\beta)$. A technical analysis of $\epsilon_0$ shows that $\epsilon_0$ converges to 1 as $n$, the number of variables, tends to $\infty$. Interestingly, the FF error on a marginal can be large in practice too. Specifically, for the simple network of Figure 1, we get an error as high as 0.16 on one of the marginals.

4 The Hybrid Factored Frontier Algorithm

During the error analysis for FF, we observed that if $\epsilon_t$ is large, then $B^t(v)$ is large for some $v$, and hence $M^t(i, v_i)$ is large for every $i$. But then there cannot be too many such $v$. For instance, there can be only one such $v$ if we require $M^t(i, v_i) > \frac{1}{2}$ for each $i$. Thus, if we can explicitly record $B^t(v)$ for the small subset of $V^n$ on which $M^t$ is high in every dimension, we can significantly improve FF. Unfortunately, this cannot be done exactly, since it would involve an exhaustive search through $V^n$. Instead, we have to do this approximately.

Accordingly, the Hybrid FF algorithm works as follows. Starting with $t = 0$, we inductively compute and maintain the tuple $(M^t, S^t, B^t_H, \alpha^t)$, where:

– $M^t$ is a marginal function.
– $S^t \subseteq V^n$ is a set of tuples called spikes.
– $B^t_H : V^n \to [0, 1]$ is a function such that $B^t_H(u) = 0$ if $u \notin S^t$ and $\sum_{u \in S^t} B^t_H(u) < 1$.
– $\alpha^t = \sum_{u \in S^t} B^t_H(u)$.

We define $M^t_H(i, v) = \big[ M^t(i, v) - \sum_{\{u \in S^t \mid u_i = v\}} B^t_H(u) \big] / (1 - \alpha^t)$ for all $i$ and $v$. It is easy to observe that this is a marginal function. We next define $B^t$ as follows:

  $B^t(u) = B^t_H(u) + (1 - \alpha^t) \prod_i M^t_H(i, u_i)$   (1)

We need to use $M^t_H$ rather than $M^t$, since the cumulative weight of the contribution made by the spikes needs to be discounted from $M^t$. This ensures that $B^t$ is a well-defined belief state. The crucial parameter of our algorithm is $\sigma$, the number of spikes we choose to maintain. The accuracy of the algorithm improves as $\sigma$ increases, but so does the running time. We have found $\sigma = n^3$ to be more than ample and still computationally feasible for a large network, as shown in the next section.

4.1 The algorithm

We initialize with $M^0 = C^0$, $S^0 = \emptyset$, $B^0_H = 0$ and $\alpha^0 = 0$, and fix $\sigma$. Then, we inductively compute $(M^{t+1}, S^{t+1}, B^{t+1}_H, \alpha^{t+1})$ from $(M^t, S^t, B^t_H, \alpha^t)$ as follows.

Step 1: Compute $M^{t+1}$ as

  $M^{t+1}(i, v) = \sum_{u \in S^t} C^{t+1}_i(v \mid u_{\hat{i}}) \, B^t_H(u) \; + \; (1 - \alpha^t) \sum_{u_{\hat{i}}} C^{t+1}_i(v \mid u_{\hat{i}}) \prod_{j \in \hat{i}} M^t_H(j, u_j)$
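To make the hybrid representation concrete, the following is a minimal Python sketch of the discounted marginals $M^t_H$, the belief reconstruction of Eq. (1), and the Step 1 update. It assumes each of the $n$ variables takes values in $\{0, \dots, k-1\}$, that spikes are stored as a dictionary mapping value tuples $u$ to their weights $B^t_H(u)$, and that each CPT $C^{t+1}_i$ is given as a dictionary mapping parent-value tuples to a length-$k$ distribution over the values of variable $i$. All names and data layouts are illustrative choices, not taken from the paper.

```python
import itertools
import numpy as np

def discounted_marginals(marginals, spikes, alpha):
    """M_H(i, v) = [M(i, v) - spike mass with u_i = v] / (1 - alpha)."""
    m_h = np.array(marginals, dtype=float)     # shape (n, k)
    for u, w in spikes.items():                # spikes: dict tuple u -> B_H(u)
        for i, v in enumerate(u):
            m_h[i, v] -= w
    return m_h / (1.0 - alpha)                 # each row sums to 1 again

def hybrid_belief(u, marginals, spikes, alpha):
    """Eq. (1): B(u) = B_H(u) + (1 - alpha) * prod_i M_H(i, u_i)."""
    m_h = discounted_marginals(marginals, spikes, alpha)
    factored = np.prod([m_h[i, v] for i, v in enumerate(u)])
    return spikes.get(tuple(u), 0.0) + (1.0 - alpha) * factored

def step1_marginals(cpts, parents, marginals, spikes, alpha, k):
    """Step 1: next-step marginals, spike contribution plus factored part."""
    n = len(parents)
    m_h = discounted_marginals(marginals, spikes, alpha)
    m_next = np.zeros((n, k))
    for i in range(n):
        pa = parents[i]                        # indices of the parents of i
        for v in range(k):
            # exact contribution of the explicitly stored spikes
            spike_part = sum(
                w * cpts[i][tuple(u[j] for j in pa)][v]
                for u, w in spikes.items())
            # factored contribution, summed over all parent configurations
            fact_part = sum(
                cpts[i][upa][v] * np.prod([m_h[j, upa[a]] for a, j in enumerate(pa)])
                for upa in itertools.product(range(k), repeat=len(pa)))
            m_next[i, v] = spike_part + (1.0 - alpha) * fact_part
    return m_next
```

As a quick sanity check under these assumptions, calling hybrid_belief with an empty spike dictionary and alpha = 0.0 reduces to the plain product of marginals, i.e. the ordinary FF belief state, which is exactly the behaviour Eq. (1) prescribes when no spikes are maintained.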
