For the same reasons as given for the temporal transition factor (8), we choose $f_k$ to also be an adaptive Gaussian kernel. Again, combining both factors (9) and (10) and integrating over $x''$, we get the second pairwise potential

$$\phi_k(v_x^{t'k'}, V^{t'k}) = \sum_{x''} \mathcal{N}(x'' \mid x, \Sigma_{k,x}^{tk}) \, \mathcal{N}(v_x^{t'k'} \mid v_{x''}^{t'k}, \sigma_k) \,, \quad (11)$$

which imposes a spatial smoothness constraint on the flow field via adaptive spatial weighting of the motion estimates from the coarser scale. The combination of both potentials (8) and (11) yields the complete conditional flow field transition probability as given in (2).

We impose adaptive spatial constraints on every factor of the $V$-transition. The transition factors (8) and (11) allow us to unroll two different kinds of spatial constraints along the temporal and the scale axes while adapting the uncertainties for the scale and time transitions differently. This is done by splitting not only the transition into two pairwise potentials, one for the temporal and one for the scale transition, but also every potential itself into two factors, one for the transition noise and one for an additional spatial constraint. In this way, the coupling of the potentials (8) and (11) realizes a combination of (A) scale-time prediction and (B) an integration of motion information neighboring in time, in space and in scale.

2.3 Approximate Inference

To obtain a recurrent optical flow filter we propose an approximate inference scheme based on belief propagation [15] with factored Gaussian belief representations. The structure of the graphical model in Fig. 1 is similar to a Markov random field. To derive a forward filter suitable for online applications we propose the following message passing scheme. Suppose we isolate one time slice at time $t$ and neglect all past and future beliefs; then we would have to propagate the messages $m_{k \to k'}$ (see Fig. 1) from coarse to fine and the messages $m_{k' \to k}$ from fine to coarse to compute a posterior belief over the scale Markov chain. The two-dimensional scale-time filter (STF) combines this with forward passing of temporal messages $m_{t \to t'}$ and the computation of the likelihood messages $m_{Y \to v} = l(v_x^{t'k'})$ at all scales $k$.

As a simplification we restrict ourselves to propagating messages only in one direction, $k \to k'$, and neglect passing back the message $m_{k' \to k}$. The consequence is that not all $V$-nodes at time $t$ have seen all the data $Y^{1:t,1:K}$ but only the past data up to the current scale, $Y^{1:t,1:k}$. This increases computational efficiency and is a suitable approximation since we are only interested in the flow field on the finest scale $V^{t,K}$, which is now the only node that sees all the data $Y^{1:t,1:K}$. Nevertheless, future implementations will need to evaluate whether also propagating back will improve the accuracy significantly.

More precisely, the factored observation likelihood and the transition probability we introduced in (1) and (2) ensure that the forward propagated joint belief

$$P(V^{t,1:K} \mid Y^{1:t,1:K}) = \prod_x P(v_x^{t,1:K} \mid Y^{1:t,1:K}) \quad (12)$$

will remain factored. In addition, we assume the belief over $V^{tk}$ and $V^{tk'}$ at time $t$ to be factored, which implies that the belief over $V^{t'k}$ and $V^{tk'}$ also factorizes,

$$P(V^{t'k}, V^{tk'} \mid Y^{1:t',1:k'} \setminus Y^{t'k'}) = P(V^{t'k} \mid Y^{1:t',1:k}) \, P(V^{tk'} \mid Y^{1:t,1:k'}) = \prod_x \alpha(v_x^{t'k}) \, \alpha(v_x^{tk'}) \,, \quad (13)$$

where we use $\alpha$ as the notation for forward filtered beliefs and $\setminus$ for excluding $Y^{t'k'}$ from the set of measurements $Y^{1:t',1:k'}$.
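The message passing schedule just described (a pure forward pass over time, a single coarse-to-fine pass over scale, and no backward scale messages) can be summarized as a simple control flow. The following Python sketch only illustrates this schedule under the factored Gaussian belief assumption; the helper functions `likelihood_message`, `temporal_message`, `scale_message` and `fuse_messages` are hypothetical placeholders for the computations defined in (3) and (14)-(16), not the authors' implementation.

```python
# Illustrative sketch of the scale-time filter (STF) forward pass.
# Beliefs are kept factored per pixel: alpha[k] maps pixel x -> (mean, cov).
# The four helper callables are assumed placeholders for (3), (15), (16), (14).

def stf_forward(pyramids, K, likelihood_message, temporal_message,
                scale_message, fuse_messages):
    alpha_prev = None                       # beliefs of the previous time slice
    for pyramid in pyramids:                # forward in time only (t -> t')
        alpha = [None] * K
        for k in range(K):                  # coarse (k = 0) to fine (k = K - 1)
            m_obs = likelihood_message(pyramid[k])                    # m_{Y->v}
            m_time = (temporal_message(alpha_prev[k])                 # m_{t->t'}
                      if alpha_prev is not None else None)
            m_scale = scale_message(alpha[k - 1]) if k > 0 else None  # m_{k->k'}
            # no backward message m_{k'->k}: only the finest scale V^{t,K}
            # has seen all the data Y^{1:t,1:K}
            alpha[k] = fuse_messages(m_obs, m_time, m_scale)          # eq. (14)
        alpha_prev = alpha
    return alpha_prev[K - 1]                # flow belief on the finest scale
```

Returning only the finest-scale belief mirrors the argument above that $V^{t,K}$ is the only node that needs to see all measurements.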
The STF forward filter can now be defined by the computation of updated beliefs as the product of incoming messages,

$$\alpha(v_x^{tk}) \propto m_{Y \to v}(v_x^{tk}) \, m_{t \to t'}(v_x^{tk}) \, m_{k \to k'}(v_x^{tk}) \,, \quad (14)$$

with

$$m_{t \to t'}(v_x^{t'k'}) = \int_{V^{tk'}} \phi_t(v_x^{t'k'}, V^{tk'}) \, \alpha(V^{tk'}) \, dV^{tk'} = \sum_{x'} \mathcal{N}(v_x^{t'k'} \mid x - x', \Sigma_{t,x}^{tk}) \int_{v_{x'}^{tk'}} \mathcal{N}(v_x^{t'k'} \mid v_{x'}^{tk'}, \sigma_t) \, \alpha(v_{x'}^{tk'}) \, dv_{x'}^{tk'} \,, \quad (15)$$

$$m_{k \to k'}(v_x^{t'k'}) = \int_{V^{t'k}} \phi_k(v_x^{t'k'}, V^{t'k}) \, \alpha(V^{t'k}) \, dV^{t'k} = \sum_{x''} \mathcal{N}(x'' \mid x, \Sigma_{k,x}^{tk}) \int_{v_{x''}^{t'k}} \mathcal{N}(v_x^{t'k'} \mid v_{x''}^{t'k}, \sigma_k) \, \alpha(v_{x''}^{t'k}) \, dv_{x''}^{t'k} \,. \quad (16)$$

For reasons of computational complexity we introduce a last approximative restriction. We want every factor of the posterior probability (14) to be Gaussian distributed,

$$\alpha(v_x^{tk}) \propto m_{Y \to v}(v_x^{tk}) \, m_{t \to t'}(v_x^{tk}) \, m_{k \to k'}(v_x^{tk}) \approx \mathcal{N}(v_x^{tk} \mid \mu_x^{tk}, \Sigma_x^{tk}) \,. \quad (17)$$
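To make the structure of the propagation equation (15) concrete, the following sketch evaluates the temporal message for a single pixel as a sum over spatial neighbors, using the standard identity $\int \mathcal{N}(v \mid u, \sigma)\, \mathcal{N}(u \mid \mu, \Sigma)\, du = \mathcal{N}(v \mid \mu, \sigma + \Sigma)$ for the inner integral. It assumes scalar (isotropic) covariances and a hypothetical neighborhood list; it is a minimal illustration, not the implementation used in the paper.

```python
import numpy as np

def gauss2d(v, mean, var):
    """Isotropic two-dimensional Gaussian density N(v | mean, var * I)."""
    d = np.asarray(v, float) - np.asarray(mean, float)
    return float(np.exp(-0.5 * d @ d / var) / (2.0 * np.pi * var))

def temporal_message(v, x, neighbors, alpha, sigma_t, Sigma_tx):
    """Evaluate m_{t->t'}(v_x^{t'k'}) from (15) at a velocity v.

    neighbors : list of neighbor positions x'
    alpha     : dict mapping x' -> (mu, Sigma) of the belief alpha(v_{x'}^{tk'})
    For each neighbor, the inner integral over v_{x'} yields
    N(v | mu_{x'}, sigma_t + Sigma_{x'}); the spatial factor
    N(v | x - x', Sigma_tx) weights the displacement-based prediction.
    """
    total = 0.0
    for xp in neighbors:
        mu, Sigma = alpha[tuple(xp)]
        spatial = gauss2d(v, np.subtract(x, xp), Sigma_tx)
        predictive = gauss2d(v, mu, sigma_t + Sigma)
        total += spatial * predictive
    return total
```

The scale message (16) has the same structure, with the spatial kernel $\mathcal{N}(x'' \mid x, \Sigma_{k,x}^{tk})$ in place of the displacement factor and $\sigma_k$ in place of $\sigma_t$.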

We fulfill this constraint by making all single messages Gaussian distributed.¹ This already holds for the observation likelihood $m_{Y \to v}(v_x^{tk})$. Inserting Gaussian distributed beliefs $\alpha$ into the propagation equations (15, 16) leads to two different Mixtures of Gaussians (MoGs) for the resulting messages,

$$m_{t \to t'}(v_x^{t'k'}) = \sum_{x'} \hat{p}_{x'}^{t'k'} \, \mathcal{N}(v_x^{t'k'} \mid \hat{\mu}_{x'}^{t'k'}, \hat{\Sigma}_{x'}^{t'k'}) \approx \mathcal{N}(v_x^{t'k'} \mid \omega_x^{t'k'}, \Omega_x^{t'k'}) \,, \quad (18)$$

with

$$\hat{p}_{x'}^{t'k'} = \mathcal{N}(x - x' \mid \mu_{x'}^{tk'}, \check{\Sigma}_{x'}^{tk'}) \,, \quad (19)$$
$$\hat{\mu}_{x'}^{t'k'} = (\sigma_t + \Sigma_{x'}^{tk'}) \, \check{\Lambda}_{x'}^{tk'} (x - x') + \Sigma_{t,x}^{tk} \, \check{\Lambda}_{x'}^{tk'} \, \mu_{x'}^{tk'} \,, \quad (20)$$
$$\hat{\Sigma}_{x'}^{t'k'} = \Sigma_{t,x}^{tk} \, \check{\Lambda}_{x'}^{tk'} \, (\sigma_t + \Sigma_{x'}^{tk'}) \,, \quad (21)$$
$$\check{\Sigma}_{x'}^{tk'} = [\check{\Lambda}_{x'}^{tk'}]^{-1} = \sigma_t + \Sigma_{t,x}^{tk} + \Sigma_{x'}^{tk'} \,,$$

and

$$m_{k \to k'}(v_x^{t'k'}) = \sum_{x''} p_{x''}^{t'k'} \, \mathcal{N}(v_x^{t'k'} \mid \mu_{x''}^{t'k}, \Sigma_{x''}^{t'k'}) \approx \mathcal{N}(v_x^{t'k'} \mid \pi_x^{t'k'}, \Pi_x^{t'k'}) \,, \quad (22)$$

with

$$p_{x''}^{t'k'} = \mathcal{N}(x'' \mid x, \Sigma_{k,x}^{tk}) \,, \qquad \Sigma_{x''}^{t'k'} = \sigma_k + \Sigma_{x''}^{t'k} \,. \quad (23)$$

In order to satisfy the Gaussian constraint formulated in (17), the MoGs (18, 22) are collapsed into single Gaussians again. This is derived by minimizing the Kullback-Leibler divergence between the given MoGs and the assumed Gaussians with respect to the means $\omega_x^{tk}, \pi_x^{tk}$ and the covariances $\Omega_x^{tk}, \Pi_x^{tk}$, which results in closed-form solutions for these parameters. The final predictive belief $\alpha(v_x^{tk})$ follows from the product of these Gaussians,

$$\alpha(v_x^{tk}) = l(v_x^{tk}) \, \mathcal{N}(v_x^{tk} \mid \tilde{\mu}_x^{tk}, \tilde{\Sigma}_x^{tk}) \,, \quad (24)$$
$$\tilde{\Sigma}_x^{tk} = \Pi_x^{tk} \, [\Pi_x^{tk} + \Omega_x^{tk}]^{-1} \, \Omega_x^{tk} \,, \quad (25)$$
$$\tilde{\mu}_x^{tk} = \Omega_x^{tk} \, [\Pi_x^{tk} + \Omega_x^{tk}]^{-1} \, \pi_x^{tk} + \Pi_x^{tk} \, [\Pi_x^{tk} + \Omega_x^{tk}]^{-1} \, \omega_x^{tk} \,. \quad (26)$$

By applying the approximation steps (17, 18) and (22) we guarantee the posterior (14) to be Gaussian, which allows for Kalman-filter-like update equations since the observation is defined to factorize into Gaussian factors (3). The final recurrent motion estimation is given by

$$\alpha(v_x^{tk}) = \mathcal{N}(v_x^{tk} \mid \mu_x^{tk}, \Sigma_x^{tk}) \quad (27)$$
$$= \mathcal{N}(-I_{t,x}^{tk} \mid (\nabla I_x^{tk})^T v_x^{tk}, \Sigma_{l,x}^{tk}) \times \mathcal{N}(v_x^{tk} \mid \tilde{\mu}_x^{tk}, \tilde{\Sigma}_x^{tk}) \,, \quad (28)$$
$$\Sigma_x^{tk} = \left[\tilde{\Lambda}_x^{tk} + \nabla I_x^{tk} \, \Lambda_{l,x}^{tk} \, (\nabla I_x^{tk})^T\right]^{-1} \,, \quad (29)$$
$$\mu_x^{tk} = \tilde{\mu}_x^{tk} - \Sigma_x^{tk} \, \nabla I_x^{tk} \, \Lambda_{l,x}^{tk} \, \tilde{I}_{t,x}^{tk} \,. \quad (30)$$

For reasons explained in [11] the innovations process is approximated as

$$\tilde{I}_{t,x}^{tk} \approx \frac{\partial}{\partial t} \, T\big(I_x^{tk}, \tilde{\mu}_x^{tk}\big) \,, \quad (31)$$

with $T$ applying a backward warp plus bilinear interpolation on the image $I_x^{tk}$ using the predicted velocities $\tilde{\mu}_x^{tk}$ from (26). What we gain is a general probabilistic scale-time filter (STF) which, in comparison to existing filtering approaches [7], [11], [13], is not a Kalman filter realization but a Dynamic Bayesian Network. If we have access to a batch of data (or a recent window of data) and do not focus on online-oriented pure forward filtering, we can compute smoothed posteriors $\gamma(v_x^{tk}) := P(v_x^{tk} \mid Y^{1:T,1:k})$. Therefore, we follow a two-filter realization for optical flow smoothing as proposed in [14].

¹ A more accurate technique (following assumed density filtering) would be to first compute the new belief $\alpha$ exactly as a MoG and then collapse it to a single Gaussian. However, this would mean extra costs. Future research will need to investigate the tradeoff between computational cost and accuracy for different collapsing methods.
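As a worked illustration of the approximation steps above, the following sketch moment-matches a Mixture of Gaussians to a single Gaussian (the closed-form KL-minimizing collapse behind (18) and (22)), fuses the two collapsed messages according to (25), (26), and applies the Kalman-like measurement update (29), (30) for a single pixel. It assumes two-dimensional velocities, a scalar observation noise and NumPy; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def collapse_mog(weights, means, covs):
    """Moment-match a Mixture of Gaussians to one Gaussian (KL-minimizing collapse).

    Returns the mixture mean and mixture covariance, i.e. the closed form
    behind (omega, Omega) in (18) and (pi, Pi) in (22).
    """
    p = np.asarray(weights, float)
    p = p / p.sum()
    mu = np.asarray(means, float)                     # shape (N, 2)
    S = np.asarray(covs, float)                       # shape (N, 2, 2)
    w = p @ mu                                        # collapsed mean
    d = mu - w
    W = np.einsum('i,ijk->jk', p, S) + np.einsum('i,ij,ik->jk', p, d, d)
    return w, W

def fuse_and_update(omega, Omega, pi, Pi, grad_I, I_t, sigma_l):
    """Predictive fusion (24)-(26) followed by the Kalman-like update (29), (30).

    grad_I : spatial image gradient at the pixel, shape (2,)
    I_t    : warped temporal derivative, the innovation from (31)
    sigma_l: scalar observation noise of the gradient-constraint likelihood
    """
    A = np.linalg.inv(Pi + Omega)
    Sigma_pred = Pi @ A @ Omega                       # eq. (25)
    mu_pred = Omega @ A @ pi + Pi @ A @ omega         # eq. (26)
    lam_l = 1.0 / sigma_l
    Sigma_post = np.linalg.inv(np.linalg.inv(Sigma_pred)
                               + lam_l * np.outer(grad_I, grad_I))   # eq. (29)
    mu_post = mu_pred - Sigma_post @ grad_I * lam_l * I_t            # eq. (30)
    return mu_post, Sigma_post
```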
3 Adaptivity Information

Now that we have set up the probabilistic filtering equations (29, 30) for recurrent optical flow computation, which constrain the estimation based on the extended Lucas-Kanade assumption that the movement within a multidimensional $(x, k, t)$ neighborhood is constant, we continue by specifying the neighborhood relations. As defined in Section 2, we want the integration of neighboring velocity estimates to be adaptable in scale $k$, time $t$ and location $x$. Therefore, the corresponding covariances $\Sigma_{l,x}^{tk}$, $\Sigma_{t,x}^{tk}$ and $\Sigma_{k,x}^{tk}$ of the different Gaussian kernels are adapted depending on the local structural information of the underlying intensity patches $I_x^{tk}$ within the neighborhood.

We assume that neighbors along the orientation of the local structure are more likely to influence the velocity of the center pixel than neighbors located beside that orientation. For this reason, we increase the spatial uncertainty for the location of the center pixel along the orientation of the structure by increasing the uncertainty of the covariance matrices $\Sigma_{l,x}^{tk}$, $\Sigma_{t,x}^{tk}$ and $\Sigma_{k,x}^{tk}$ aligned with the orientation. On the other hand, we reduce the spatial uncertainty orthogonal to the orientation to strengthen the assumption that we are more certain that the position of the pixel is somewhere
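The paragraph above states the qualitative adaptation rule only. As an illustration of how an orientation-aligned spatial covariance could be constructed, the following sketch estimates the local orientation from the structure tensor of an intensity patch and elongates the covariance along that orientation while shrinking it orthogonally. The structure-tensor estimate and the scaling constants `sigma_along` and `sigma_ortho` are illustrative assumptions, not the adaptation specified in the paper.

```python
import numpy as np

def oriented_covariance(patch, sigma_along=2.0, sigma_ortho=0.5):
    """Spatial covariance elongated along the local structure orientation.

    The orientation is taken from the eigenvector of the 2x2 structure tensor
    with the smallest eigenvalue (the direction of least intensity variation,
    i.e. along an edge); sigma_along and sigma_ortho are illustrative values.
    """
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])    # structure tensor
    eigvals, eigvecs = np.linalg.eigh(J)                   # ascending eigenvalues
    e_along = eigvecs[:, 0]                                # along the structure
    e_ortho = eigvecs[:, 1]                                # across the structure
    # larger uncertainty along the structure, smaller orthogonal to it
    return (sigma_along ** 2 * np.outer(e_along, e_along)
            + sigma_ortho ** 2 * np.outer(e_ortho, e_ortho))
```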
