
A Kalman-Particle Kernel Filter and its Application to Terrain ... - ISIF

$$A = m\,[4/(n+2)]^{1/(n+4)} \qquad (12)$$

where $m$ is a tuning parameter related to the multimodality of the conditional density.

4  The Kalman-Particle Kernel Filter

This filter combines the good properties of the Kalman filter and of the regularized particle filter. The main idea is to represent the (predictive) probability density as a mixture of Gaussian densities of the form

$$\hat p(x) = \frac{1}{N}\sum_{i=1}^{N} \varphi(x - x^i \mid P^i) \qquad (13)$$

where $x^1,\dots,x^N$ are a set of particles and the $P^i$ are positive definite matrices. In classical kernel density estimation, one takes $P^i = P$ equal to $h^2$ times the sample covariance matrix of the particles $x^1,\dots,x^N$, $h$ being a parameter to be adjusted. But the structure (13), with covariance matrices $P^i$ of order $h^2$, may not be preserved in time. In order that it be so, we introduce two kinds of resampling: partial and full resampling. Partial resampling is performed to limit the Monte Carlo fluctuations in the case where the particle weights are nearly uniform, so that there is little risk of degeneracy.

The filter algorithm consists of four steps.

- The initialization step: we initialize the algorithm at the first correction (and not prediction), that is, we initialize the density $\hat p_{1|0}$. Following the kernel density estimation method, we simply draw $N$ particles $x^1_{1|0},\dots,x^N_{1|0}$ from the unconditional distribution of $X_1$ and take

$$\hat p_{1|0}(x_{1|0}) = \frac{1}{N}\sum_{i=1}^{N} \varphi(x_{1|0} - x^i_{1|0} \mid P_{1|0}),$$

where $P_{1|0}$ equals $h^2$ times the sample covariance matrix of the particles.
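As an illustration, the initialization step can be sketched in NumPy. This is a hypothetical sketch: the function names (`initialize`, `prior_sampler`, `mixture_density`) are ours, and the standard optimal-rate bandwidth scaling $h = A\,N^{-1/(n+4)}$ with $A$ from (12) is assumed, since the text only says $h$ is "a parameter to be adjusted".

```python
import numpy as np

def gaussian_density(x, mean, cov):
    """Gaussian kernel phi(x - mean | cov)."""
    n = len(mean)
    d = x - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / \
        np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))

def initialize(prior_sampler, N, n, m=1.0, seed=None):
    """Draw N particles from the prior of X_1 and set P_{1|0} to
    h^2 times their sample covariance (initialization step)."""
    rng = np.random.default_rng(seed)
    particles = np.array([prior_sampler(rng) for _ in range(N)])   # (N, n)
    A = m * (4.0 / (n + 2.0)) ** (1.0 / (n + 4.0))                 # eq. (12)
    h = A * N ** (-1.0 / (n + 4.0))        # assumed optimal-rate scaling
    P = h ** 2 * np.cov(particles, rowvar=False)
    covs = np.repeat(P[None, :, :], N, axis=0)     # one P^i per particle
    weights = np.full(N, 1.0 / N)
    return particles, covs, weights

def mixture_density(x, particles, covs, weights):
    """Evaluate the Gaussian mixture (13) at a point x."""
    return sum(w * gaussian_density(x, p, P)
               for w, p, P in zip(weights, particles, covs))
```

For a standard-normal prior in dimension $n = 2$, `initialize(lambda rng: rng.standard_normal(2), 500, 2)` returns 500 particles sharing one small common kernel covariance, as in the classical kernel estimate.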
- The correction step: according to formula (5), the joint predictive density of $(X_k, Y_k)$ given $\{Y_1 = y_1,\dots,Y_{k-1} = y_{k-1}\}$ is

$$p_k(x_k, y_k \mid y_1,\dots,y_{k-1}) = \sum_{i=1}^{N} w^i_{k|k-1}\, \varphi(x_k - x^i_{k|k-1} \mid P^i_{k|k-1})\, \varphi(y_k - H_k(x_k) \mid R_k). \qquad (14)$$

Since $P^i_{k|k-1}$ is small, $\varphi(x_k - x^i_{k|k-1} \mid P^i_{k|k-1})$ becomes negligible as soon as $x_k$ is not close to $x^i_{k|k-1}$. Thus in (14) one can linearize $H_k$ around $x^i_{k|k-1}$, so that each term of (14) can be approximated by

$$\varphi(x_k - x^i_{k|k-1} \mid P^i_{k|k-1})\, \varphi\big(y_k - y^i_{k|k-1} - \nabla H^i_k (x_k - x^i_{k|k-1}) \mid R_k\big) \qquad (15)$$

where $y^i_{k|k-1} = H_k(x^i_{k|k-1})$ and $\nabla H^i_k$ denotes the gradient (matrix) of $H_k$ at the point $x^i_{k|k-1}$. It can be shown, using calculations similar to the derivation of the Kalman filter, that (15) can be re-factorized as

$$\varphi(x_k - x^i_k \mid P^i_k)\, \varphi(y_k - y^i_{k|k-1} \mid S^i_k)$$

where

$$x^i_k = x^i_{k|k-1} + G^i_k\,(y_k - y^i_{k|k-1}) \qquad (16)$$
$$G^i_k = P^i_{k|k-1} (\nabla H^i_k)^T (S^i_k)^{-1} \qquad (17)$$
$$P^i_k = P^i_{k|k-1} - P^i_{k|k-1} (\nabla H^i_k)^T (S^i_k)^{-1} \nabla H^i_k P^i_{k|k-1} \qquad (18)$$
$$S^i_k = \nabla H^i_k P^i_{k|k-1} (\nabla H^i_k)^T + R_k \qquad (19)$$

Therefore

$$p_k(x_k, y_k \mid y_1,\dots,y_{k-1}) \approx \sum_{i=1}^{N} w^i_{k|k-1}\, \varphi(x_k - x^i_k \mid P^i_k)\, \varphi(y_k - y^i_{k|k-1} \mid S^i_k).$$

The conditional density $p_k(x_k \mid y_1,\dots,y_k)$ of $X_k$ given the observations $Y_1 = y_1,\dots,Y_k = y_k$, being proportional to $p_k(x_k, y_k \mid y_1,\dots,y_{k-1})$, is thus the mixture of Gaussian densities

$$p_k(x_k \mid y_1,\dots,y_k) = \sum_{i=1}^{N} w^i_k\, \varphi(x_k - x^i_k \mid P^i_k) \qquad (20)$$
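A per-particle NumPy sketch of this Kalman-type correction, eqs. (16)-(19), including the associated weight update: the names `H_fun` and `H_jac` (the observation function and its Jacobian) are ours, and (18) is applied in the algebraically equivalent form $P - G S G^T$.

```python
import numpy as np

def correction_step(particles, covs, weights, y, H_fun, H_jac, R):
    """Kalman-type correction applied to each mixture component.
    particles: (N, n) predicted particles x^i_{k|k-1}
    covs:      (N, n, n) matrices P^i_{k|k-1}
    Returns corrected particles, covariances, and normalized weights."""
    N, n = particles.shape
    new_particles = np.empty_like(particles)
    new_covs = np.empty_like(covs)
    likelihoods = np.empty(N)
    for i in range(N):
        x, P = particles[i], covs[i]
        y_pred = H_fun(x)                       # y^i_{k|k-1} = H_k(x^i_{k|k-1})
        H = np.atleast_2d(H_jac(x))             # gradient of H_k at x^i_{k|k-1}
        S = H @ P @ H.T + R                     # (19)
        G = P @ H.T @ np.linalg.inv(S)          # (17)
        new_particles[i] = x + G @ (y - y_pred)         # (16)
        new_covs[i] = P - G @ S @ G.T                   # (18), equivalent form
        innov = y - y_pred
        d = innov.shape[0]
        likelihoods[i] = np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) / \
            np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
    w = weights * likelihoods               # particle-type correction factor
    return new_particles, new_covs, w / w.sum()
```

With a single particle and a linear observation, this reduces to one ordinary Kalman update, which gives a quick sanity check on (16)-(19).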

with the weights

$$w^i_k = \frac{w^i_{k|k-1}\, \varphi(y_k - y^i_{k|k-1} \mid S^i_k)}{\sum_{j=1}^{N} w^j_{k|k-1}\, \varphi(y_k - y^j_{k|k-1} \mid S^j_k)} \qquad (21)$$

One sees that the conditional density $p_k(x_k \mid y_1,\dots,y_k)$ is also a mixture of Gaussian densities, as stated before. Note that the covariance matrices $P^i_k$ of the components of this mixture are, by (18), bounded above by $P^i_{k|k-1}$, and hence remain small if they were so before. Finally, one can interpret this correction step as composed of two types of correction: a Kalman-type correction defined by (16)-(18) and a particle-type correction defined by (19) and (21).

- The prediction step: the correction step has provided an approximation of the conditional density $p_k(x_k \mid y_1,\dots,y_k)$ in the form of a mixture of Gaussian densities (20), with the mixture component matrices $P^i_k$ being small. By (4) and (20), the predictive density at the next step equals

$$p_{k+1|k}(x_{k+1} \mid y_1,\dots,y_k) = \sum_{i=1}^{N} w^i_k \int_{\mathbb{R}^n} \varphi(x_{k+1} - F_{k+1}(u) \mid S_{k+1})\, \varphi(u - x^i_k \mid P^i_k)\, du,$$

but since $\varphi(u - x^i_k \mid P^i_k)$ becomes negligible as soon as $u$ is not close to $x^i_k$, one can again make the approximation

$$F_{k+1}(u) \approx F_{k+1}(x^i_k) + \nabla F^i_{k+1}\,(u - x^i_k),$$

where $\nabla F^i_{k+1}$ denotes the gradient (matrix) of $F_{k+1}$ at the point $x^i_k$. Using this approximation, it can be shown that

$$p_{k+1|k}(x_{k+1} \mid y_1,\dots,y_k) = \sum_{i=1}^{N} w^i_k\, \varphi\big(x_{k+1} - F_{k+1}(x^i_k) \mid \nabla F^i_{k+1} P^i_k (\nabla F^i_{k+1})^T + S_{k+1}\big).$$

Thus the predictive density is still a mixture of Gaussian densities, with the covariance matrix of the $i$-th component of the mixture equal to

$$P^i_{k+1|k} = \nabla F^i_{k+1} P^i_k (\nabla F^i_{k+1})^T + S_{k+1} \qquad (22)$$

and with the weights $w^i_{k+1|k} = w^i_k$. However, the mixture component covariance matrices may no longer be small. This may be due to the presence of the additive term $S_{k+1}$ and to the amplification effect of the multiplication by $\nabla F^i_{k+1}$.
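A sketch of the prediction step under the same conventions: `F_fun` and `F_jac` are our names for the dynamics $F_{k+1}$ and its gradient, and `S_next` stands for the process-noise covariance $S_{k+1}$.

```python
import numpy as np

def prediction_step(particles, covs, weights, F_fun, F_jac, S_next):
    """Propagate each mixture component through F_{k+1}, linearized
    at x^i_k, so that P^i_{k+1|k} = grad(F) P^i grad(F)^T + S_{k+1}  (22).
    Weights are carried over unchanged: w^i_{k+1|k} = w^i_k."""
    N, n = particles.shape
    new_particles = np.empty_like(particles)
    new_covs = np.empty_like(covs)
    for i in range(N):
        Fg = np.atleast_2d(F_jac(particles[i]))       # gradient of F_{k+1}
        new_particles[i] = F_fun(particles[i])        # F_{k+1}(x^i_k)
        new_covs[i] = Fg @ covs[i] @ Fg.T + S_next    # (22)
    return new_particles, new_covs, weights.copy()
```

The amplification effect mentioned in the text is visible directly in the code: a dynamics gradient with norm above one inflates `covs[i]` at every cycle, on top of the additive `S_next`.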
- The resampling step: to reduce the errors introduced by resampling, we adopt a simple rule which waits for $m$ filter cycles before possibly resampling ($m$ being a tuning parameter). In these $m$ filter cycles resampling is skipped: one simply sets $x^i_{k+1|k} = F_{k+1}(x^i_k)$ and $w^i_{k+1|k} = w^i_k$. After these cycles, a full or partial resampling is made depending on the entropy criterion (7). Its purpose is both to keep the matrices $P^i_{k+1|k}$ low and to avoid degeneracy. To perform resampling, one first computes the matrix

$$P_{k+1|k} = \sum_{i=1}^{N} w^i_k P^i_{k+1|k} + \operatorname{cov}\big(F_{k+1}(x_k) \mid w_k\big),$$

where $\operatorname{cov}(F_{k+1}(x_k) \mid w_k)$ denotes the sample covariance matrix of the vectors $F_{k+1}(x^i_k)$ relative to the weights $w^i_k$, $i = 1,\dots,N$. Then one computes its Cholesky factorization $P_{k+1|k} = CC^T$, and then $h^{*2}$, the minimum of the smallest eigenvalues of the matrices $C^{-1} P^i_{k+1|k} (C^T)^{-1}$. The next calculations depend on whether partial or full resampling is required.

Partial resampling: if the weights are nearly uniform, that is, if the entropy criterion is less than the threshold (7), then for each $i$ we add to $F_{k+1}(x^i_k)$ a random Gaussian vector with zero mean and covariance matrix $P^i_{k+1|k} - h^{*2} P_{k+1|k}$ to obtain the new particle $x^i_{k+1|k}$, and set $w^i_{k+1|k} = w^i_k$.

Full resampling: if the weights are disparate, that is, if the entropy criterion is greater than the threshold (7), then we select $N$ particles among $F_{k+1}(x^1_k),\dots,F_{k+1}(x^N_k)$ according to the probabilities $w^1_k,\dots,w^N_k$, then add to each of them a random Gaussian vector with zero mean
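The resampling step can be sketched as follows. Two elements are assumptions on our part: criterion (7) is not in this excerpt, so a standard entropy-based degeneracy measure ($\log N$ minus the weight entropy) stands in for it, and the jitter covariance $h^{*2} P_{k+1|k}$ used after full resampling is inferred by symmetry with the partial case, since the text above is cut off before stating it.

```python
import numpy as np

def entropy_criterion(w):
    """Degeneracy measure log N - H(w); zero for uniform weights.
    Stands in for criterion (7), whose exact form is not in this excerpt."""
    w = np.asarray(w)
    return np.log(len(w)) + np.sum(w * np.log(np.maximum(w, 1e-300)))

def resampling_step(pred_particles, covs, weights, threshold, rng=None):
    """Partial/full resampling. pred_particles holds F_{k+1}(x^i_k),
    covs the matrices P^i_{k+1|k} of eq. (22)."""
    rng = np.random.default_rng(rng)
    N, n = pred_particles.shape
    # Aggregate covariance P_{k+1|k} = sum_i w^i P^i + cov(F(x) | w)
    mean = weights @ pred_particles
    diff = pred_particles - mean
    spread = (weights[:, None] * diff).T @ diff
    P_agg = np.einsum('i,ijk->jk', weights, covs) + spread
    C = np.linalg.cholesky(P_agg)               # P_{k+1|k} = C C^T
    Cinv = np.linalg.inv(C)
    # h*^2 = min_i smallest eigenvalue of C^{-1} P^i (C^T)^{-1}
    h2 = min(np.linalg.eigvalsh(Cinv @ Pi @ Cinv.T)[0] for Pi in covs)
    if entropy_criterion(weights) < threshold:
        # Partial resampling: jitter each particle, keep its weight.
        new = np.array([rng.multivariate_normal(p, Pi - h2 * P_agg,
                                                check_valid='ignore')
                        for p, Pi in zip(pred_particles, covs)])
        return new, weights.copy(), h2, P_agg
    # Full resampling: select by weight, then jitter; the jitter
    # covariance h*^2 P_{k+1|k} is an assumption (text is cut off above).
    idx = rng.choice(N, size=N, p=weights)
    new = np.array([rng.multivariate_normal(pred_particles[j], h2 * P_agg,
                                            check_valid='ignore')
                    for j in idx])
    return new, np.full(N, 1.0 / N), h2, P_agg
```

By construction $h^{*2}$ makes every $P^i_{k+1|k} - h^{*2} P_{k+1|k}$ positive semidefinite, so the partial-resampling jitter is always a valid Gaussian covariance.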
