Robust Extended Kalman Filtering in Hybrid Positioning Applications

4th WORKSHOP ON POSITIONING, NAVIGATION AND COMMUNICATION 2007 (WPNC’07), HANNOVER, GERMANY

results of simulation tests are discussed in section V.

II. MODEL

The state of the system at time $t_k$ ($k = 1, 2, \ldots$) is modelled as a stochastic variable $x_k$. The evolution of the state in time is given by a stochastic difference equation

$x_{k+1} = g_k(x_k, w_k)$ (1)

and the initial state $x_0$ is assumed to be normally distributed with mean $\bar{x}_0$ and covariance matrix $P_0$. The measurements $y_k$ are related to the state by

$y_k = h_k(x_k, v_k).$ (2)

Here $w_k$ describes the noise in the state dynamics and $v_k$ the noise in the measurements. In this paper the noises are assumed to be white, zero-mean, mutually independent random processes, which are also independent of the initial state. Let $y_{1:k} = \{y_i,\, i = 1, \ldots, k\}$ be the measurements before time step $t_{k+1}$. The measurements used in this work are GPS pseudorange measurements and their derivatives, and base station range and altitude measurements. Because there is an unknown clock bias in the satellite measurements, difference measurements are used instead. One satellite is chosen as a reference and all the differences are formed with respect to it. Let $r_k$ denote the position of the user, $r_s$ the position of the satellite (base station), and $r_{s_0}$ that of the reference satellite. The range measurements are

$h_{k+1}(r_{k+1}) = \|r_s - r_{k+1}\|$ (3)

and the difference measurements

$h_{k+1}(r_{k+1}) = \|r_s - r_{k+1}\| - \|r_{s_0} - r_{k+1}\|.$ (4)
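To make (3) and (4) concrete, here is a minimal sketch of the two measurement functions; the NumPy setup and the function names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def range_measurement(r_s, r_user):
    # Eq. (3): geometric range from the satellite/base station
    # position r_s to the user position r_user.
    return np.linalg.norm(r_s - r_user)

def difference_measurement(r_s, r_s0, r_user):
    # Eq. (4): single-difference measurement. Subtracting the range to
    # the reference satellite r_s0 cancels the common clock bias.
    return np.linalg.norm(r_s - r_user) - np.linalg.norm(r_s0 - r_user)
```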

The problem now is to solve the conditional probability density function (cpdf) of the posterior state, that is, the cpdf of $x_k$ conditioned on $y_{1:k} = y_{1:k}$. Using the Bayes rule with the assumption of white noise sequences, the cpdf may be formulated as

$p(x_k \mid y_{1:k}) = \dfrac{p(x_k \mid y_{1:k-1})\, p(y_k \mid x_k)}{p(y_k \mid y_{1:k-1})}$ (5)

where $p(x_k \mid y_{1:k-1})$ is the prior density, i.e. the cpdf of $x_k$ conditioned on $y_{1:k-1} = y_{1:k-1}$, $p(y_k \mid x_k)$ is the likelihood function giving the probability of measurement $y_k$ given state $x_k$, and $p(y_k \mid y_{1:k-1})$ is the predicted measurement density. Deriving statistical quantities such as the mean and covariance from the cpdf (5) requires evaluating numerous multidimensional integrals with numerical integration methods. Grid-based methods and sequential Monte Carlo (particle filter) methods have been used successfully to calculate the posterior estimates. They, however, require a lot of computation and are not suitable for real-time positioning with today's mobile devices. In the next section, one approximate analytic solution, namely the extended Kalman filter, is introduced.


A. Extended Kalman Filter

The extended Kalman filter is an approximate analytic solution to (5) if the noises $w_k$ and $v_k$ are additive Gaussian noise sequences. The state and measurement functions are linearized according to

$G_k = \left.\dfrac{\partial g_k(x_k)}{\partial x_k}\right|_{x_k = \hat{x}_k}, \qquad H_k = \left.\dfrac{\partial h_k(x_k)}{\partial x_k}\right|_{x_k = \hat{x}_k^-}$ (6)
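Where an analytic Jacobian for (6) is inconvenient, a finite-difference approximation can stand in for it. The following is a hedged sketch only; the function name and the step size `eps` are my own assumptions.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    # Forward-difference approximation of the Jacobian of f at x,
    # usable for both G_k (f = g_k) and H_k (f = h_k) in eq. (6).
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - fx) / eps
    return J
```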

where $\hat{x}_k^-$ and $\hat{x}_k$ are the prior and posterior mean estimates at time step $t_k$. The system model becomes

$x_{k+1} = G_k x_k + w_k$ (7)

$y_k = H_k x_k + v_k$ (8)

Now the prior density is Gaussian with mean and covariance given as

$E(x_k \mid y_{1:k-1}) = \hat{x}_k^- = G_k \hat{x}_{k-1}$ (9)

$V(x_k \mid y_{1:k-1}) = \hat{P}_k^- = G_k \hat{P}_{k-1} G_k^T + Q_k$ (10)

where $Q_k$ is the covariance of $w_k$. The posterior mean may be shown to be

$E(x_k \mid y_{1:k}) = \hat{x}_k = \hat{x}_k^- + \hat{P}_k^- H_k^T\, s(y_k - H_k \hat{x}_k^-)$ (11)

where $s(y_k - H_k \hat{x}_k^-) = -\nabla \ln p(y_k \mid y_{1:k-1})$. Denoting the covariance of $v_k$ by $R_k$, the posterior mean becomes

$\hat{x}_k = \hat{x}_k^- + K_k (y_k - H_k \hat{x}_k^-)$ (12)

and the posterior covariance is given by

$\hat{P}_k = (I - K_k H_k) \hat{P}_k^-$ (13)

where $K_k = \hat{P}_k^- H_k^T (H_k \hat{P}_k^- H_k^T + R_k)^{-1}$ is the Kalman gain matrix. $R_k$ is assumed to be positive definite to make sure that the inverse exists.
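A minimal NumPy sketch of one prediction and update cycle implementing (9), (10), (12) and (13) may help make the recursion concrete; the function and argument names are illustrative assumptions, and the Jacobians `G` and `H` are assumed to have been evaluated at the estimates as in (6).

```python
import numpy as np

def ekf_predict(x_post, P_post, g, G, Q):
    # Eqs. (9)-(10): propagate the posterior mean through the state
    # function g and the covariance through the Jacobian G.
    x_prior = g(x_post)
    P_prior = G @ P_post @ G.T + Q
    return x_prior, P_prior

def ekf_update(x_prior, P_prior, y, h, H, R):
    # Eqs. (12)-(13): Kalman gain, mean correction, covariance update.
    S = H @ P_prior @ H.T + R                # innovation covariance; R positive definite
    K = P_prior @ H.T @ np.linalg.inv(S)     # Kalman gain K_k
    x_post = x_prior + K @ (y - h(x_prior))  # innovation with the nonlinear h at the prior mean
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```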

III. CLASSES OF DENSITIES

As seen in the previous section, Kalman filtering is based on the assumption of Gaussian measurement noise and a Gaussian prior distribution. As argued before, the strict assumption of Gaussian measurement noise may not be reasonable in hybrid positioning applications, and therefore some "more robust" densities are presented here.

Huber [2] proposed a game in which Nature chooses the distribution $F \in \mathcal{F}$, the engineer chooses the estimator $T \in \mathcal{T}$, and the asymptotic variance $V(T, F)$ is the pay-off to the engineer. The asymptotic variance of an estimator is the variance of the estimator as the sample size tends to infinity. It can be shown that for certain classes of distributions there exists a min-max solution $(T_0, F_0)$ such that

$\min_{T \in \mathcal{T}} \max_{F \in \mathcal{F}} V(T, F) = V(T_0, F_0) = \max_{F \in \mathcal{F}} \min_{T \in \mathcal{T}} V(T, F).$

$F_0$ is called the least favorable distribution of the class $\mathcal{F}$, and $T_0$ the min-max robust estimator, which is in fact the maximum likelihood estimator for the least favorable density $F_0$.
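For the classical case of an $\epsilon$-contaminated Gaussian class, Huber's result yields a min-max estimator whose score function is the Gaussian score clipped at a threshold. The sketch below illustrates only that influence function; the threshold value `k` and the function name are assumptions of mine, not quantities from this paper.

```python
import numpy as np

def huber_psi(residual, k=1.345):
    # Huber influence function: behaves like the Gaussian score (identity)
    # for small residuals and stays constant beyond +/- k, which bounds
    # the effect any single outlying measurement can have on the estimate.
    return np.clip(residual, -k, k)
```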
