
9. Combination of Solutions

9.2 Sequential Least-Squares Estimation

In this section we review the concept of sequential least-squares estimation (LSE) techniques. The result of an LSE using all observations in one step is identical to the result obtained by splitting the LSE into separate parts and combining them afterwards, as long as these parts are independent.

To prove the identity of both methods we first solve for the parameters according to the common one-step adjustment procedure. Thereafter, we verify that the same result is obtained using a sequential adjustment.

Let us start with the observation equations (for the notation we refer to Section 7.2):

\[
\begin{aligned}
  y_1 + v_1 &= A_1\,p_c \quad &\text{with}\quad D(y_1) &= \sigma_1^2\,P_1^{-1} \\
  y_2 + v_2 &= A_2\,p_c \quad &\text{with}\quad D(y_2) &= \sigma_2^2\,P_2^{-1} \,.
\end{aligned}
\tag{9.1}
\]
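To make the structure of Eqn. (9.1) concrete, the following minimal Python/NumPy sketch (not part of the Bernese software; all dimensions, design matrices, and weights are hypothetical toy values) simulates two independent observation series that both depend on the same parameter vector $p_c$. For simplicity a single variance factor is used for both series, so the weight matrices can later be combined directly as in Eqn. (9.2).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical common parameter vector p_c (3 parameters).
p_true = np.array([1.0, -2.0, 0.5])

# Two independent observation series, each with its own design matrix A_i
# and weight matrix P_i (toy values).
A1 = rng.normal(size=(10, 3))             # design matrix of series 1
A2 = rng.normal(size=(8, 3))              # design matrix of series 2
P1 = np.diag(rng.uniform(0.5, 2.0, 10))   # weight matrix of series 1
P2 = np.diag(rng.uniform(0.5, 2.0, 8))    # weight matrix of series 2
sigma = 0.01                              # common a-priori sigma of unit weight

# Simulated observations following Eqn. (9.1):
#   y_i + v_i = A_i p_c   with   D(y_i) = sigma^2 P_i^{-1}
y1 = A1 @ p_true + rng.multivariate_normal(np.zeros(10), sigma**2 * np.linalg.inv(P1))
y2 = A2 @ p_true + rng.multivariate_normal(np.zeros(8),  sigma**2 * np.linalg.inv(P2))
```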

In this case we divide the observation array $y_c$ (containing all observations) into two independent observation series $y_1$ and $y_2$. We would like to estimate the parameters $p_c$, common to both parts, using both observation series $y_1$ and $y_2$. We assume furthermore that there are no parameters which are relevant for only one of the individual observation series. This assumption is meaningful if we pre-eliminate "uninteresting" parameters according to Section 9.4.4. The proof of the equivalence of both methods is based on the assumption that both observation series are independent.

The division into two parts is sufficiently general: if both methods lead to the same result, formulae for additional subdivisions may be derived by assuming one observation series to be itself the result of an accumulation of different observation series.

9.2.1 Common Adjustment

In matrix notation we may write the observation equations (9.1) in the form:

\[
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
+
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=
\begin{pmatrix} A_1 \\ A_2 \end{pmatrix} p_c
\quad\text{with}\quad
D\!\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
= \sigma_c^2
\begin{pmatrix} P_1^{-1} & \emptyset \\ \emptyset & P_2^{-1} \end{pmatrix}
\tag{9.2}
\]

which is equivalent to

\[
y_c + v_c = A_c\,p_c \quad\text{with}\quad D(y_c) = \sigma_c^2\,P_c^{-1} \,.
\tag{9.3}
\]

The matrices $y_c$, $v_c$, $A_c$, $p_c$, and $P_c^{-1}$ may be obtained from the comparison of Eqn. (9.3) with Eqn. (9.2). The independence of both observation series is expressed by the special form of the dispersion matrix (zero off-diagonal blocks). Substituting the appropriate values for $y_c$, $A_c$, and $p_c$ into Eqn. (7.4) leads to the normal equation system of the LSE:

\[
\left( A_1^\top P_1 A_1 + A_2^\top P_2 A_2 \right) p_c
= A_1^\top P_1\,y_1 + A_2^\top P_2\,y_2 \,.
\tag{9.4}
\]
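The equivalence stated above can be checked numerically. The following self-contained Python/NumPy sketch (again a toy example with hypothetical values, repeating the setup of the sketch after Eqn. (9.1)) solves the stacked one-step system (9.2)/(9.3) and the combined normal equation system (9.4) and confirms that both routes yield the same estimate of $p_c$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Same toy setup as in the sketch after Eqn. (9.1): two independent series
# observing a common (hypothetical) parameter vector p_c.
p_true = np.array([1.0, -2.0, 0.5])
A1, A2 = rng.normal(size=(10, 3)), rng.normal(size=(8, 3))
P1 = np.diag(rng.uniform(0.5, 2.0, 10))
P2 = np.diag(rng.uniform(0.5, 2.0, 8))
sigma = 0.01
y1 = A1 @ p_true + rng.multivariate_normal(np.zeros(10), sigma**2 * np.linalg.inv(P1))
y2 = A2 @ p_true + rng.multivariate_normal(np.zeros(8),  sigma**2 * np.linalg.inv(P2))

# Common one-step adjustment of the stacked system, Eqns. (9.2)/(9.3).
A_c = np.vstack([A1, A2])
y_c = np.concatenate([y1, y2])
P_c = np.block([[P1, np.zeros((10, 8))],
                [np.zeros((8, 10)), P2]])          # block-diagonal weight matrix
p_common = np.linalg.solve(A_c.T @ P_c @ A_c, A_c.T @ P_c @ y_c)

# Combined normal equation system, Eqn. (9.4): the contributions of the
# two series are simply added on the normal-equation level.
N = A1.T @ P1 @ A1 + A2.T @ P2 @ A2
b = A1.T @ P1 @ y1 + A2.T @ P2 @ y2
p_combined = np.linalg.solve(N, b)

# Both routes give the same parameter estimate (up to round-off).
print(np.allclose(p_common, p_combined))           # True
```

In this illustration each observation series contributes only its normal equation matrix and right-hand side, which is the mechanism exploited by the sequential adjustment discussed below.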

