
MINIMAX RESULTS IN LINEAR LEAST SQUARES PREDICTION APPROACH TO TWO-STAGE SAMPLING

Thesis No. 397 (1981)

PH. EICHENBERGER

Presented to the Department of Mathematics
École Polytechnique Fédérale de Lausanne
(Switzerland)


C O N T E N T S

0. INTRODUCTION

I. LINEAR PREDICTORS OF MINIMAL VARIANCE
I.1 Introduction
I.2 Particular cases leading to equality of the estimators

II. LINEAR PREDICTORS FOR TWO-STAGE SAMPLING
II.1 Introduction
II.2 The predictor T̂(0, 1 | v(x), ρ, θ²)
II.3 The predictor T̂(1, 1 | v(x), ρ, θ²)
II.4 The particular case of T̂(0, 1 | x², ρ, θ²)

III. MINIMAX PREDICTORS FOR TWO-STAGE SAMPLING
III.1 Introduction
III.2 Relative precision of the predictors
III.3 Minimax predictors: one-dimensional case
III.4 Minimax predictors: general case

IV. MINIMAX DESIGN
IV.1 Introduction
IV.2 Unidimensional case: ρ = ρ·1
IV.3 Minimax design for the model E(Y) = β₀
IV.4 Minimax design for the model E(Y) = β₁ x
IV.5 Minimax design for the model E(Y) = β₀ + β₁ x

V. NUMERICAL ILLUSTRATION
V.1 Introduction
V.2 Type of system
V.3 Numerical evaluation of the minimax predictor

APPENDIX: PROGRAMS
A.1 General description
A.2 Input
A.3 Output
A.4 Program for solving system number 1
A.5 Program for solving system number 2
A.6 Program for solving system number 3

REFERENCES


0. INTRODUCTION

By the conventional assumption in the superpopulation approach to finite population sampling, the actual population vector y = (y₁, y₂, ..., y_M) is a realized outcome of the random vector Y = (Y₁, Y₂, ..., Y_M).

In order to estimate the total of the population, T = Σ_{i=1}^{M} y_i, a sample of m of the M components of Y is chosen for observation according to a nonrandom procedure, and thus all probability statements refer to the joint distribution of Y.

After the survey has been carried out, Y_k is known for the units in the sample and, in order to estimate T, it remains to predict T′, the sum of the random variables corresponding to unsampled units.

Suppose that the expected value of Y_i is a linear function of p auxiliary variables, the values of which are known for each unit of the population. In Chapter I the following model is used:

E(Y) = Xβ ,  var(Y) = V .

For this model, the best linear unbiased predictor T̂_GLS has already been given explicitly.


It must be noted that the BLU predictor T̂_GLS is based on the knowledge of the covariance matrix V. Therefore it would be interesting to find for which matrices V the predictor T̂_GLS is identical to the ordinary least squares predictor T̂_OLS, which is optimal when V = σ²I. The greatest part of Chapter I is devoted to this question.
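The GLS/OLS comparison can be sketched numerically. This is a minimal illustration, not one of the thesis's appendix programs: the data, design matrix and covariance matrix below are invented, and `predict_total` is a hypothetical helper that adds the observed sample total to the GLS prediction of the unsampled part.

```python
import numpy as np

def predict_total(y_s, X_s, X_r, V_ss):
    """GLS predictor of the population total: observed part plus the
    GLS-fitted prediction of the unsampled part (zero cross-covariance
    between sampled and unsampled units is assumed here)."""
    Vinv = np.linalg.inv(V_ss)
    beta = np.linalg.solve(X_s.T @ Vinv @ X_s, X_s.T @ Vinv @ y_s)
    return y_s.sum() + (X_r @ beta).sum()

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10), np.arange(10.0)])  # p = 2 auxiliaries
y = X @ np.array([2.0, 0.5]) + rng.normal(size=10)
X_s, X_r, y_s = X[:6], X[6:], y[:6]  # m = 6 sampled units

# When V = sigma^2 * I on the sample, GLS reduces to OLS:
# the scalar sigma^2 cancels in the normal equations.
t_gls = predict_total(y_s, X_s, X_r, 4.0 * np.eye(6))
t_ols = predict_total(y_s, X_s, X_r, np.eye(6))
assert np.isclose(t_gls, t_ols)
```

The assertion holds because for V = σ²I the factor σ² cancels from both sides of the GLS normal equations, so β̂ (and hence the predicted total) is the OLS one.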

From now on it is assumed that the finite population of interest consists of M elements arranged in N clusters. The i-th cluster contains M_i elements, so that Σ_{i=1}^{N} M_i = M.

Moreover, we consider for Y_ij the following model (M), where Y_ij (resp. x_ij) denotes the variable Y (resp. x) associated with the j-th unit of the i-th cluster and v(x) is a positive function.

In Chapter II, the predictors of T and their error-variances are given explicitly for p = 1 and δ = (0, 1)′ or (1, 1)′.


The best linear unbiased predictor of T is normally of limited practical value because of its dependence on variances and correlation coefficients, which are usually unknown. Two minimax approaches attempt to remedy these difficulties. Chapter III deals with minimax predictors.

Let us assume that v(x) and θ_i², i = 1, 2, ..., N, are known. Let T̂(ρ) be the BLU predictor of T for the model (M). For any ρ* in [0, 1]^n, consider the relative increase of variance due to the use of T̂(ρ*) instead of the BLU T̂(ρ) when the true correlations are ρ. Then it can be shown that the minimax predictor T̂(ρ*), where ρ* is a solution of the associated min-max problem, is obtained under some weak conditions by solving a system of at most n nonlinear equations.
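The min-max mechanics can be illustrated by a one-dimensional grid search. The function `rel_increase` below is a made-up stand-in for the thesis's relative variance increase, so only the structure of the computation, choosing ρ* to minimize the worst case over ρ, carries over; none of the numbers do.

```python
import numpy as np

def rel_increase(rho_star, rho):
    # Fabricated placeholder for the relative increase of variance
    # incurred by using T-hat(rho_star) when the truth is rho.
    return (rho_star - rho) ** 2 / (1.0 + rho)

grid = np.linspace(0.0, 1.0, 201)
# Minimax choice: the rho* minimizing the worst-case relative increase
worst = [max(rel_increase(rs, r) for r in grid) for rs in grid]
rho_star = grid[int(np.argmin(worst))]
```

For this toy criterion the worst case sits at an endpoint of [0, 1], and balancing the two endpoint losses gives ρ* = 1/(1 + √2) ≈ 0.414; the thesis instead solves a system of at most n nonlinear equations rather than searching a grid.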

Minimax samples are studied in Chapter IV.


Let T̂ be a linear unbiased predictor of T. Then, if the function v(x) is known and if ρ_i = ρ for i = 1, 2, ..., N, it is shown how to find the minimax sample, which satisfies

min_{s ∈ S}  max_{r₁ ≤ ρ ≤ r₂}  E_ρ(T̂ - T)² .

This procedure is then applied to the following models: E(Y_ij) = β₀, E(Y_ij) = β₁ x_ij and E(Y_ij) = β₀ + β₁ x_ij.
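A brute-force version of this sample search can be sketched as follows. The variance function is a fabricated placeholder for E_ρ(T̂ - T)² and the six x-values are invented, so the sketch only shows the enumerate-and-minimize structure, not the thesis's design criterion.

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # invented auxiliary values

def var_toy(sample, rho):
    # Made-up stand-in for E_rho(T_hat - T)^2: spread of the unsampled
    # units, inflated by the intra-cluster correlation rho.
    rest = [i for i in range(len(x)) if i not in sample]
    return (1.0 + rho) * float(np.sum(x[rest] ** 2))

def minimax_sample(m, r1, r2):
    """Enumerate all samples of size m and keep the one whose
    worst-case variance over rho in [r1, r2] is smallest."""
    rhos = np.linspace(r1, r2, 50)
    best = min(combinations(range(len(x)), m),
               key=lambda s: max(var_toy(s, rho) for rho in rhos))
    return set(best)

s_star = minimax_sample(3, 0.1, 0.9)  # picks the units with largest x
```

Because the toy variance grows with ρ, the worst case always occurs at r₂, and the search reduces to leaving the smallest x-values unsampled; the thesis characterizes the minimax sample analytically instead of enumerating.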

In the last chapter, two numerical examples are given about the results of Chapter III. The first one concerns the kind of system to be solved for the minimax predictor; the other one investigates numerically the quality of the minimax predictor on the basis of real data.
