
Economics 620, Lecture 16: Estimation of Simultaneous Equations Models

Nicholas M. Kiefer
Cornell University

Professor N. M. Kiefer (Cornell University), Lecture 16: SEM II


Consider $y_1 = Y_2\beta + X_1\gamma + \varepsilon_1$, which is an equation from a system. We can rewrite this as $y_1 = Z\delta + \varepsilon_1$, where $Z = [Y_2 \; X_1]$ and $\delta = [\beta' \; \gamma']'$. Note that $Y_2$ is jointly determined with $y_1$, so

$$\operatorname{plim}\,(1/N)\,Z'\varepsilon_1 \neq 0 \quad \text{(usually)}.$$

IV Estimation:

The point of IV estimation is to find a matrix of instruments $W$ so that

$$\operatorname{plim}\frac{W'\varepsilon_1}{N} = 0 \quad\text{and}\quad \operatorname{plim}\frac{W'Z}{N} = Q,$$

where $Q$ is nonsingular.


The IV estimator is $\hat\delta_{IV} = (W'Z)^{-1}W'y_1$. As in the lecture on dynamic models, multiplying the model by the transpose of the matrix of instruments yields $W'y_1 = W'Z\delta + W'\varepsilon_1$, which gives $\hat\delta_{IV}$.

Asymptotic distribution of $\hat\delta_{IV}$:

Note that $\hat\delta_{IV} - \delta = (W'Z)^{-1}W'\varepsilon_1$. Assume that

$$\frac{W'\varepsilon_1}{\sqrt{N}} \xrightarrow{d} N\!\left(0,\ \sigma^2\,\frac{W'W}{N}\right).$$

(Is this a sensible assumption? Recall the CLT.) Then

$$\sqrt{N}\,(\hat\delta_{IV} - \delta) \xrightarrow{d} N(0,\ \sigma^2 P)$$


where

$$P = N\,(W'Z)^{-1}\,W'W\,(Z'W)^{-1} = (1/N)\,Q^{-1}\,W'W\,Q'^{-1}.$$

The question is what to use for $W$. Suppose we use $X$. Multiplying by the transpose of the matrix of instruments gives $X'y_1 = X'Z\delta + X'\varepsilon_1$.

For this system of equations to have a solution, $X'Z$ has to be square and nonsingular. When is this possible?

Note the following dimensions: $X$ is $N \times K$, $X_1$ is $N \times K_1$, and $Y_2$ is $N \times (G_1 - 1)$. This, of course, requires $K = K_1 + G_1 - 1$.

(Recall the order condition: $K \geq K_1 + G_1 - 1$.)


Thus, the above procedure works when the equation is just identified. The resulting IV estimates are indirect least squares, which we saw last time.

Suppose $K < K_1 + G_1 - 1$. Then what happens? Consider the supply and demand example. This is the underidentified case.

Suppose $K > K_1 + G_1 - 1$. Then $X'y_1 = X'Z\delta + X'\varepsilon_1$ is $K$ equations in $K_1 + G_1 - 1$ unknowns (setting $X'\varepsilon_1$ to 0, which is its expectation). We could choose $K_1 + G_1 - 1$ equations to solve for $\delta$; there are many ways to do this, typically leading to different estimates. This is the overidentified case.


Another way to look at this case is as a regression model, with $K$ "observations" on the dependent variable. We could apply the LS method, but GLS is more efficient since $V(X'\varepsilon_1) = \sigma^2(X'X) \neq \sigma^2 I$. The observation matrices are $X'y_1$ and $X'Z$.

GLS gives the estimator

$$\hat\delta = [Z'X(X'X)^{-1}X'Z]^{-1}\,Z'X(X'X)^{-1}X'y_1.$$

In the just-identified case (where $X'Z$ is invertible),

$$\hat\delta = (X'Z)^{-1}X'X(Z'X)^{-1}\,Z'X(X'X)^{-1}X'y_1 = (X'Z)^{-1}X'y_1 = \hat\delta_{IV} \text{ with } W = X.$$


We will write out the expression for $\hat\delta$. With $M = X(X'X)^{-1}X'$, the GLS estimator is $\hat\delta = (Z'MZ)^{-1}Z'My_1$.

Now: $MY_2 = X(X'X)^{-1}X'Y_2 = \hat Y_2 = X\hat\Pi_2$, which is the LS predictor of $Y_2$.

$$Z'MZ = \begin{bmatrix} Y_2'MY_2 & Y_2'MX_1 \\ X_1'MY_2 & X_1'MX_1 \end{bmatrix}$$

Note that $X_1'MX_1 = X_1'X_1$: $R[X_1] \subset R[X] \Rightarrow MX_1 = X_1$, $MX = X$. Also: $Y_2'MY_2 = Y_2'M'MY_2 = \hat Y_2'\hat Y_2$.

So,

$$\hat\delta = \begin{bmatrix} \hat Y_2'\hat Y_2 & \hat Y_2'X_1 \\ X_1'\hat Y_2 & X_1'X_1 \end{bmatrix}^{-1} \begin{bmatrix} \hat Y_2'y_1 \\ X_1'y_1 \end{bmatrix}.$$


$\hat\delta$ is the coefficient vector from a regression of $y_1$ on $\hat Y_2$ and $X_1$. Interpretation as 2SLS? Interpretation as IV?

Proposition: 2SLS is IV estimation with $W = [\hat Y_2 \; X_1]$.

Proof: Note that

$$W'Z = \begin{bmatrix} \hat Y_2'Y_2 & \hat Y_2'X_1 \\ X_1'Y_2 & X_1'X_1 \end{bmatrix} = \begin{bmatrix} \hat Y_2'\hat Y_2 & \hat Y_2'X_1 \\ X_1'\hat Y_2 & X_1'X_1 \end{bmatrix}.$$

This is the matrix appearing inverted in $\hat\delta$.
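The proposition can be checked numerically: IV with instruments $[\hat Y_2 \; X_1]$ reproduces the second-stage regression coefficients exactly, not just asymptotically (a sketch on illustrative simulated data, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
X1, X2 = rng.normal(size=(N, 1)), rng.normal(size=(N, 1))
X = np.hstack([X1, X2])
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=N)
Y2 = X1 + 2.0 * X2 + e[:, [1]]
y1 = 0.5 * Y2 + X1 + e[:, [0]]
Z = np.hstack([Y2, X1])

# First-stage fitted values: Yhat2 = X (X'X)^{-1} X' Y2.
Y2hat = X @ np.linalg.lstsq(X, Y2, rcond=None)[0]

# IV with W = [Yhat2  X1] ...
W = np.hstack([Y2hat, X1])
delta_iv = np.linalg.solve(W.T @ Z, W.T @ y1)

# ... equals the regression of y1 on [Yhat2  X1] (2SLS),
# because Yhat2'Y2 = Yhat2'Yhat2 and X1'Y2 = X1'Yhat2... wait,
# X1'Yhat2 = X1'M Y2 = X1'Y2 since X1 lies in the column space of X.
delta_2sls = np.linalg.lstsq(W, y1, rcond=None)[0]

print(np.allclose(delta_iv, delta_2sls))   # True
```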


Asymptotic distribution of $\hat\delta$:

We know this from IV results. Note that $\hat\delta = \delta + (Z'MZ)^{-1}Z'M\varepsilon_1$. The asymptotic variance of $N^{1/2}(\hat\delta - \delta)$ is the asymptotic variance of $u = N^{1/2}(Z'MZ)^{-1}Z'M\varepsilon_1$.

$$\operatorname{Var}(u) = N\,\sigma^2\,(Z'MZ)^{-1}Z'MZ\,(Z'MZ)^{-1} = N\,\sigma^2\,(Z'MZ)^{-1}.$$

Remember to remove the $N$ in calculating the estimated variance for $\hat\delta$. (Why?)


Estimation of $\sigma^2$:

$$\hat\sigma^2 = (y_1 - Z\hat\delta)'(y_1 - Z\hat\delta)/N.$$

Note that $Z = [Y_2 \; X_1]$ appears in the expression for $\hat\sigma^2$, not $[\hat Y_2 \; X_1]$. If you regress $y_1$ on $\hat Y_2$ and $X_1$, you will get the right coefficients but the wrong standard errors.
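The wrong-standard-errors pitfall can be seen directly: on illustrative simulated data (with true error variance $\sigma^2 = 1$), forming residuals with the second-stage fitted values $[\hat Y_2 \; X_1]$ instead of the actual regressors $Z$ gives a noticeably different $\hat\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000
X1, X2 = rng.normal(size=(N, 1)), rng.normal(size=(N, 1))
X = np.hstack([X1, X2])
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=N)
Y2 = X1 + 2.0 * X2 + e[:, [1]]
y1 = 0.5 * Y2 + X1 + e[:, [0]]       # Var(eps1) = 1
Z = np.hstack([Y2, X1])

# 2SLS coefficients via the second-stage regression.
Y2hat = X @ np.linalg.lstsq(X, Y2, rcond=None)[0]
Zhat = np.hstack([Y2hat, X1])
delta = np.linalg.lstsq(Zhat, y1, rcond=None)[0]

# Correct: residuals built from the actual regressors Z = [Y2 X1] ...
sigma2_right = float(((y1 - Z @ delta) ** 2).sum()) / N
# ... not from the second-stage regressors [Yhat2 X1].
sigma2_wrong = float(((y1 - Zhat @ delta) ** 2).sum()) / N

print(sigma2_right, sigma2_wrong)    # the true error variance is 1.0
```

Here the naive second-stage residuals contain the first-stage error $Y_2 - \hat Y_2$ as well, so `sigma2_wrong` is inconsistent for $\sigma^2$.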


GEOMETRY OF 2SLS:

Take:
$N = 3$ (observations),
$K = 2$ (exogenous variables),
$K_1 = 1$ (included exogenous variables), and
$G_1 = 2$ (included endogenous variables; one is normalized).

How many parameters?




$\hat Y_2$ is in the plane spanned by $X_1$ and $X_2$. $y_1$ is projected onto the plane spanned by $\hat Y_2$ and $X_1$.

Note that $X_1$ and $X_2$, and $X_1$ and $\hat Y_2$, span the same plane. (Why?)

The model is just identified (the projection in both stages is onto the same plane).

What happens if the model is overidentified? (For example, $K_1 = 0$, that is, no included exogenous regressors.)

What if underidentified? (For example, $K_1 = 2$, that is, no excluded regressors.)
