
Formulas Useful for Linear Regression Analysis and Related Matrix ...


Index

  always nonnegative definite 10
  of the prediction error 32, 78
  partial covariance 26
crazy, still v
Cronbach's alpha 118

D
data matrix X_y = (X0 : y) 6
  best rank-k approximation 79
  keeping data as a theoretical distribution 8
decomposition  see eigenvalue decomposition, full rank decomposition, singular value decomposition
  Hartwig–Spindelböck 112
derivatives, matrix derivatives 120
determinant
  Σ22·1 25
  Laplace expansion 86
  recursive decomposition of det(S) 34
  recursive decomposition of det(Σ) 34
  Schur complement 87
DFBETAi 45
diagonalizability 103
discrete uniform distribution 23
discriminant function 79
disjointness
  C(A) ∩ C(B) = {0}, several conditions 84
  C(VH) ∩ C(VM) 64
  C(X(1)′) ∩ C(X(2)′) 47
  C(X) ∩ C(VM) 53
  C(X1) ∩ C(X2) 12
  estimability in outlier testing 46
distance  see Cook's distance, Mahalanobis distance, statistical distance
distribution  see Hotelling's T², normal distribution
  F-distribution 28
  t-distribution 28
  Bernoulli 23
  binomial 23
  chi-squared 27, 28
  discrete uniform 23
  Hotelling's T² 29
  of MHLN²(x, μ, Σ) 27
  of y′(H − J)y 28
  of y′(I − H)y 28
  of z′Az 27, 28
  of z′Σ⁻¹z 27
  Wishart distribution 28, 77
dominant eigenvalue 101, 112
Draper, Norman R. v
Durbin–Watson test statistic 36
Dürer's Melencolia I 113

E
Eckart–Young theorem
  non-square matrix 111
  symmetric matrix 105
efficiency of OLSE  see Watson efficiency, Bloomfield–Watson efficiency, Rao's efficiency
eigenvalue, dominant 101, 112
eigenvalues
  definition 100
  (λ, w) as an eigenpair for (A, B) 106
  ch(Σ22·1) 25
  ch(A, B) = ch(B⁻¹A) 106
  ch(A22·1) 100
  ch1(A, B) = ch1(B⁺A) 108
  |A − λI| = 0 100
  |A − λB| = 0 105
  nzch(A, B) = nzch(AB⁺) 108
  nzch(AB) = nzch(BA) 103
  algebraic multiplicity 103
  characteristic polynomial 100
  eigenspace 103
  geometric multiplicity 103
  intraclass correlation 101, 102
  of α²I + a1′ + 1a′ 105
  of cov(Xβ̃) 108
  of PA PB 82, 96, 97
  of PK PL PF 82
  spectral radius 101
elementary column/row operation 103
equality
  of BLUPs of y_f under two models 71
  of BLUPs under two mixed models 74
  of OLSE and BLUE, several conditions 58
  of OLSE and BLUE, subject to C(U) ⊂ C(X) 60
  of the BLUEs under two models 59
  of the OLSE and BLUE of β2 61
  β̂(M(i)) = β̃(M(i)) 66
  β̂2(M12) = β̃2(M12) 61
  β̃2(M12) = β̃2(M̄12) 67
  BLUP(y_f) = BLUE(X_f β) 70
  BLUP(y_f) = OLSE(X_f β) 70
  SSE(V) = SSE(I) 57
  Xβ̂ = Xβ̃ 58
  Ay = By 54
  P_{X;V1} = P_{X;V2} 59
estimability


  definition 51
  constraints 39
  contrast 52
  in hypothesis testing 37, 41
  of β 52
  of βk 52
  of δ in the extended model 46
  of δ in the extended model 44
  of K′β 51
  of X2β2 51
  of c′μ 52
extended (mean-shift) model 44, 46, 47

F
factor analysis 79
factor scores 80
finiteness matters 24
fitted values 12
frequency table 23
Frisch–Waugh–Lovell theorem 15, 22
  generalized 61
Frobenius inequality 84
full rank decomposition 92, 111

G
Galton's discovery of regression v
Gauss–Markov model  see linear model
Gauss–Markov theorem 53
generalized inverse
  the 4 Moore–Penrose conditions 90
  Banachiewicz–Schur form 93
  least-squares g-inverse 94
  minimum norm g-inverse 94
  partitioned nnd matrix 93
  reflexive g-inverse 91, 93
  through SVD 92
generalized variance 10, 11
generated observations from N2(0, Σ) 121

H
Hadi, Ali S. v
Hartwig–Spindelböck decomposition 112
Haslett, Stephen J. vi
hat matrix H 12
Henderson's mixed model equations 73
Hotelling's T² 28, 29, 77
  when n2 = 1 30
hypothesis
  β2 = 0 38
  βx = 0 31, 40
  δ = 0 in outlier testing 46
  μ = 0 29
  μ1 = μ2 30, 77
  β1 = ⋯ = βk = 0 40
  βi = 0 40
  k′β = 0 40
  δ = 0 in outlier testing 44
  μ1 = ⋯ = μg 41
  μ1 = μ2 42
  K′β = d 39, 40
  K′β = d when cov(y) = σ²V 40
  K′β = 0 37
  K′B = D 77
  overall F-value 40
  testable 41
  under intraclass correlation 59

I
idempotent matrix
  eigenvalues 97
  full rank decomposition 97
  rank = trace 97
  several conditions 97
illustration of SST = SSR + SSE 121
independence
  linear 9
  statistical 24
  ū and T = U′(I − J)U 29
  observations, rows 28, 29
  of dichotomous variables 24
  quadratic forms 27, 28
inequality
  chi(AA′) ≤ chi(AA′ + BB′) 103
  λn ≤ x′Ax/x′x ≤ λ1 104
  ‖Ax‖ ≤ ‖Ax + By‖ 93
  ‖Ax‖² ≤ ‖A‖²‖x‖² 110
  tr(AB) ≤ ch1(A) tr(B) 103
  Bloomfield–Watson–Knott 62
  Cauchy–Schwarz 116
  Frobenius 84
  Kantorovich 116
  Samuelson's 118
  Sylvester's 84
  Wielandt 117
inertia 115
intercept 14
interlacing theorem 103
intersection
  C(A) ∩ C(B) 84
  C(A) ∩ C(B), several properties 96
intraclass correlation
  F-test 59
  OLSE = BLUE 59
  eigenvalues 101, 102


invariance
  of C(AB⁻C) 91
  of AB⁻C 91
  of X′(V + XUX′)⁻X 50
  of r(AB⁻C) 91
inverse
  of An×n 86
  intraclass correlation 101
  of A = {aij} = {min(i, j)} 36
  of R 89
  of Rxx 17, 90
  of X′X 16, 88
  of Σ22·1 25
  of a partitioned block matrix 87
  of autocorrelation matrix 35
  of partitioned pd matrix 88
  Schur complement 87
irreducible matrix 112

K
Kantorovich inequality 116
Kronecker product 119
  in multivariate model 76
  in multivariate sample 29

L
Lagrangian multiplier 39
Laplace expansion of determinant 86
LaTeX vi
least squares  see OLSE
Lee, Alan J. v
Lehmann–Scheffé theorem 68
leverage hii 45
likelihood function 30
likelihood ratio 30
linear hypothesis  see hypothesis
linear independence 9
  collinearity 46
linear model 5
  multivariate 76
linear prediction sufficiency 72
linear zero function 54, 70
linearly complete 68
linearly sufficient  see sufficiency
Löwner ordering  see also minimizing
  definition 113
  Albert's theorem 114
  Y′MY ≤L (Y − XB)′(Y − XB) 22, 117

M
magic square 113
Magnus, Jan R. v
Mahalanobis distance
  MHLN²(ȳ1, ȳ2, S∗) 30, 77, 79
  MHLN²(u(i), ū, S) 8
  MHLN²(x, μ, Σ) 9, 27
  MHLN²(x∗, x̄, Sxx) 18
  MHLN²(x(i), x̄, Sxx) 45, 118
  MHLN²(μ1, μ2, Σ) 79
major/minor axis of the ellipse 25
matrix
  irreducible 112
  nonnegative 112
  permutation 112
  reducible 112
  shorted 115
  spanning C(A) ∩ C(B) 84
  stochastic 113
matrix RM
  PV Ṁ PV 48
  PW Ṁ PW = RM;W 49
matrix Ṁ = M(MVM)⁻M 48
  SSE(V) = y′Ṁy
matrix Ṁ1 56, 57
  M1(M1VM1)⁻M1 57, 99
matrix Ṁ1W = M1(M1W2M1)⁻M1 57
matrix angle 65
matrix of corrected sums of squares
  T = X_y′ C X_y 7
  Txx = X0′ C X0 7
maximizing
  tr(G′AG) subject to G′G = Ik 104
  (a′x)²/x′Bx 106
  (x′Ay)²/(x′Bx · y′Cy) 110
  (x′Ay)²/(x′x · y′y) 110
  [a′(μ1 − μ2)]²/a′Σa 79
  [a′(ū − μ0)]²/a′Sa 29
  [a′(ū1 − ū2)]²/a′S∗a 79
  [a′(u(i) − ū)]²/a′Sa 9
  ∠(Vz, z) 65
  cor(a′x, b′y) 80
  cor(y, b′x) 33
  cor_d(y, Xβ) 20
  cor_d(y, X0b) 33
  cos²(u, v), u ∈ A, v ∈ B 81
  (α′A′Bβ)²/(α′A′Aα · β′B′Bβ) 81
  ‖HVM‖²F 66
  u′u subject to u′Σ⁻¹u = c² 25
  var(b′z) 78
  x′Ax/x′Bx 106
  x′Ax/x′Bx, B nnd 108
  x′Hx/x′Vx 108


  H 12
  J 6
  M 12
  P2·1 97
  P_{C(A)∩C(B)} 96
  P_{C(A)∩C(B⊥)} 96
  P_{X;V⁻¹} = I − P′_{M;V} 49, 99
  commuting PA PB = PB PA 96
  decomposition of P_{(X1:X2);V⁺} 99
  decomposition of P_{(X1:X2);V⁻¹} 57, 99
  difference PA − PB 96
  sum PA + PB 95
orthogonal rotation 111
overall F-value 40

P
Pandora's Box
  BLUE 53, 55
  BLUP 70
  mixed model 72
partial correlation
  pcor_d(Y | X) 21
  ρij·x 26
  ρxy·z 26
  rxy·z 22
  decomposition of 1 − ρ²yx 34
  decomposition of 1 − R²y·12…k 21
  decomposition of 1 − R²y·12…k 89
  population 26
partial covariance 26
partitioned matrix
  g-inverse 93
  inverse 87
  MP-inverse 93
  nonnegative definiteness 114
pencil 106, 107
permutation
  determinant 86
  matrix 112
Perron–Frobenius theorem 112
Poincaré separation theorem 104
polar decomposition 111
predictable, unbiasedly 69
prediction error
  Gy − y_f 69
  y − BLP(y; x) 26, 32
  e_{i·1…i−1} 34
  Mahalanobis distance 18
  with a given x 18
prediction interval for y 19
principal component analysis 78
  matrix approximation 79
  predictive approach 78
  sample principal components 78
principal components
  from the SVD of X̃ 79
principal minor
  sum of all i × i principal minors 100
principal submatrix 86
projector
  P_{A|B} 98
  P_{X|VM} 53
  oblique 95, 97
proper eigenvalues
  ch(A, B) 107
  ch(GVG′, HVH′) 64
  ch(V, H) 108
PSTricks vi

Q
  see orthogonal projector
QR-decomposition 111
quadratic form
  E(z′Az) 27
  var(z′Az) 28
  distribution 27, 28
  independence 27, 28
quadratic risk 68

R
random sample without replacement 23
rank
  definition 9
  simple properties 82
  of (A : B) 83
  of (1 : X0) 9, 88
  of cov(Xβ̃) 55
  of AB 83
  of APB 85
  of A⁻ 92
  of HPV M 64
  of Txx 9, 12
  of X′(V + XUX′)⁻X 50
  of X2′M1X2 12
  of correlation matrix 9, 88
  of model matrix 9, 88
rank additivity
  r(A + B) = r(A) + r(B) 83
  Schur complement 87
rank cancellation rule 83
rank-subtractivity  see minus partial ordering
Rao's efficiency 67
Rayleigh quotient x′Ax/x′x 104
recursive decomposition
  of 1 − ρ²yx 34
  of 1 − R²y·12…k 21
  of 1 − R²y·12…k 89


  of det(R) 89
  of det(S) 34
  of det(Σ) 34
reduced model
  R²(M12·1) 21
  M12·1 61
  M1H 65
  AVP 22
  SSE(M12·1) 21
reducible matrix 112
reflection 112
regression coefficients 14
  old ones do not change 15
  standardized 16
regression function 27
regression towards mean v
relative efficiency of OLSE  see Watson efficiency
residual
  y − BLP(y; x) 26, 32
  e_{i·1…i−1} 34
  after elimination of X1 20
  externally Studentized 43, 46
  internally Studentized 43
  of BLUE 55, 56
  of OLSE 13, 43
  predicted residual 45
  scaled 43
residual mean square, MSE 19
residual sum of squares, SSE 19
rotation 111
  of observations 79

S
Samuelson's inequality 118
Schur complement 87
  determinant 87
  MP-inverse 93
  rank additivity 87
Schur's triangularization theorem 111
Seber, George A. F. v, vi
shorted matrix 115
similarity 103
Simon, Paul v
simultaneous diagonalization
  by a nonsingular matrix 107
  by an orthogonal matrix 107
  of commuting matrices 107
singular value decomposition 109
skew-symmetric: A′ = −A 1
Smith, Harry v
solution
  to A(X : VX⊥) = (X_f : V21X⊥) 70
  to AX = B 91
  to AXB = C 91
  to Ax = y 91
  to AXA = A 91
  to G(X : VM) = (X : 0) 53
  to X′Xβ = X′y 13
  to X(A : B) = (0 : B) 98
spectral radius 101, 112
spectrum 100, see also eigenvalues
square root of nnd A 101
standard error of β̂i 18
statistical distance 9
Stigler, Stephen M. v
stochastic matrix 113
stochastic restrictions 74
sufficiency
  linear 67
  linear prediction 72
  linearly minimal 68
sum of products SPxy = txy 8
sum of squares
  change in SSE 20, 37, 39
  change in SSE(V) 41
  predicted residual 45
  SSy = tyy 8
  SSE 19
  SSE under M12·1 21
  SSE, various representations 20
  SSR 19
  SST 19
  weighted SSE 40, 56
sum of squares and cubes of integers 23
Survo vi
Sylvester's inequality 84

T
theorem
  Albert 114
  Cochran 28
  Courant–Fischer 104
  Eckart–Young 105, 111
  Frisch–Waugh–Lovell 15, 22, 61
  Gauss–Markov 53
  interlacing 103
  Lehmann–Scheffé 68
  Perron–Frobenius 112
  Poincaré separation 104
  Schur's triangularization 111
  Wedderburn–Guttman 87
Thomas, Niels Peter vi
Tiritiri Island 122
total sum of squares, SST 19


transposition 86
Trenkler, Götz vi
triangular factorization 34
triangular matrix 34, 111

U
unbiased estimator of σ²
  σ̂² 19
  σ̂²(i) 43
  σ̃² 40, 41, 56
unbiasedly predictable 69

V
variable space 6
variable vector 6
variance
  var_s(y), var_d(y) 8
  var(x) = E(x − μx)² 3
  se²(β̂i) 18
  var(a′x) = a′Σa 10
  var(β̂i) 17
  var(ε̂i) 14
  of a dichotomous variable 23
  of prediction error with a given x 18
variance function 27
vec 119
  in multivariate model 76
  in multivariate sample 29
vector of means x̄ 6
Vehkalahti, Kimmo vi
VIF 17, 90
volume of the ellipsoid 38

W
Watson efficiency
  definition 62
  decomposition 66
  factorization 66
  lower limit 62
weakly singular linear model: C(X) ⊂ C(V) 53
Wedderburn–Guttman theorem 87
Weisberg, Sanford v
Wielandt inequality 117
Wilks's lambda 30
Wishart distribution  see distribution
Working–Hotelling confidence band 19
