
First Draft of the paper - University of Toronto

Table 2: Marginal Mean Estimated Type I Error Rates

| Sample Size | 50 | 100 | 250 | 500 | 1000 |
|---|---|---|---|---|---|
| Rate | 0.1908 | 0.2744 | 0.3946 | 0.4834 | 0.5651 |

| Correlation between ξ1 and ξ2 | 0.0 | 0.2 | 0.4 | 0.6 | 0.8 |
|---|---|---|---|---|---|
| Rate | 0.0500 | 0.1660 | 0.5154 | 0.5505 | 0.6262 |

| Proportion of variance explained by ξ1 | 0.25 | 0.50 | 0.75 |
|---|---|---|---|
| Rate | 0.2733 | 0.3847 | 0.4869 |

| Reliability of X1 | 0.50 | 0.75 | 0.80 | 0.90 | 0.95 |
|---|---|---|---|---|---|
| Rate | 0.6064 | 0.4698 | 0.4207 | 0.2669 | 0.1445 |

| Reliability of X2 | 0.50 | 0.75 | 0.80 | 0.90 | 0.95 |
|---|---|---|---|---|---|
| Rate | 0.3081 | 0.3751 | 0.3875 | 0.4125 | 0.4250 |

| Base distribution | Normal | Pareto | Student's t | Uniform |
|---|---|---|---|---|
| Rate | 0.3869 | 0.3690 | 0.3831 | 0.3875 |
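The mechanism behind rates like those in Table 2 can be reproduced with a small Monte Carlo. The sketch below is illustrative only: the data-generating model, the parameter values (`rho`, `reliability`, `beta1`, `n`), and the function name `type1_rate` are assumptions for demonstration, not the paper's exact simulation design. A latent ξ2 has no effect on Y (so H0: β2 = 0 is true), but because X1 is measured with error and ξ1 is correlated with ξ2, the naive regression of Y on X1 and X2 rejects H0 far more often than the nominal 0.05.

```python
# Monte Carlo sketch of Type I error inflation when X1 is measured with
# error and the latent predictors are correlated.  All parameter values
# here are illustrative assumptions, not the paper's configurations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def type1_rate(n=250, rho=0.4, reliability=0.75, beta1=1.0,
               n_sim=2000, alpha=0.05):
    """Estimate P(reject H0: beta2 = 0) when the true beta2 is zero."""
    rejections = 0
    cov = np.array([[1.0, rho], [rho, 1.0]])
    # Var(delta) chosen so that Var(xi1)/Var(X1) equals the reliability.
    var_delta = 1.0 / reliability - 1.0
    for _ in range(n_sim):
        xi = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        y = beta1 * xi[:, 0] + rng.standard_normal(n)   # true beta2 = 0
        x1 = xi[:, 0] + np.sqrt(var_delta) * rng.standard_normal(n)
        x2 = xi[:, 1]                                   # X2 error-free here
        X = np.column_stack([np.ones(n), x1, x2])
        b, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        df = n - X.shape[1]
        se = np.sqrt((rss[0] / df) * np.linalg.inv(X.T @ X)[2, 2])
        p = 2 * stats.t.sf(abs(b[2] / se), df)
        rejections += (p < alpha)
    return rejections / n_sim

print(type1_rate())   # substantially above the nominal 0.05
```

Setting `reliability=1.0` removes the measurement error, and the rejection rate falls back to roughly 0.05, which is the pattern the Reliability of X1 row of Table 2 displays.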

each value of γ2. For each data set, we fit Model (1) and tested H0: β2 = 0 at α = 0.05 with the usual F-test. Each test was classified as significant with β̂2 > 0, significant with β̂2 < 0, or nonsignificant.

Figure 2 shows the results. For substantial negative values of γ2, the null hypothesis H0: β2 = 0 is rejected at a high rate with β̂2 < 0, leading to the correct conclusion even though the model is wrong. As the value of γ2 increases, the proportion of significant tests decreases to near zero around γ2 = −0.76. Then for values of γ2 closer to zero (but still negative), the null hypothesis is increasingly rejected again, but this time with β̂2 > 0, leading to the conclusion of a positive relationship, when in fact the relationship was negative. This example shows how ignoring measurement error in the independent variables can lead to firm conclusions that are directly opposite to reality.

[Figure 2: Probability of Rejecting H0: β2 = 0, plotted against γ2 (from −1.0 to 0.0), with separate curves for rejections with β̂2 > 0 and with β̂2 < 0.]
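The classify-by-sign experiment behind Figure 2 can be sketched as follows. This is a minimal sketch under assumed conditions: the latent model, the correlation, the reliability, and the function name `reject_by_sign` are illustrative, so the crossover point will not match the paper's −0.76 exactly. The qualitative pattern is the one described above: strongly negative γ2 yields rejections with β̂2 < 0, while γ2 near zero yields rejections with β̂2 > 0.

```python
# Sketch of the Figure 2 experiment: for a given gamma2, test
# H0: beta2 = 0 and classify each rejection by the sign of beta2-hat.
# The data-generating model and parameter values are illustrative
# assumptions, not the paper's exact configuration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def reject_by_sign(gamma2, n=250, gamma1=1.0, rho=0.4,
                   reliability=0.75, n_sim=400, alpha=0.05):
    """Return (P(reject with b2 > 0), P(reject with b2 < 0))."""
    pos = neg = 0
    cov = np.array([[1.0, rho], [rho, 1.0]])
    sd_delta = np.sqrt(1.0 / reliability - 1.0)
    for _ in range(n_sim):
        xi = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        y = gamma1 * xi[:, 0] + gamma2 * xi[:, 1] + rng.standard_normal(n)
        x1 = xi[:, 0] + sd_delta * rng.standard_normal(n)  # error in X1
        X = np.column_stack([np.ones(n), x1, xi[:, 1]])
        b, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        df = n - 3
        se = np.sqrt((rss[0] / df) * np.linalg.inv(X.T @ X)[2, 2])
        p = 2 * stats.t.sf(abs(b[2] / se), df)
        if p < alpha:
            pos += b[2] > 0
            neg += b[2] < 0
    return pos / n_sim, neg / n_sim

for g2 in (-1.0, -0.5, -0.1, 0.0):
    print(g2, reject_by_sign(g2))
```

Sweeping `g2` over a finer grid and plotting the two proportions against γ2 reproduces the two-curve structure of Figure 2.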
