
First Draft of the paper - University of Toronto

level. The simulations were carried out exactly as for Table 4, so that the error terms in the latent regression are multiples of standardized variates from the base distribution. A sample size of n = 250 was employed for the mild parameter configuration, while n = 1000 was necessary for all three tests to have an approximate 0.05 Type I error rate with the severe parameter configuration. For each curve in Figure 3, ten thousand simulated data sets were generated for each of eleven equally spaced γ2 values, ranging from -0.5 to +0.5, and a cubic spline was fit to the points to produce smooth curves. Note that the likelihood ratio tests and Wald tests are based on a normal model for all the curves, even though only the data in the top panel are normal.

In Figure 3, we see that the shapes of the power curves depend substantially upon the parameter configuration, but very little upon the base distribution. The power curves of the three tests coincide almost exactly for the mild parameter configuration, and it is noteworthy that the distribution-free test based on weighted least squares does about as well as the likelihood ratio test with normal data.

For the severe parameter configuration, with its strongly correlated latent independent variables and substantial measurement error, the Wald and weighted least squares tests are biased; that is, the minimum probability of rejecting the null hypothesis occurs at a parameter value for which the null hypothesis is false. Compared to the normal Wald test, the weighted least squares test is clearly more powerful for the skewed and heavy-tailed data arising from a Pareto base distribution.

The likelihood ratio test is unbiased, but still the power curve is not symmetrical. There is a better chance of detecting the incorrectness of the null hypothesis for parameter values that are negative.
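The power-curve procedure described above can be sketched in outline. The following is a minimal illustration, not the authors' code: it substitutes a naive OLS t-test on the error-contaminated observables for the likelihood ratio, Wald, and weighted least squares tests, uses far fewer replications than the ten thousand reported, and all numerical settings (latent correlation 0.3, measurement error standard deviation 0.5, γ1 = 1) are hypothetical stand-ins rather than the paper's parameter configurations.

```python
import numpy as np
from scipy import stats
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

def rejection_rate(gamma2, n=250, n_sims=200, alpha=0.05):
    """Estimate the probability of rejecting H0: gamma2 = 0 in the latent
    regression y = gamma1*xi1 + gamma2*xi2 + eps, where xi1 and xi2 are
    observed only with additive measurement error (illustrative values)."""
    rejections = 0
    for _ in range(n_sims):
        xi1 = rng.standard_normal(n)
        # mildly correlated latent independent variables (corr = 0.3, assumed)
        xi2 = 0.3 * xi1 + np.sqrt(1 - 0.3**2) * rng.standard_normal(n)
        y = 1.0 * xi1 + gamma2 * xi2 + rng.standard_normal(n)
        x1 = xi1 + 0.5 * rng.standard_normal(n)  # observed with error
        x2 = xi2 + 0.5 * rng.standard_normal(n)
        # naive OLS t-test on the coefficient of x2 (stand-in for the
        # likelihood ratio / Wald / WLS tests compared in the paper)
        X = np.column_stack([np.ones(n), x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 3)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
        if 2 * stats.t.sf(abs(beta[2] / se), n - 3) < alpha:
            rejections += 1
    return rejections / n_sims

# eleven equally spaced gamma2 values from -0.5 to +0.5, as in Figure 3
gammas = np.linspace(-0.5, 0.5, 11)
power = np.array([rejection_rate(g) for g in gammas])
curve = CubicSpline(gammas, power)  # smooth curve through the points
```

Because the naive test ignores the measurement error, its rejection rate at γ2 = 0 will generally exceed the nominal 0.05 level, which is exactly the kind of Type I error inflation the paper's corrected tests are designed to avoid.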
Apart from inadmissibly low power for small negative values of γ2, Figure 3 provides little basis for choosing among the three tests. Their performance is equivalent for the mild parameter configuration, while for the severe parameter configuration the likelihood ratio test is more powerful against some alternatives and the Wald and weighted least squares tests against others. Recall from Tables 3 and 4, though, that the likelihood ratio test protects much better against Type I error. And protection against Type I error is primary, from both a theoretical and an applied point of view.

Thus, a likelihood ratio test based upon the assumption of a multivariate normal distribution appears to be practically superior to both the Wald and weighted least squares tests for the case we are examining, regardless of the distribution of the data. Of course, simulations are much better at establishing that something is wrong than they are at establishing that everything is okay. Still, our intuition is that likelihood ratio tests based on the normal model are likely to work well for measurement error regression models in general. Though there are plenty of available methods (for example, see Fuller, 1989), we still suggest that a normal likelihood ratio test should be the practitioner's first choice, regardless of the distribution of the data. It is particularly convenient that normal likelihood methods are available in all the commercial structural equation modelling software with which we are familiar, so it is easy to actually perform the kind of analysis we are recommending.

It is worth noting that the robustness we observe for the normal-theory likelihood ratio test under the marked kurtosis of the heavy-tailed t and Pareto distributions goes beyond what one would expect based on the literature (for example, Browne, 1984; Satorra and Bentler, 1990; Lee and Xia, 2006).
