Smooth Transitions, Neural Networks, and Linear Models
An appropriate null hypothesis is

\[
H_0 : \gamma_{h+1} = 0. \tag{11}
\]

Note that (10) is only identified under the alternative. Using a third-order expansion and after rearranging terms, the resulting model is

\[
y_t = \tilde{\beta}_0' z_t
+ \sum_{i=1}^{h} \tilde{\beta}_i' z_t \, F\!\left(\gamma_i\left(\omega_i' x_t - c_i\right)\right)
+ \sum_{i=1}^{q} \beta_i' z_t^{*} x_{it}
+ \sum_{i=1}^{q} \sum_{j=i}^{q} \beta_{ij}' z_t^{*} x_{it} x_{jt}
+ \sum_{i=1}^{q} \sum_{j=i}^{q} \sum_{l=j}^{q} \beta_{ijl}' z_t^{*} x_{it} x_{jt} x_{lt}
+ \varepsilon_t^{*},
\tag{12}
\]

where the error term ε*_t absorbs the remainder of the Taylor expansion. The null hypothesis is defined as H_0 : β_i = β_{ij} = β_{ijl} = 0, for i = 1, ..., q, j = i, ..., q, and l = j, ..., q. Again, standard asymptotic inference is available.

2.2.2 Parameter Estimation

After specifying the model, the parameters should be estimated by nonlinear least squares (NLS). Hence the parameter vector Ψ of (4) is estimated as

\[
\hat{\Psi} = \operatorname*{argmin}_{\Psi} Q_T(\Psi)
= \operatorname*{argmin}_{\Psi} \sum_{t=1}^{T} \bigl( y_t - f(z_t, x_t; \Psi) \bigr)^2,
\tag{13}
\]

where f(z_t, x_t; Ψ) denotes the conditional mean function defined by (4). Under some regularity conditions the estimates are consistent and asymptotically normal (Davidson and MacKinnon 1993).

The estimation procedure is carried out jointly with the test for the number of hidden units. First we test for linearity against a model given by (4) with h = 1. If linearity is rejected, we estimate the parameters of the nonlinear model and test for a second hidden unit. If that null hypothesis is also rejected, we use the estimated values for the first hidden unit as starting values and apply the procedure described in Medeiros and Veiga (2000a) to compute initial values for the second hidden unit. We proceed in this way until the first acceptance of the null hypothesis.
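Because (12) is linear in the β coefficients once the first h hidden units are held fixed, the test can be computed from standard regression output. The following is a minimal sketch of the F-version of the test in the linearity case (h = 0), under the assumption that the auxiliary regressors are the first-, second- and third-order cross-products of the elements of x_t interacted with z_t (taking z_t^* equal to z_t for simplicity); the function names, data layout, and the use of plain OLS are illustrative choices, not code from the paper.

```python
# Sketch of the F-version of the linearity test based on an auxiliary regression
# such as (12) with h = 0: compare the restricted regression of y_t on z_t with
# the unrestricted regression that adds the cross-product terms, and refer the
# F statistic to the F(m, T - k) distribution.
import numpy as np
from itertools import combinations_with_replacement
from scipy import stats


def cross_products(x, order):
    """All products x_{it} x_{jt} ... of the given order with i <= j <= ..."""
    cols = [np.prod(x[:, list(idx)], axis=1)
            for idx in combinations_with_replacement(range(x.shape[1]), order)]
    return np.column_stack(cols)


def interact(z, xprods):
    """Interact every cross-product column with every column of z (the beta' z_t^* terms)."""
    return np.column_stack([z * xprods[:, [k]] for k in range(xprods.shape[1])])


def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((y - X @ coef) ** 2)


def linearity_f_test(y, z, x):
    """F-test of H0: all cross-product coefficients in the auxiliary regression are zero.

    In practice, exactly collinear columns (e.g. when x_t is a subset of z_t)
    should be removed before computing the degrees of freedom.
    """
    T = y.shape[0]
    extra = np.column_stack([interact(z, cross_products(x, k)) for k in (1, 2, 3)])
    rss0 = rss(y, z)
    rss1 = rss(y, np.column_stack([z, extra]))
    m = extra.shape[1]                          # number of restrictions under H0
    dof = T - z.shape[1] - m                    # residual degrees of freedom
    f_stat = ((rss0 - rss1) / m) / (rss1 / dof)
    return f_stat, stats.f.sf(f_stat, m, dof)   # statistic and p-value
```

In the sequence described above, a rejection would be followed by NLS estimation of the one-hidden-unit model and a further test for a second unit.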
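The NLS step in (13) can be carried out with any general-purpose least-squares optimiser. The sketch below assumes, purely for illustration, a single-hidden-unit specification of the form y_t = β₀'z_t + β₁'z_t F(γ(ω'x_t − c)) + ε_t with a logistic F; the parameter packing, the skeleton function, and the starting values are assumptions, and in the sequential procedure the initial values for a newly added unit would instead come from the method of Medeiros and Veiga (2000a).

```python
# Sketch of the NLS estimation in (13) for an assumed one-hidden-unit specification.
import numpy as np
from scipy.optimize import least_squares


def unpack(psi, p, q):
    """Split the flat parameter vector into (b0, b1, gamma, omega, c)."""
    b0, b1 = psi[:p], psi[p:2 * p]
    gamma = psi[2 * p]
    omega = psi[2 * p + 1:2 * p + 1 + q]
    c = psi[-1]
    return b0, b1, gamma, omega, c


def skeleton(psi, z, x):
    """Conditional mean f(z_t, x_t; Psi) for the assumed one-unit specification."""
    b0, b1, gamma, omega, c = unpack(psi, z.shape[1], x.shape[1])
    F = 1.0 / (1.0 + np.exp(-gamma * (x @ omega - c)))   # logistic transition
    return z @ b0 + (z @ b1) * F


def estimate_nls(y, z, x, psi0):
    """Minimise Q_T(Psi) = sum_t (y_t - f(z_t, x_t; Psi))^2 as in (13)."""
    res = least_squares(lambda psi: y - skeleton(psi, z, x), psi0)
    return res.x, np.sum(res.fun ** 2)   # estimates and Q_T at the optimum


# Illustrative starting values: an OLS fit for b0, zeros for b1, gamma = 1,
# a normalised omega, and c = 0 could be stacked into psi0 before calling
# estimate_nls(y, z, x, psi0).
```

Reusing the estimates of the first h units as starting values when the (h+1)-unit model is estimated, as described above, matters because Q_T(Ψ) is generally not convex in Ψ.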