Chapter 13: Nonlinear and Multiple Regression

Forward Stepping:

Step 1: After fitting all 5 one-variable models, the model with $x_3$ had the t-ratio with the largest magnitude ($t = -4.82$). Because the absolute value of this t-ratio exceeds 2, $x_3$ was the first variable to enter the model.

Step 2: All 4 two-variable models that include $x_3$ were fit; that is, the models $\{x_3, x_1\}$, $\{x_3, x_2\}$, $\{x_3, x_4\}$, and $\{x_3, x_5\}$. Of these 4 models, the t-ratio 2.12 (for variable $x_5$) was largest in absolute value. Because this t-ratio exceeds 2, $x_5$ is the next variable to enter the model.

Step 3 (not printed): All possible three-variable models involving $x_3$, $x_5$, and one other predictor were fit. None of the t-ratios for the added variables have absolute values that exceed 2, so no more variables are added. There is nothing to print in this case, so the results of these tests are not shown.

Note: Both the forward and backward stepping methods arrived at the same final model, $\{x_3, x_5\}$, in this problem. This often happens, but not always; the different stepwise methods can arrive at slightly different collections of predictor variables. (A code sketch of the forward procedure appears after this section.)

61. If multicollinearity were present, at least one of the four $R^2$ values would be very close to 1, which is not the case. Therefore, we conclude that multicollinearity is not a problem in these data.

62. Looking at the $h_{ii}$ column and using $2(k+1)/n = 8/19 = .421$ as the criterion, three observations appear to have large influence. With $h_{ii}$ values of .712933, .516298, and .513214, observations 14, 15, and 16 correspond to response ($y$) values 22.8, 41.8, and 48.6.

63. We would need to investigate further the impact these two observations have on the equation. Removing observation #7 is reasonable, but removing #67 should be considered as well, before regressing again.

64.
a. $2(k+1)/n = 6/10 = .6$; since $h_{44} > .6$, data point #4 would appear to have large influence. (Note: Formulas involving matrix algebra appear in the first edition.)

b. For data point #2, $x_2' = (1,\ 3.453,\ -4.920)$, so

$$\hat{\beta} - \hat{\beta}_{(2)} = \frac{e_2}{1 - h_{22}}\,(X'X)^{-1}x_2 = \frac{-.766}{1 - .302}\,(X'X)^{-1}\begin{pmatrix} 1 \\ 3.453 \\ -4.920 \end{pmatrix} = -1.0974\begin{pmatrix} .3032 \\ .1644 \\ .1156 \end{pmatrix} = \begin{pmatrix} -.333 \\ -.180 \\ -.127 \end{pmatrix},$$

and similar calculations yield $\hat{\beta} - \hat{\beta}_{(4)} = (.106,\ -.040,\ .030)'$.
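
Although the software output is not reproduced here, the entry rule used in Steps 1-3 (add the candidate whose t-ratio is largest in magnitude, provided |t| exceeds 2) is easy to mimic. Below is a minimal pure-NumPy sketch of that forward procedure; the function names `t_ratio_of_last_column` and `forward_step` and the `t_enter=2.0` default are illustrative choices, not part of the original output, and a full stepwise routine would also consider dropping previously entered variables at each stage.

```python
import numpy as np

def t_ratio_of_last_column(X, y):
    """OLS fit of y on X (first column = intercept); return the
    t-ratio of the last (most recently added) column."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)           # residual mean square
    se = np.sqrt(s2 * np.diag(XtX_inv))    # standard errors of the coefficients
    return beta[-1] / se[-1]

def forward_step(X_cand, y, t_enter=2.0):
    """Greedy forward selection: at each step add the candidate whose
    t-ratio is largest in magnitude, provided |t| exceeds t_enter."""
    n, k = X_cand.shape
    selected, remaining = [], list(range(k))
    while remaining:
        def design(cols):
            return np.column_stack([np.ones(n)] + [X_cand[:, c] for c in cols])
        ts = {j: t_ratio_of_last_column(design(selected + [j]), y)
              for j in remaining}
        best = max(ts, key=lambda j: abs(ts[j]))
        if abs(ts[best]) <= t_enter:       # no candidate clears the threshold
            break
        selected.append(best)              # e.g. x3 first, then x5, as above
        remaining.remove(best)
    return selected                        # 0-based indices of entered predictors
```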

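Problem 61's check regresses each predictor on the remaining predictors and asks whether any of the resulting $R^2$ values is near 1 (equivalently, whether any variance inflation factor $1/(1-R^2_j)$ is large). A sketch of that computation, assuming a design matrix `X` whose columns are the predictors only; the function name `r_squared_each_on_rest` is mine:

```python
import numpy as np

def r_squared_each_on_rest(X):
    """Regress each predictor (column of X) on the remaining predictors,
    with an intercept, and return the R^2 values; any value near 1
    signals multicollinearity."""
    n, k = X.shape
    r2 = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2[j] = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return r2  # VIF_j = 1 / (1 - r2[j])
```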
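
For problems 62 and 64(a), the diagnostic is the hat-matrix diagonal $h_{ii}$ compared against the rule-of-thumb cutoff $2(k+1)/n$. A small sketch, assuming `X` already contains the intercept column; `flag_high_leverage` is an illustrative name:

```python
import numpy as np

def flag_high_leverage(X):
    """Hat-matrix diagonals h_ii for a design matrix X that already
    includes the intercept column, flagged against 2(k+1)/n."""
    n, p = X.shape                 # p = k + 1 (k predictors plus intercept)
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
    cutoff = 2 * p / n             # 8/19 = .421 in problem 62; 6/10 = .6 in 64(a)
    return h, cutoff, np.where(h > cutoff)[0]
```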

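The identity used in 64(b), $\hat{\beta} - \hat{\beta}_{(i)} = \frac{e_i}{1 - h_{ii}}(X'X)^{-1}x_i$, can be evaluated without refitting the model with observation $i$ removed. A sketch under the same assumptions as above (the problem's data are not reproduced here, so any call to this function is hypothetical):

```python
import numpy as np

def coef_change_if_deleted(X, y, i):
    """beta_hat - beta_hat_(i): change in the OLS coefficient vector when
    observation i is deleted, via e_i / (1 - h_ii) * (X'X)^{-1} x_i."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    x_i = X[i]
    e_i = y[i] - x_i @ beta        # ordinary residual, e.g. -.766 for point #2
    h_ii = x_i @ XtX_inv @ x_i     # leverage, e.g. .302 for point #2
    return (e_i / (1 - h_ii)) * (XtX_inv @ x_i)
```

With $e_2 = -.766$ and $h_{22} = .302$, the scalar multiplier is $-.766/.698 \approx -1.0974$, reproducing the vector $(-.333, -.180, -.127)'$ obtained above.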
