Modeling and Multivariate Methods - SAS

Chapter 16 Performing Choice Modeling

• Select Profiler from the drop-down menu to obtain the results shown in Figure 16.28. Notice that the parameter estimates and the likelihood ratio test results are identical to the results obtained for the Choice Model with only two tables, shown in Figure 16.6 and Figure 16.7.

Figure 16.28 Prediction Profiler for Pizza Data One-Table Analysis

Segmentation

Market researchers sometimes want to analyze the preference structure for each subject separately in order to see whether there are groups of subjects that behave differently. However, there are usually not enough data to do this with ordinary estimates. If there are sufficient data, you can specify By groups in the Response Data, or you can introduce a Subject identifier as a subject-side model term. This approach, however, is costly if the number of subjects is large. Other segmentation techniques discussed in the literature include Bayesian and mixture methods.

You can also use JMP to segment by clustering subjects using response data. For example, after running the model using the Pizza Profiles.jmp, Pizza Responses.jmp, and the optional Pizza Subjects.jmp data sets, click on the drop-down menu for the Choice Model platform and select Save Gradients by Subject. A new data table is created containing the average Hessian-scaled gradient on each parameter, with one row for each subject.

Note: This feature is regarded as an experimental method, since little research has been conducted on its effectiveness in practice.
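The idea behind clustering the saved gradient rows can be sketched outside of JMP. The following Python sketch uses a hand-made table of two gradient columns (hypothetical values, not output from the pizza study) and a minimal k-means loop to split the subjects into two groups; in practice you would hand the saved gradient table to JMP's own clustering platform.

```python
# Hypothetical sketch: after Save Gradients by Subject produces one row
# of gradient values per subject, those rows can be clustered to look
# for segments. The gradient values below are made up for illustration.

gradients = {
    "s1": (0.9, 0.8),
    "s2": (1.1, 0.7),
    "s3": (-1.0, -0.9),
    "s4": (-0.8, -1.1),
}

def kmeans2(points, iters=10):
    """Minimal 2-cluster k-means over 2-column rows (pure Python)."""
    names = list(points)
    # Crude initialization: first and third subjects as starting centers.
    centers = [points[names[0]], points[names[2]]]
    groups = {0: [], 1: []}
    for _ in range(iters):
        groups = {0: [], 1: []}
        for name in names:
            p = points[name]
            # Squared Euclidean distance to each center.
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(name)
        for k, members in groups.items():
            if members:  # recompute each center as the group mean
                centers[k] = tuple(
                    sum(points[m][j] for m in members) / len(members)
                    for j in range(2)
                )
    return groups

print(kmeans2(gradients))  # {0: ['s1', 's2'], 1: ['s3', 's4']}
```

Subjects whose choices the pooled model fits poorly have gradients far from zero, so subjects that deviate in the same direction land in the same cluster.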

These gradient values are the subject-aggregated Newton-Raphson steps from the optimization used to produce the estimates. At the estimates, the total gradient is zero, and

Δ = H⁻¹g = 0

where g is the total gradient of the log-likelihood evaluated at the MLE, and H⁻¹ is the inverse Hessian, that is, the inverse of the negative of the matrix of second partial derivatives of the log-likelihood.
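This relationship can be checked numerically. The sketch below (assumed toy data, not the pizza study) fits a one-parameter logit by Newton-Raphson and verifies that at the MLE the total gradient g is essentially zero, so the step Δ = H⁻¹g is also zero; here H is the negative second derivative of the log-likelihood, as in the definition above.

```python
import math

# Toy data for illustration: (x, y) pairs, y = 1 if the alternative
# with attribute difference x was chosen. Not separable, so the MLE
# is finite.
data = [(1.0, 1), (0.5, 1), (0.5, 0), (-0.5, 0), (-1.0, 0), (1.5, 1)]

def grad_hess(beta):
    """Gradient g and negative second derivative H of the logit
    log-likelihood at beta."""
    g = h = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-beta * x))
        g += (y - p) * x             # score: d(logL)/dbeta
        h += p * (1.0 - p) * x * x   # H = -d2(logL)/dbeta2
    return g, h

beta = 0.0
for _ in range(25):
    g, h = grad_hess(beta)
    beta += g / h                    # Newton step: Delta = H^{-1} g

g, h = grad_hess(beta)
print(abs(g) < 1e-8, abs(g / h) < 1e-8)  # True True: g = 0 and
                                         # Delta = 0 at the MLE
```

By the same logic, each subject's saved value is that subject's share of the Newton step: the per-subject gradients need not be zero individually, only in total.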
