Regressions with Latent Mixtures

… in a Normal example.) Finite mixture regressions introduce covariates either into the determination of the latent class indicators or into the relation between the mean of subject $i$ in latent class $j$, $\mu_{ij}$, and that subject's attribute profile. Mixture regressions have been applied to modelling the behaviour or attitudes of human subjects, so that each individual's overall mean is determined by their membership probabilities (Wedel et al., 1993).

Thus, for univariate Normal observations $y_i$ and a $p$-dimensional vector of predictors $X_i$, define latent indicators $z_i$ of class membership among possible classes $j = 1, \ldots, J$. Were the indicators known,

$$y_i \mid z_i = j \sim N(\beta_j X_i, \tau_j) \qquad (3.20)$$

where $\beta_j$ is a class-specific regression vector of length $p$ and $\tau_j$ is the conditional variance. If $\lambda_{ij} = \Pr(z_i = j)$, then the overall mean for subject $i$ is

$$\lambda_{i1}\mu_{i1} + \lambda_{i2}\mu_{i2} + \cdots + \lambda_{iJ}\mu_{iJ}$$

where $\mu_{ij} = \beta_j X_i$. The indicators $z_i$ may be sampled from a multinomial without additional covariates, so that the multinomial has parameters $\lambda_{ij} = \lambda_j$. Alternatively, an additional regression models the $z_i$ as functions of covariates $W_i$, so that the multinomial is defined by parameters

$$\lambda_{ij} = \frac{\exp(\phi_j W_i)}{1 + \sum_{k=2}^{J} \exp(\phi_k W_i)} \qquad j > 1$$

as in Section 3.3.1.

Several applications of regression mixtures have been reported in consumer choice settings. Jones and McLachlan (1992) consider metric outcomes $y$, namely consumer preference scales for different goods, which are related to product attributes (appearance, texture, etc.). They find sub-populations of consumers differing in the weight they attach to each attribute. Here the multinomial logit regressions for $z_i$ might involve covariates $W$ such as consumers' age, class, or type of area of residence, while the modelling of the $y_i$ might involve covariates $X$ describing the quality or price of goods. Binomial, ordinal or multinomial mixture regressions have utility both in representing departures from the baseline model assumptions (e.g. overdispersion) and in allowing differential regression slopes between sub-populations (Cameron and Trivedi, 1986). For example, Wedel et al. (1993) argue for a latent class mixture in an application involving Poisson counts $y$, both because of its advantage in modelling differential purchasing profiles among customers of a direct marketing company and because of its potential for modelling over-dispersion relative to the Poisson assumption.

As mentioned in Chapter 2, there are the usual problems in Bayesian analysis (as in frequentist analysis) concerning the appropriate number of components. Additionally, Bayesian sampling estimation may face the problems of empty classes at one or more iterations (e.g. no subjects are classified in the second of $J = 3$ groups) and the switching of labels unless the priors are constrained. On the other hand, the introduction of predictors provides additional information that may improve identifiability. To counter label switching we might apply a constraint to one or more of the intercepts, regression coefficients, variance parameters, or mixture proportions that ensures a consistent labelling. In some situations one may be able to specify informative priors consistent with widely separated, but internally homogeneous, groups (Nobile and Green, 2000). These ensure (a) that different groups are widely separated; for instance, a prior on intercepts $\beta_{0j}$ when $J = 2$ might be $\beta_{01} \sim N(-5, 2)$, $\beta_{02} \sim N(0, 2)$, effectively ensuring …
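To make the sampling scheme implied by (3.20) concrete, the following is a minimal sketch (not from the text) of a Gibbs sampler for a $J$-component Normal mixture regression with constant membership probabilities $\lambda_j$ (i.e. without covariates $W_i$ in the multinomial stage). It assumes conjugate priors chosen for illustration only: $\beta_j \sim N(0, cI)$, inverse-gamma priors on the class variances, and a Dirichlet prior on $\lambda$. The function name and prior settings are hypothetical, and the relabelling step uses ordered intercepts, one of the constraints against label switching discussed above.

```python
# Minimal illustrative Gibbs sampler for a J-component Normal mixture regression.
# Assumptions (not from the source): beta_j ~ N(0, c*I), sigma_j^2 ~ Inv-Gamma(a0, b0),
# lambda ~ Dirichlet(alpha, ..., alpha); the first column of X is a constant (intercept).
import numpy as np

def gibbs_mixture_regression(y, X, J=2, n_iter=2000, c=100.0,
                             a0=2.0, b0=1.0, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros((J, p))          # class-specific regression vectors beta_j
    sigma2 = np.ones(J)              # class-specific conditional variances
    lam = np.full(J, 1.0 / J)        # mixture proportions lambda_j
    draws = {"beta": [], "sigma2": [], "lam": []}

    for _ in range(n_iter):
        # 1. Sample latent indicators z_i from their full conditional
        mu = X @ beta.T                                    # n x J matrix of mu_ij
        logp = (np.log(lam) - 0.5 * np.log(2 * np.pi * sigma2)
                - 0.5 * (y[:, None] - mu) ** 2 / sigma2)
        logp -= logp.max(axis=1, keepdims=True)
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(J, p=prob[i]) for i in range(n)])

        # 2. Sample mixture proportions from Dirichlet(alpha + class counts)
        counts = np.bincount(z, minlength=J)
        lam = rng.dirichlet(alpha + counts)

        # 3. Sample beta_j and sigma_j^2 for each component (conjugate updates)
        for j in range(J):
            idx = z == j
            Xj, yj = X[idx], y[idx]
            V = np.linalg.inv(Xj.T @ Xj / sigma2[j] + np.eye(p) / c)
            m = V @ (Xj.T @ yj) / sigma2[j]
            beta[j] = rng.multivariate_normal(m, V)
            resid = yj - Xj @ beta[j]
            sigma2[j] = 1.0 / rng.gamma(a0 + 0.5 * idx.sum(),
                                        1.0 / (b0 + 0.5 * resid @ resid))

        # 4. Relabel so the intercepts are in increasing order, keeping the
        #    labelling consistent across iterations (counters label switching)
        order = np.argsort(beta[:, 0])
        beta, sigma2, lam = beta[order], sigma2[order], lam[order]

        draws["beta"].append(beta.copy())
        draws["sigma2"].append(sigma2.copy())
        draws["lam"].append(lam.copy())
    return draws
```

Note that when a class is empty at some iteration, step 3 simply draws $\beta_j$ and $\sigma_j^2$ from their priors, which is one practical face of the empty-class problem mentioned above.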

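A hypothetical usage on simulated data, echoing the widely separated intercepts (around −5 and 0) used in the prior illustration above:

```python
# Hypothetical usage: two groups with intercepts near -5 and 0 (simulated data,
# not from the text), fitted with the sketch above.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one predictor
z_true = rng.integers(0, 2, size=n)
beta_true = np.array([[-5.0, 1.0], [0.0, -1.0]])
y = (X * beta_true[z_true]).sum(axis=1) + rng.normal(scale=0.5, size=n)

draws = gibbs_mixture_regression(y, X, J=2, n_iter=1000)
print(np.mean(draws["beta"][500:], axis=0))   # posterior means after burn-in
```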