
Semiparametric Bayesian Inference for Multilevel Repeated Measurement Data

Peter Müller¹, Fernando A. Quintana², and Gary L. Rosner¹

¹ Department of Biostatistics & Applied Mathematics, The University of Texas, M. D. Anderson Cancer Center, Box 447, 1515 Holcombe Boulevard, Houston, Texas 77030, U.S.A.
² Departamento de Estadística, Facultad de Matemáticas, Pontificia Universidad Católica de Chile, Casilla 306, Correo 22, Santiago, CHILE

Abstract

We discuss inference for data with repeated measurements at multiple levels. The motivating example is a data set of blood counts from cancer patients undergoing multiple cycles of chemotherapy, with days nested within cycles. Some inference questions relate to repeated measurements over days within a cycle, while other questions concern the dependence across cycles. When the desired inference relates to both levels of repetition, it becomes important to reflect the data structure in the model. We develop a semi-parametric Bayesian modeling approach, restricting attention to two levels of repeated measurements. For the top-level longitudinal sampling model we use random effects to introduce the desired dependence across repeated measurements. We use a non-parametric prior for the random effects distribution. Inference about dependence across second-level repetition is implemented essentially by the clustering implied in the non-parametric random effects model. Practical use of the model requires that the posterior distribution on the latent random effects be reasonably precise.

1 Introduction

We consider semiparametric Bayesian inference for data with repeated measurements at multiple levels. The motivating data are blood count measurements for chemotherapy patients over multiple courses of chemotherapy. In earlier papers (Müller and Rosner, 1997; Müller et al., 2004), we considered inference for the first course of chemotherapy only. Naturally, such data do not allow inference about changes between cycles. In clinical practice, however, cancer patients receive chemotherapy over multiple courses or cycles of predetermined duration. These courses of therapy typically consist of a period during which the patient receives active drug therapy, followed by a no-drug period to allow the patient to recover for the next round of chemotherapy. Often some aspect of the treatment protocol is intended to mitigate deterioration of the patient's performance across repeated treatment cycles. Inference related to such aspects of the treatment involves a comparison across cycles, which requires modeling of the entire data set, including data from later cycles. In this extended data set, repeated measurements occur at two levels. Each patient receives multiple cycles of chemotherapy, and within each cycle, measurements are recorded over time. Another typical example of this data structure is drug concentration measurements over repeated dosing in pharmacokinetic studies.

A standard parametric approach would base inference on conjugate distributions for the sampling model, hierarchical priors, and random effects distributions. Bayesian inference for such multilevel hierarchical models is reviewed, among many others, in Goldstein et al. (2002), who also discuss software for commonly used parametric models. Browne et al. (2002) compare Bayesian and likelihood-based methods.

The proposed semi-parametric Bayesian inference replaces traditional normal random effects distributions with nonparametric Bayesian models. Nonparametric Bayesian random effects distributions in mixed-effects models were first introduced in Bush and MacEachern (1996). Applications to longitudinal data models are developed in Kleinman and Ibrahim (1998a), Müller and Rosner (1997), and Walker and Wakefield (1998), among many others. Kleinman and Ibrahim (1998b) extend the approach to allow binary outcomes, using generalized linear models for the top-level likelihood. In each of these papers, the authors use variations of Dirichlet process (DP) models to define flexible nonparametric models for an unknown random effects distribution. The DP was introduced as a prior probability model for random probability measures in Ferguson (1973) and Antoniak (1974). See these papers for basic properties of the DP model. A recent review of semiparametric Bayesian inference based on DP models appears in Müller and Quintana (2004).

The rest of this article is organized as follows. In Section 2, we introduce the proposed sampling model. In Section 3, we focus on the next level of the hierarchy by proposing suitable models to represent and allow learning about dependence at the second level of the hierarchy. Section 4 discusses implementation of posterior simulation in the proposed model. Section 5 contains inference for the application that motivated this discussion. A final discussion section concludes the article.


2 First-Level Repeated Measurement Model

In our semi-parametric Bayesian model representing repeated measurements at different levels of a hierarchy, the hierarchy follows the structure of the data. The key elements of the proposed approach are as follows. We consider two nested levels of measurement units, with each level giving rise to a repeated measurement structure. Assume data y_ijk are recorded at times k, k = 1, . . . , n_ij, for units j, j = 1, . . . , n_i, nested within higher-level units i, i = 1, . . . , n. We will refer to the experimental units i as "subjects" and to the experimental units j as "cycles" to simplify the following discussion and remind us of the motivating application.

We start by modeling dependence of the repeated measurements within a cycle, y_ij = (y_ijk, k = 1, . . . , n_ij):

$$ y_{ij} \sim p(y_{ij} \mid \theta_{ij}, \eta). \qquad (1) $$

We assume p(y_ij | θ_ij, η) to be a nonlinear regression parametrized by cycle-specific random effects θ_ij. Here and throughout the following discussion, η are hyperparameters common across subjects i and cycles j. Figure 1 shows typical examples of continuous outcomes y_ijk, measured over multiple cycles, with repeated measurements within each cycle.

Figure 1: Repeated measurements over time (DAY) and cycles. Each panel shows data for one patient (PAT 2, PAT 13, PAT 15, PAT 17). Within each panel, the curves labeled 1, 2, and 3 show profiles for the first, second and third cycle of chemotherapy (only two cycles are recorded for patients 15 and 17). The curves show posterior estimated fitted profiles. The observed data are indicated by dots "•".

We define dependence within each cycle by assuming that observations arise according to some underlying mean function plus independent residuals,

$$ y_{ijk} = f(t_{ijk}; \theta_{ij}) + e_{ijk}. \qquad (2) $$

Here f is a nonlinear regression with parameters θ_ij, and the e_ijk are assumed to be i.i.d. N(0, σ²) errors. Marginalizing with respect to θ_ij, model (2) defines a dependent probability model for y_ij. This use of random effects to introduce dependence in models for repeated measurement data is common practice. The choice of f(·; θ) is problem-specific. In the implementation reported later, we use a piecewise linear-linear-logistic function. In the absence of more specific information, we suggest the use of generic smoothing functions, such as spline functions (Denison et al., 2002).
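For illustration, the following minimal Python sketch simulates one cycle from model (2). The mean function, parameter values, and sampling grid are hypothetical placeholders; the application in Section 5 uses the piecewise linear-logistic curve (9) instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(t, theta):
    # Placeholder nonlinear mean function with parameters theta;
    # the application in Section 5 uses the curve (9) instead.
    z1, z2, beta = theta
    return z1 + z2 / (1.0 + np.exp(-beta * (t - 10.0)))

sigma = 0.1                              # residual s.d. in (2); hypothetical
t_ij = np.arange(0.0, 22.0, 2.0)         # days within one cycle; hypothetical
theta_ij = np.array([-1.5, 2.5, 0.6])    # cycle-specific random effects
y_ij = f(t_ij, theta_ij) + rng.normal(0.0, sigma, size=t_ij.size)  # model (2)
```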


3 Second-Level Repeated Measurement Model

We introduce a dependent random effects distribution on (θ_i1, . . . , θ_in_i) to induce dependence across cycles. We proceed with the most general approach, leaving the nature of the dependence unconstrained. We achieve this by considering a non-parametric prior for the joint distribution p(θ_i1, . . . , θ_in_i). We use a non-parametric mixture prior; see the details below. In this model, inference about dependence of the θ_ij is essentially driven by the empirical distribution of the imputed cycle-specific random effects θ_ij. The approach works well for continuous responses with a non-linear regression model (2), assuming the residual variance is small enough to leave little posterior uncertainty for the θ_ij.

We define a random effects distribution for θ_i = (θ_i1, . . . , θ_in_i) as a mixture model H(θ_i) ≡ p(θ_i | G, η) = ∫ p(θ_i | µ, η) dG(µ), with a non-parametric prior on the mixing measure G. For later reference, we introduce the notation H(·) for the mixture model. As usual in mixture models, posterior inference proceeds with an equivalent hierarchical model:

$$ \theta_i \sim p(\theta_i \mid \mu_i, \eta) \quad \text{and} \quad \mu_i \sim G. \qquad (3) $$

Introducing the latent variables µ_i, i = 1, . . . , n, replaces the mixture model with a hierarchical prior.

The probability model for G is the main mechanism for learning about dependence across cycles. The proposed model is most easily described in terms of the prior predictive distribution. Assume patients i = 1, . . . , n have been observed. The prior predictive p(θ_{n+1} | θ_1, . . . , θ_n) for patient n + 1 is of the following type. With some probability, θ_{n+1} is similar to one of the previously recorded patients. And with the remaining probability, θ_{n+1} is generated from a baseline distribution G⋆ defined below. The notion of "similarity" is formalized by assuming a positive prior probability for a tie of the latent variables µ_i. Let k ≤ n denote the number of unique values among µ_1, . . . , µ_n, and denote these values by µ*_1, . . . , µ*_k. Let m_h, h = 1, . . . , k, denote the number of latent variables µ_i equal to µ*_h. We assume

$$
p(\mu_{n+1} \mid \mu_1, \ldots, \mu_n):
\begin{cases}
\mu_{n+1} = \mu^*_h & \text{with prob. } w_h(m_1, \ldots, m_k), \; h = 1, \ldots, k, \\
\mu_{n+1} \sim G^\star & \text{with prob. } w_{k+1}(m_1, \ldots, m_k).
\end{cases}
\qquad (4)
$$

In words, the predictive distribution is a mixture of the empirical distribution of the already observed values and a base measure G⋆. The relative weights are determined by weight functions w_h. The predictive rule (4) is a defining property for a class of random probability measures known as species sampling models (SSM) (Pitman, 1996; Ishwaran and James, 2003). Letting w = {w_h} denote the weight functions, we write

$$ G \sim \mathrm{SSM}(w, G^\star). \qquad (5) $$

The predictive rule (4) is attractive in many applications. For example, consider the application to the multi-cycle hematologic counts. The model implies that with some probability the response for the new patient replicates one of the previous patient responses (up to residual variation), and with the remaining probability the response is generated from an underlying base measure G⋆.

The SSM includes the popular Dirichlet process (DP) as a special case, with w_h = m_h/(M + n) and w_{k+1} = M/(M + n). We write DP(M, G⋆) for a DP model with base measure G⋆ and total mass parameter M. See, for example, MacEachern and Müller (2000) for a review of DP mixture models as in (3). Because of its traditional use in nonparametric Bayesian inference, we suggest the DP as a default SSM specification, unless a particular set of weight functions is desired.

Inference in (3) is greatly simplified by assuming that the θ_ij are independent given µ_i. As before, letting θ_i = (θ_i1, . . . , θ_in_i), this assumption implies

$$ p(\theta_i \mid \mu_i, \eta) = \prod_j p(\theta_{ij} \mid \mu_i, \eta). \qquad (6) $$

In a Gibbs sampler implementation, the complete conditional posterior for the θ_ij is then conditionally independent of the data and parameters from other cycles and subjects. As a default choice for the density in (6), we suggest a kernel of the form p(θ_ij | µ_i, η) = p(θ_ij | µ_ij, η), where µ_i = (µ_i1, . . . , µ_in_i) parallels the partitioning of θ_i into cycle-specific subvectors. A typical choice is p(θ_ij | µ_ij, η) = N(µ_ij, S), where S would be one of the hyperparameters included in η. Partitioning µ_i = (µ_i1, . . . , µ_in_i) naturally leads us to use the same factorization as in (6) for the base measure G⋆:

$$ G^\star(\mu_i) = \prod_{j=1}^{n_i} p(\mu_{ij} \mid \eta). \qquad (7) $$

The advantage of this choice of base measure is that the hyperparameters η that define G⋆ only need to be defined for the random vector µ_ij instead of the higher-dimensional vector µ_i. For example, consider a multivariate normal base measure N(x; m, B) for a random vector x with moments (m, B). Using G⋆(µ_i) = ∏ N(µ_ij; m, B), we only need to specify the moments (m, B) for the lower-dimensional subvector µ_ij. Using a base measure (7) with conditional independence across cycles, any inference about dependence across cycles for a future patient arises from the data-driven clustering of the imputed µ_i vectors. Clustering over locations allows modeling dependence in much the same way as a mixture of standard bivariate normal kernels can approximate any bivariate distribution, with arbitrary variance-covariance matrix, in a bivariate kernel density estimate.
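For illustration, the following sketch generates latent variables from the predictive rule (4) in the DP special case noted above, w_h = m_h/(M + n) and w_{k+1} = M/(M + n). The univariate normal base measure is a hypothetical stand-in for G⋆.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_mu_next(mu, M, g_star):
    # One draw from the predictive rule (4) in the DP special case:
    # w_h = m_h / (M + n) for existing clusters, w_{k+1} = M / (M + n).
    n = len(mu)
    uniq, counts = np.unique(mu, return_counts=True)  # mu*_h and m_h
    probs = np.append(counts, M) / (M + n)
    h = rng.choice(len(probs), p=probs)
    return g_star() if h == len(uniq) else uniq[h]

g_star = lambda: rng.normal(0.0, 1.0)  # stand-in univariate base measure G*
mu = [g_star()]
for _ in range(99):
    mu.append(draw_mu_next(mu, M=1.0, g_star=g_star))
# np.unique(mu, return_counts=True) now recovers the k clusters and sizes m_h
```

The positive probability of ties among the µ_i is what drives the clustering, and hence the learning about dependence across cycles, described above.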


4 Posterior Inference

Consider the SSM mixture model (4) and (6). We assume a multivariate normal kernel, with µ_i partitioned as µ_i = (µ_i1, . . . , µ_in_i) and a kernel

$$ p(\theta_{ij} \mid \mu_i, \eta) = N(\mu_{ij}, S). \qquad (8) $$

The common covariance matrix S is part of the hyperparameter vector η. The use of the normal kernel reduces posterior inference for θ_ij to the traditional problem of normal nonlinear regression with normal priors (conditional on the mixing variable µ_i and the hyperparameters η).

An important feature of the model is the conditional independence of the θ_ij across cycles j given µ_i and η. This allows us to consider one cycle at a time when updating θ_ij in the Gibbs sampler. Details of the conditional posterior distribution for θ_ij and the choice of a suitable MCMC move depend on the nature of (2). In choosing the base measure G⋆ for the nonparametric prior, we use a base measure that is conjugate to the kernel p(θ_ij | µ_ij, η). This greatly simplifies posterior simulation for the µ_ij. As usual in SSMs, posterior inference proceeds in the marginal model (4), after analytically integrating out the random measure G. Posterior simulation in this model is straightforward. See, for example, MacEachern and Müller (2000), Neal (2000), or Jain and Neal (2004). The references are specific to the DP model but are easily modified to the SSM by using the appropriate prior predictive weights given in (4). See also Ishwaran and James (2003) and references therein.

In many applications, the number of cycles n_i varies across subjects. This poses no problem in the SSM model. Let n̄ denote the maximum across all subjects. First consider a model with parameters θ_ij, j = 1, . . . , n̄, for all patients. The fact that the likelihood does not include θ_ij, j = n_i + 1, . . . , n̄, poses no difficulty. Next, note that θ_ij, j > n_i, is easily marginalized analytically, allowing us to use only θ_ij, j = 1, . . . , n_i, in the actual implementation.
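To fix ideas, one generic MCMC move for θ_ij, exploiting the cycle-at-a-time structure, is a random-walk Metropolis step. The following is a sketch, not the implementation used in Section 5: the mean function f, the proposal scale step, and all inputs are assumptions to be supplied by the user.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(theta, y, t, mu, S_inv, sigma2, f):
    # log p(theta_ij | y_ij, mu_ij, S, sigma^2), up to a constant:
    # normal likelihood (2) times the normal kernel (8).
    resid = y - f(t, theta)
    d = theta - mu
    return -0.5 * resid @ resid / sigma2 - 0.5 * d @ S_inv @ d

def mh_update_theta(theta, y, t, mu, S_inv, sigma2, f, step=0.05):
    # One random-walk Metropolis update of the cycle-specific theta_ij;
    # `step` is a tuning constant, not a quantity from the model.
    prop = theta + step * rng.normal(size=theta.size)
    log_acc = (log_target(prop, y, t, mu, S_inv, sigma2, f)
               - log_target(theta, y, t, mu, S_inv, sigma2, f))
    return prop if np.log(rng.uniform()) < log_acc else theta
```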


A minor modification of the model allows us to include cycle-specific covariates. Let x_ij denote a vector of covariates for cycle j of patient i. This could, for example, include the dose of a treatment in cycle j. A straightforward way to include a regression on x_ij is to extend the probability model on θ_ij to a probability model on θ̃_ij ≡ (x_ij, θ_ij). The implied conditional distribution p(θ_ij | x_ij) formalizes the desired density estimation for θ as a function of x. This approach is used, for example, in Mallet et al. (1988) and Müller and Rosner (1997).
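To illustrate how the joint model on θ̃_ij = (x_ij, θ_ij) yields the desired regression, consider a single multivariate normal mixture term; conditioning on x_ij then reduces to the standard Gaussian formulas. A sketch, with the partitioning into x and θ blocks as the only assumption:

```python
import numpy as np

def conditional_theta_given_x(m, V, x, dx):
    # For (x, theta) jointly normal with mean m and covariance V, return
    # the mean and covariance of p(theta | x); dx is the dimension of x.
    mx, mt = m[:dx], m[dx:]
    Vxx, Vxt = V[:dx, :dx], V[:dx, dx:]
    Vtx, Vtt = V[dx:, :dx], V[dx:, dx:]
    w = np.linalg.solve(Vxx, x - mx)
    cond_mean = mt + Vtx @ w                          # E(theta | x)
    cond_cov = Vtt - Vtx @ np.linalg.solve(Vxx, Vxt)  # Var(theta | x)
    return cond_mean, cond_cov
```

Under the mixture model, p(θ_ij | x_ij) becomes a mixture of such conditionals, with weights proportional to the marginal density of x_ij under each term.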


5 Modeling Multiple Cycle Hematologic Data

Modeling patient profiles (e.g., blood counts, drug concentrations, etc.) over multiple treatment cycles requires a hierarchical extension of a basic one-cycle model. Model (2) together with (3), with θ_i including random effects for all cycles, provides such a generalization. Several important inference questions can only be addressed in the context of a joint probability model across multiple cycles. For example, in a typical chemotherapy regimen, some aspects of the proposed treatment are aimed at mitigating deterioration of the patient's overall performance over the course of the treatment. Immunotherapy, growth factors, or other treatments might be considered to ensure reconstitution of blood cell counts after each chemotherapy cycle.

We analyze data from a phase I clinical trial with cancer patients carried out by the Cancer and Leukemia Group B (CALGB), a cooperative group of university hospitals funded by the U.S. National Cancer Institute to conduct studies relating to cancer therapy. The trial, CALGB 8881, was conducted to determine the highest dose of the anti-cancer agent cyclophosphamide (CTX) one can safely deliver every two weeks in an outpatient setting (Lichtman et al., 1993). The drug is known to cause a drop in white blood cell counts (WBC). Therefore, patients also received GM-CSF, a colony stimulating factor given to spur regrowth of blood cells (i.e., for hematologic support). The protocol required fairly extensive monitoring of patient blood counts during treatment cycles. The number of measurements per cycle varied between 4 and 18, with an average of 13. The investigators treated cohorts of patients at different doses of the agents. Six patients each were treated at the following combinations (CTX, GM-CSF) of CTX (in g/m²) and GM-CSF (in µg/kg): (1.5, 10), (3.0, 2.5), (3.0, 5.0), (3.0, 10.0) and (6.0, 5.0). Cohorts of 12 and 10 patients, respectively, were treated at dose combinations of (4.5, 5.0) and (4.5, 10.0). Hematologic toxicity was the primary endpoint.

In Müller and Rosner (1997) and Müller et al. (2004), we reported analyses restricted to data from the first treatment cycle. However, the study data include responses over several cycles for many patients, allowing us to address questions related to changes over cycles. We use the model proposed in Sections 2 and 3 to analyze the full data. The data are WBC in thousands, on a logarithmic scale, y_ijk = log(WBC/1000), recorded for patient i, cycle j, on day t_ijk. The times t_ijk are known, and reported as days within cycle. We use a non-linear regression to set up p(y_ij | θ_ij, η). For each patient and cycle, the response y_ij = (y_ij1, . . . , y_ijn_ij) follows a typical "bath tub" pattern, starting with an initial baseline, followed by a sudden drop in WBC at the beginning of chemotherapy, and eventually a slow S-shaped recovery. In Müller and Rosner (1997) we studied inference for one cycle alone, using a non-linear regression (2) in the form of a piecewise linear and logistic curve. The mean function f(t; θ) is parameterized by a vector of random effects θ = (z_1, z_2, z_3, τ_1, τ_2, β_1):

$$
f(t; \theta) =
\begin{cases}
z_1 & t < \tau_1 \\
r z_1 + (1 - r)\, g(\theta, \tau_2) & \tau_1 \le t < \tau_2 \\
g(\theta, t) & t \ge \tau_2
\end{cases}
\qquad (9)
$$

where r = (τ_2 − t)/(τ_2 − τ_1) and g(θ, t) = z_2 + z_3/{1 + exp[2.0 − β_1(t − τ_2)]}. The intercept in the logistic regression was fixed at 2.0 after finding in a preliminary data analysis that a variable intercept did not significantly improve the fit. We again use model (9) and assume model (6), θ_ij ∼ N(µ_ij, S), independently across patients i and cycles j. Dependence across cycles is introduced by the nonparametric prior µ_i ∼ G. Specifying the SSM prior for G, we use the predictive rules corresponding to the DP prior, i.e., G ∼ DP(M, G⋆).

Finally, we include a regression on covariates x_ij. We use the bivariate covariate of the treatment doses of CTX and GM-CSF in cycle j for patient i. Both doses are centered and scaled to zero mean and standard deviation 1.0, using the empirical mean and standard deviation across the n = 52 patients.

We complete the model with prior specifications for the hyperparameters η. For the residual variance σ² we assume σ⁻² ∼ Gamma(a/2, ab/2), parametrized such that E(σ⁻²) = 1/b, with a = 10 and b = 0.01. Let diag(x) denote a diagonal matrix with diagonal elements x. For the covariance matrix of the normal kernel in (8), we use S⁻¹ ∼ W(q, R⁻¹/q) with q = 25 degrees of freedom and R = diag(0.01, 0.01, 0.1, 0.1, 0.1, 0.01, 1, 1). The elements of θ_ij are arranged such that the first two elements correspond to the covariate x_ij and the third through eighth elements correspond to the parameters in the non-linear regression (9), z_1, z_2, z_3, τ_1, τ_2 and β_1. The base measure G⋆ of the DP prior is assumed to be multivariate normal, G⋆(µ_ij) = N(m, B), with a conjugate normal and inverse Wishart hyperprior on the moments. That is, m ∼ N(d, D) and B⁻¹ ∼ W(c, C⁻¹/c) with c = 25, C = diag(1, 1, 1, 1, 1, 0.1, 1, 1), and D = I₈. The hyperprior mean is fixed as the average of single-patient maximum likelihood estimates (m.l.e.). Let θ̂_i1 denote the m.l.e. for patient i. We use d = (1/n) Σ θ̂_i1. Finally, the total mass parameter is assumed M ∼ Gamma(5, 1).
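The mean function (9) is straightforward to code. A minimal sketch, vectorized over t, with the logistic intercept fixed at 2.0 as described:

```python
import numpy as np

def f_mean(t, theta):
    # Piecewise linear-linear-logistic profile (9);
    # theta = (z1, z2, z3, tau1, tau2, beta1).
    z1, z2, z3, tau1, tau2, beta1 = theta
    t = np.asarray(t, dtype=float)
    g = z2 + z3 / (1.0 + np.exp(2.0 - beta1 * (t - tau2)))  # g(theta, t)
    g_tau2 = z2 + z3 / (1.0 + np.exp(2.0))                  # g(theta, tau2)
    r = (tau2 - t) / (tau2 - tau1)                          # interpolation weight
    return np.where(t < tau1, z1,
                    np.where(t < tau2, r * z1 + (1.0 - r) * g_tau2, g))
```

For hypothetical values such as θ = (1.6, −1.4, 3.0, 4.0, 11.0, 0.5), the profile starts at the baseline z_1, drops to g(θ, τ_2) at τ_2, and then recovers in an S-shape toward z_2 + z_3, reproducing the "bath tub" pattern.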


We implemented posterior MCMC simulation to carry out inference in model (2), (3) and (5), using the described prior and hyperprior choices. The parameters σ², S, m and B are updated by draws from their complete conditional posterior distributions. All are standard probability models that allow efficient random variate generation. Updating the latent variables µ_i, i = 1, . . . , n, and the total mass parameter M proceeds as described in MacEachern and Müller (2000). Finally, consider updating θ_i. Conditional on µ_i, inference in the model is unchanged from the single-cycle model. Updating the random effects parameters θ_ij in a posterior Markov chain Monte Carlo (MCMC) simulation reduces to a nonlinear regression defined by the sampling model (9) and the normal prior θ_ij ∼ N(µ_ij, S). In particular, for the coefficients in θ_ij corresponding to the random effects parameters z_1, z_2 and z_3, the complete conditional posterior is available in closed form as the posterior in a normal linear regression model.

Posterior inference is summarized in Figures 2 through 4. Recall that H(θ_i) = ∫ p(θ_i | µ_i, η) dG(µ_i) denotes the nonparametric mixture model for the random effects distribution. Also, we use Y to denote all observed data. Note that the posterior expectation E(H | Y) is identical to the posterior predictive p(θ_{n+1} | Y) for a new subject: p(θ_{n+1} | Y) = ∫ p(θ_{n+1} | H, Y) dp(H | Y) = ∫ H(θ_{n+1}) dp(H | Y). The high-dimensional nature of θ_ij makes it impractical to show the estimated random effects distribution itself. Instead we show the implied WBC profile as a relevant summary. Figure 2 shows posterior predictive WBC counts for a future patient, arranged by dose x_ij and cycle j. Each panel shows posterior predictive inference for a different dose of CTX and GM-CSF, assuming a constant dose across all cycles. Within each panel, three curves show posterior predictive mean responses for cycles j = 1 through j = 3. Each curve shows E(y_{n+1,jk} | Y), plotted against t_{n+1,jk}. Together, the three curves summarize what was learned about the change of θ_ij across cycles. Note how the curve for the third cycle (j = 3) deteriorates by failing to achieve the recovery to baseline WBC. Comparing the predicted WBC profiles for high versus low dose of GM-CSF at the same level of CTX confirms that the growth factor worked as intended by the clinicians. The added GM-CSF improves the recovery to baseline for later cycles.

Figure 3a summarizes an important feature of G. Let p14 denote the probability of a white blood cell count (WBC) above a critical threshold of 1000 on day 14, i.e., p14 = p(Y_{n+1,jk} > log 1000 | Y) for t_{n+1,jk} = 14 (we modeled log WBC).
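Summaries such as p14 are simple Monte Carlo averages over the MCMC output. A sketch, assuming posterior predictive draws of θ_{n+1,j} and σ² are available and reusing a mean function f such as f_mean above; on the scale y = log(WBC/1000), the event WBC > 1000 corresponds to y > 0, matching the axis label P(y14 > 0) in Figure 3a.

```python
import numpy as np

rng = np.random.default_rng(3)

def prob_above_threshold(theta_draws, sigma2_draws, f, day=14.0, thresh=0.0):
    # Monte Carlo estimate of p14: the posterior predictive probability
    # that y = log(WBC/1000) exceeds `thresh` on `day`.
    # theta_draws: (B, dim) posterior predictive draws of theta_{n+1,j};
    # sigma2_draws: (B,) matching draws of the residual variance.
    means = np.array([float(f(day, th)) for th in theta_draws])
    y = means + rng.normal(0.0, np.sqrt(sigma2_draws))  # add residual of (2)
    return float(np.mean(y > thresh))
```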


The figure plots p14 against cycle, arranged by treatment level x_{n+1} (assuming a constant treatment level across all cycles and denoting the common value by x_{n+1}). For each cycle j and treatment level, the lines show the marginal posterior predictive probability of a WBC beyond 1000 by day 14. Figure 3b plots the posterior predictive minimum WBC (in log 1000) by cycle within doses of the two drugs.

Figure 4 shows another summary of the estimated model H(θ), across cycles, for fixed doses, CTX = 3 g/m² and GM-CSF = 5 µg/kg. We consider two clinically relevant summaries, the nadir WBC and the number of days that WBC is below a critical threshold of 1000 (TLO). Both summaries are evaluated for each cycle. For each summary statistic we show the joint distribution for cycles 1 and 2. The bivariate distributions are visualized by plotting 500 random draws.

6 Conclusion

We have introduced semiparametric Bayesian inference for multi-level repeated measurement data. The nonparametric elements of the model are the random effects distribution for the first-level random effects and the probability model for the joint distribution of random effects across second-level repetitions.

The main limitation of the proposed approach is the computation-intensive implementation. The computational effort lies in inference about possible configurations of ties of the latent variables, as in any DP mixture model. The probability model on the configuration indicators is easy to describe, but implementation involves manipulation of cluster memberships and keeping track of a variable number of unique values.

Regression on cycle-specific covariates x_ij was accommodated by extending the random effects distribution to a joint vector of covariates and random effects. The implied conditional distribution of random effects given covariates formalizes the desired regression. This construction, while traditional, raises the possible concern of including a model on covariates that might have been fixed by design. An alternative construction could use the model proposed in De Iorio et al. (2004) to introduce a covariate in the DP prior for G by defining a dependent DP (DDP) model (MacEachern, 2001). Everything else would remain unchanged.

Two other important directions of extension for the proposed model are to different data formats, for example repeated binary data, and to a more structured model for the dependence across cycles. In the proposed model, dependence across cycles is essentially learned by clustering of the imputed random effects vectors for the observed patients. The approach works well for continuous responses with a non-linear regression model (2), assuming the residual variance is small enough to leave little posterior uncertainty for the θ_ij.


The model is not appropriate for less informative data, for example binary data. As a more structured alternative, we would suggest keeping the non-parametric random effects model for the first-cycle random effects only and assuming a suitable parametric model for the later-cycle random effects. For example, one could assume an autoregressive model for θ_ij conditional on θ_{i,j−1}. In this case the nature of the dependence across cycles is fixed, and the model only allows learning about the strength of this relationship, for example by learning about the autoregressive coefficients.

An extension to binary repeated measurements requires replacing the top-level repeated measurement model. For example, the sampling model could be based on the notion of order-l exchangeability. Order-l exchangeability defines a non-parametric probability model for binary sequence data that imposes minimal structure while still allowing inference about dependence (Quintana and Newton, 1998). Order-l exchangeability implies a mixture of homogeneous Markov chains. The mixture is with respect to the order-l transition probabilities. Letting θ_ij denote the set of transition probabilities, the model fits into the remaining structure of the probability model proposed earlier. We are currently working on this approach. Alternatively, for reasons of simplicity and practical feasibility, one could use a parsimonious parametrization of the transition probabilities.

Finally, in the proposed inference we did not model informative censoring. For example, in the data set of multi-course chemotherapy patients, it is plausible that patients drop out of the treatment for reasons related to the observed response. It would be straightforward to add a factor in the likelihood to model the time to withdrawal from the study. This extension of the model is an area of our on-going research.

References

Antoniak, C. E. (1974), "Mixtures of Dirichlet Processes with Applications to Bayesian Nonparametric Problems," Annals of Statistics, 2, 1152–1174.

Browne, W. J., et al. (2002), "Bayesian and likelihood methods for fitting multilevel models with complex level-1 variation," Computational Statistics and Data Analysis, 39, 203–225.

Bush, C. A. and MacEachern, S. N. (1996), "A semiparametric Bayesian model for randomised block designs," Biometrika, 83, 275–285.


De Iorio, M., et al. (2004), "An ANOVA model for dependent random measures," Journal of the American Statistical Association, to appear.

Denison, D., et al. (2002), Bayesian Methods for Nonlinear Classification and Regression, New York: Wiley.

Ferguson, T. S. (1973), "A Bayesian analysis of some nonparametric problems," Annals of Statistics, 1, 209–230.

Goldstein, H., Browne, W. J., and Rasbash, J. (2002), "Multilevel modelling of medical data," Statistics in Medicine, 21, 3291–3315.

Ishwaran, H. and James, L. J. (2003), "Generalized weighted Chinese restaurant processes for species sampling mixture models," Statistica Sinica, 13, 1211–1235.

Jain, S. and Neal, R. M. (2004), "A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model," Journal of Computational and Graphical Statistics, 13, 158–182.

Kleinman, K. and Ibrahim, J. (1998a), "A Semi-parametric Bayesian Approach to the Random Effects Model," Biometrics, 54, 921–938.

Kleinman, K. P. and Ibrahim, J. G. (1998b), "A Semi-parametric Bayesian Approach to Generalized Linear Mixed Models," Statistics in Medicine, 17, 2579–2596.

Lichtman, S. M., et al. (1993), "Phase I trial of granulocyte-macrophage colony-stimulating factor plus high-dose cyclophosphamide given every 2 weeks: a Cancer and Leukemia Group B study," Journal of the National Cancer Institute, 85, 1319–1326.

MacEachern, S. N. (2001), "Dependent Nonparametric Processes," Journal of the American Statistical Association.

MacEachern, S. N. and Müller, P. (2000), "Efficient MCMC Schemes for Robust Model Extensions using Encompassing Dirichlet Process Mixture Models," in Robust Bayesian Analysis, eds. F. Ruggeri and D. Ríos-Insua, 295–316, New York: Springer-Verlag.


Mallet, A., et al. (1988), "Handling covariates in population pharmacokinetics with an application to gentamicin," Biomedical Measurement Informatics and Control, 2, 138–146.

Müller, P. and Quintana, F. (2004), "Nonparametric Bayesian Data Analysis," Statistical Science, 19, 95–110.

Müller, P., Quintana, F., and Rosner, G. (2004), "Hierarchical Meta-Analysis over Related Non-parametric Bayesian Models," Journal of the Royal Statistical Society, Series B (Methodological), 66, 735–749.

Müller, P. and Rosner, G. (1997), "A Bayesian population model with hierarchical mixture priors applied to blood count data," Journal of the American Statistical Association, 92, 1279–1292.

Neal, R. M. (2000), "Markov chain sampling methods for Dirichlet process mixture models," Journal of Computational and Graphical Statistics, 9, 249–265.

Pitman, J. (1996), "Some Developments of the Blackwell-MacQueen Urn Scheme," in Statistics, Probability and Game Theory. Papers in Honor of David Blackwell, eds. T. S. Ferguson, L. S. Shapley, and J. B. MacQueen, IMS Lecture Notes, 245–268, IMS.

Quintana, F. A. and Newton, M. A. (1998), "Assessing the Order of Dependence for Partially Exchangeable Binary Data," Journal of the American Statistical Association, 93, 194–202.

Walker, S. and Wakefield, J. (1998), "Population Models with a Nonparametric Random Coefficient Distribution," Sankhya, Series B, 60, 196–214.


Figure 2: Prediction for future patients treated at different levels of CTX and GM-CSF. The six panels plot predicted WBC against DAY (0 to 20) for the dose combinations (CTX 1, GM 10), (CTX 3, GM 3), (CTX 3, GM 5), (CTX 3, GM 10), (CTX 4, GM 5) and (CTX 4, GM 10). For each patient we show the predicted response over the first three cycles. CTX levels are 1.5, 3.0 and 4.5 g/m² (labeled as 1, 3 and 4 in the figure). GM-CSF doses are 2.5, 5 and 10 µg/kg (labeled as 3, 5 and 10). Inference is conditional on a baseline of 2.0.


Figure 3: Clinically relevant summaries of the inference across cycles: (a) probability of WBC > 1000 on day 14, P(y14 > 0 | Y), and (b) estimated nadir WBC (FNADIR), each plotted against cycle (1 to 3) for six treatment combinations (i: CTX, GM-CSF): 1: (1.5, 10.0); 2: (3.0, 2.5); 3: (3.0, 5.0); 4: (3.0, 10.0); 5: (4.5, 5.0); 6: (4.5, 10.0). The left panel shows the posterior probability of WBC above 1000 on day 14, plotted by treatment and cycle. At low to moderate levels of the chemotherapy agent CTX, treatment with a high level of the growth factor GM-CSF stops the otherwise expected deterioration across cycles. Even for high CTX, the additional treatment with GM-CSF still mitigates the decline over cycles. The right panel shows the minimum WBC (in log 1000) plotted by treatment and cycle. Reported CTX doses are in g/m² and GM-CSF doses are in µg/kg.


Figure 4: Estimated H(θ). We show the bivariate marginals for cycles 1 and 2 for two relevant summaries of θ, for doses CTX = 3 g/m² and GM-CSF = 5 µg/kg. The left panel (a) shows the estimated distribution of T_lo, the number of days that WBC is below 1000, for the first two cycles. The right panel (b) shows the same for the minimum WBC (in log 1000). The distributions are represented by scatterplots of 500 simulated draws. For the integer-valued variable T_lo we added additional noise to the draws to visualize multiple draws at the same integer pairs.
