Building a TreeAge Markov Model

EP550 Lecture/Demonstration 16 and Homework 10, Due Friday, March 25

The purpose of this exercise is to teach you how to create Markov models using the TreeAge 2011 software program and to use what you’ve created to illustrate additional features of cost-effectiveness analysis. The exercise uses the lupus example from class.

Create the model

1. Open TreeAge, clear any material from the center panel where decision trees are created and modified, and maximize this panel. Create a decision node by clicking “File/New.../Blank Tree Diagram/OK.” A square decision node should appear. If it does not, double-click “Decision” in the “Tree Nodes” box to the right. Label the decision node “Lupus.” Right-click the decision node and add two chance nodes. Label the upper chance node “Usual Care” and the lower chance node “Intervention.”

2. Change the Usual Care chance node to a Markov node: right-click the node and click “Change type/Markov.” (Note the difference between selecting the node and selecting the subtree attached to the node.) To simplify the display, click “Edit/Tree Preferences/Display/Variables/Markov info,” select “Hide Definitions,” and click “OK.”

3. Add four branches to the Markov node (“Node/Add branch” or “right click/Add branch”; the Lupus node must be selected each time you repeat this process).

4. Click on the Markov node, select its subtree (“Subtree/Select Subtree”), copy the subtree (“Edit/Copy”), select the first of the four branches of the Markov node, and paste the copied subtree to this branch (“Edit/Paste”).

5. Label the four branches of the Markov node (above the line): “Remission,” “Active,” “Flare,” and “Death.”

6. Select the first branch of the “Remission” node. Change this node to a terminal node (right-click the node, click “Change Type,” and select “Terminal”); when the box pops up, select the “Remission” state for this node to jump to, and click “OK.” Change the other three nodes to terminal nodes and select the other three states for them to jump to.

7. Click on the “Remission” chance node that immediately follows the Markov node, select its subtree (“Subtree/Select Subtree”), copy the subtree (“Edit/Copy”), and paste the subtree to the second and third branches of the Markov node (select the terminal node and then select “Edit/Paste Subtree”).

8. Change the fourth branch of the Markov node (“Death”) to a terminal node.

Your tree should look like the tree on the following page.


Add information about the transition probabilities using the following table from the lecture notes:

9. Start by clicking immediately underneath the “Active” terminal branch of the “Remission” branch of the Markov node, typing in “0.41,” and then pressing “Enter.” Type in “0.00” for the transition probabilities for the terminal branches “Flare” and “Death.” Type in “#” for the transition probability for the terminal node “Remission.” The reason for this way of indicating probabilities will become obvious later in the exercise. Add the other transition probabilities to the other terminal nodes. Use the pound sign (#) for the transition probabilities for “Death” in the other terminal nodes.
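TreeAge’s pound sign is a complement: it stands for one minus the sum of the other probabilities on the same set of branches, so each row of transition probabilities always sums to one. A minimal sketch of that convention (in Python, not TreeAge):

```python
# Sketch of TreeAge's "#" (complement) convention: the '#' entry, represented
# here as None, becomes 1 minus the sum of the explicitly entered probabilities.
def fill_complement(probs):
    known = sum(p for p in probs if p is not None)
    return [round(1.0 - known, 10) if p is None else p for p in probs]

# Remission row from the exercise: '#' for Remission, 0.41 for Active,
# 0.00 for Flare and Death.
print(fill_complement([None, 0.41, 0.00, 0.00]))  # [0.59, 0.41, 0.0, 0.0]
```

Using “#” for one branch per node also keeps the row valid if the other entries are later replaced by sampled distributions, which is why the exercise prefers it.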

Add information about the initial distribution of patients

The following table is from the lecture notes:

10. To start, click underneath the “Remission” branch of the Markov node and type in “0.10.” Add the numbers that correspond to the “Active” and “Flare” initial states, and use “#” to indicate the value for “Dead.”

Add values for the outcome variables

11. In this example, we will use two outcome variables. The first outcome variable is a cost variable. That variable is created using the probability of hospitalization, which is described in the following table from the lecture notes:

To create the outcome variable, multiply the mean number of hospitalizations by the average cost of a hospitalization (cHosp = $10,000). To add the dollar cost of transitioning from remission to remission, select the “Remission” terminal node of the “Remission” subtree coming out of the Markov node. Select Node/Values List/Markov Info, or go to the top of your screen and click on the symbol that represents Markov information (an “M” with a circle around it). Maximize the box that opens, and click on the row that corresponds to Transition Reward 1, which is the row labeled “Trans Rwd” that follows the row labeled “All Sets (Unused+Active).” In the column labeled “Value” type in (cHosp*.05)/((1+r)^_stage). Note that dividing by (1+r)^_stage discounts the outcome. Press “Enter,” and a box will pop up. Select “General.” When asked for descriptions, cHosp is the “Cost of a hospitalization,” r is the “Discount rate,” and _stage is the “Cycle number.” For each variable, make sure the button for “Show definitions in tree” is selected.
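The discounting in that expression is ordinary present-value arithmetic: the expected cost of a cycle (cHosp times the mean 0.05 hospitalizations) is divided by (1+r) raised to the cycle number. A quick check of the values the reward produces, outside TreeAge:

```python
# Transition Reward 1 for Remission -> Remission, as in the exercise:
# (cHosp * .05) / ((1 + r) ** _stage), with cHosp = 10000 and r = 0.03.
cHosp, r = 10_000.0, 0.03

def discounted_cost(stage):
    return (cHosp * 0.05) / ((1 + r) ** stage)

print(discounted_cost(0))             # 500.0 (cycle 0 is undiscounted)
print(round(discounted_cost(1), 2))   # 485.44
print(round(discounted_cost(30), 2))  # 205.99
```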

The second outcome variable measures the years of life. One year in any state except death will equal one, and one year in the state of death will equal 0.5, because we assume death at midyear.

To recognize that years of life have different qualities, use the hypothetical quality data from the following table in the lecture notes:



To add the number of QALYs associated with transitioning from remission to remission, select the “Remission” terminal node of the “Remission” subtree coming out of the Markov node. Select Node/Values List/Markov Info, or go to the top of your screen and click on the symbol that represents Markov information (an “M” with a circle around it). In the box that opens, click on the row that corresponds to Transition Reward 2, which is the row labeled “Trans 2” that follows the row labeled “All Sets (Unused+Active).” In the column labeled “Value” type in 0.9. To discount this value, divide by (1+r)^_stage, so the whole expression is: 0.9/((1+r)^_stage). Press “Enter.”

To display outcome information in the tree, go to “Edit/Tree Preferences/Display/Variables/Markov Info” and select “Show Definitions.”

To prepare the tree for a cost-effectiveness analysis, go to “Edit/Tree Preferences/Calculation/Calculation Method/Cost-Effectiveness,” and make sure “Cost payoff” is number 1 and “Effectiveness payoff” is number 2. If the program says “C-E calculation method is not active. Do you want to activate it now?” click “Yes.” Click on “Numeric Formatting,” and three boxes should appear. In all three, select 2 “Decimal places,” “Add trailing zeros,” “Use thousands separator,” and “Show numbers” exactly. In the topmost box select “Units: Currency.” In the middle box, select “Units: Custom suffix” and “Prefix/suffix: QALYs.” In the lowest box select “Units: None.”

Save your partially completed Markov model in *.trex format (“File/Save As ...”) in a file labeled with your last name followed by the number 8, for example, Williams8.

Repeat these steps to add discounted values for cost as Transition Reward 1 and discounted values for Quality Adjusted Life Years as Transition Reward 2 to the other terminal nodes, including the nodes for “Death,” even when those values are zero.

If you want to avoid the tedium of adding all these values, download “Lupus Markov model 1” from the assignments section of Blackboard, which has these values already added.

Even though you are not going to use stage rewards in this exercise, the program requires that there be stage rewards. Check the stage rewards in the tree (they are located underneath the four branches of the Markov node) to make sure all of them have values of zero. If any do not, go to the top of your screen, click on the symbol that represents Markov information (an “M” with a circle around it), and put zeros in the locations for stage rewards.

Tell the program how many cycles it should run

12. Select the Markov node, go to the top of your screen, and select Node/Values List/Markov Info, or click on the symbol that represents Markov information (an “M” with a circle around it). A panel should appear at the bottom of your screen with the name “Termination conditions.” In the row labeled “Term,” in the column labeled “Value,” type in _stage > 1999.

Define values for the variables you have created

Select the “Lupus” node, go to the top of your screen, and select Node/Define Variable, or click on the symbol at the top (and the bottom right) of your screen that includes “V=” twice, one over the top of the other. A panel should appear with the names of the variables you have created. Select “cHosp” and assign it the value of 10000 in the box that pops up. Repeat the process to assign the value of 0.03 to “r.” Repeat the process and select _stage. In the box that pops up there is a section labeled “Groups:”, which is immediately below the section labeled “_stage =”. In this section, select “Keywords.” Doing this activates a list of words in the section labeled “Element:”, which is to the right. In this section, double-click “_stage.” Check to make sure that at the top of the box you have “_stage = _stage.”

Your tree should look like the following tree:



Analyze the model to make sure it works

13. To determine the average number of QALYs that would be expected for a population of lupus patients who started with this distribution of states, go to “Edit/Tree Preferences .../Calculation Method/Active Method/Simple/Active Payoff: 2/OK.” Select the Markov node and then go to “Analysis/Markov Cohort/Markov Cohort (Full)/Decimal probability/Don’t collapse subtrees/Show both stage and cumulative rewards/Stages to include: 0 to 100/OK.”

Use the fourth icon of the nine icons at the upper right of this panel to save the text report as a .pdf file. Label the file with your last name followed by 9, for example, Williams9.
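What the Markov cohort analysis does can be approximated by hand: start the cohort in the initial distribution, multiply by the transition matrix each cycle, and accumulate discounted rewards. The sketch below uses the Remission row (0.59, 0.41, 0, 0) and the initial 0.10 in Remission from the exercise; the other matrix rows, the remaining initial shares, and the state utilities are made-up placeholders, since the lecture tables are not reproduced here.

```python
import numpy as np

# States: Remission, Active, Flare, Death.
P = np.array([
    [0.59, 0.41, 0.00, 0.00],  # Remission row, from the exercise
    [0.07, 0.86, 0.06, 0.01],  # hypothetical placeholder
    [0.00, 0.60, 0.30, 0.10],  # hypothetical placeholder
    [0.00, 0.00, 0.00, 1.00],  # Death is absorbing
])
cohort = np.array([0.10, 0.70, 0.20, 0.00])  # 0.10 from the exercise; rest hypothetical
utilities = np.array([0.9, 0.7, 0.5, 0.0])   # hypothetical state utilities
r, qaly_total = 0.03, 0.0
for stage in range(100):
    # accrue this cycle's discounted QALYs, then advance the cohort one cycle
    qaly_total += (cohort * utilities).sum() / (1 + r) ** stage
    cohort = cohort @ P
print(round(qaly_total, 2))  # discounted QALYs per starting patient
```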

Conduct a cost-effectiveness analysis comparing the Usual Care of SLE with a hypothetical medical intervention.

14. Change the “Intervention” chance node into a Markov node, select the “Usual Care” subtree, copy it, and paste it onto the “Intervention” Markov node.

Change the transition probability of the “Active” terminal node of the “Remission” branch of the “Intervention” Markov node from 0.41 to rr*0.41.

Select the Lupus decision node, right-click, select “Define variables,” and assign the value 0.8537 to rr. Assign values to the other variables at this node (cHosp = 10000, r = 0.03, and _stage = _stage).

Because the treatment costs one dollar a day, you should change the cost of the outcomes to reflect that additional cost. For example, for the transition from Remission to Remission in the Intervention Markov node, change the value of Transition Reward 1 from “(cHosp*.05)/(1+r)^_stage” to “((cHosp*.05)+cInterv)/(1+r)^_stage,” where cInterv is a variable representing the cost of the intervention and has the value $365 (select the terminal node, click Node/Value Lists/Markov Info, and type in the new information when the transition reward box pops up). Select the Lupus decision node, right-click, select “Define variables,” and assign the value 365 to cInterv.

Save your partially completed Markov model in *.trex format (“File/Save As ...”) in a file labeled with your last name followed by the number 10, for example, Williams10.

Make similar changes to the other transition rewards for cost (except those with zero values, because no one made these transitions).

If you want to avoid the tedium of adding all these values, download “Lupus Markov model 2” from the assignments section of Blackboard, which has these values already added.

Select the “Intervention” Markov node and go to “Node/Value Lists/Markov Info.” Type in “_stage > 1999” for the termination rule. Check to make sure you have the same values for Termination in the boxes under the two Markov nodes.

Select the “Lupus” node and use “Node/Value Lists/Variable Definitions” to insert 0.03 to define the value of r and 10000 to indicate the value of cHosp. To avoid potential conflicts between the definitions for these variables at the “Lupus” node and the definitions at the two Markov nodes, get rid of the definitions at the two Markov nodes by selecting each node and using “Node/Values List/Variable Definition” to bring up a list of variables. Highlight each variable you want to delete and click on the red X to delete it. Do not delete the variable definitions for the “Lupus” node, which also are listed here.

Click on the “Lupus” node, and prepare for a cost-effectiveness analysis by clicking on “Edit/Tree Preferences/Calculation/Calculation Method/Cost Effectiveness/Cost Payoff 1, Effectiveness Payoff 2/OK.”

15. Perform a cost-effectiveness analysis. Click on the node “Lupus,” and click “Analysis/Cost-Effectiveness...” Select “Yes” to the question “Draw chart with cost on the vertical axis?” Wait; this could take a few seconds. On the right-hand side, select “Text Report.” If “Text Report” is not visible, minimize the material in the extreme right side of this panel.

Use the fourth icon of the nine icons at the upper right of this panel to save the text report as a .pdf file. Label the file with your last name followed by 11, for example, Williams11.

Email your saved files (file 8 through file 11) to sankey@wharton.upenn.edu.
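The cost-effectiveness report boils down to an incremental cost-effectiveness ratio: incremental cost divided by incremental effectiveness. A one-line check with made-up totals (placeholders, not model output):

```python
# ICER = (cost_intervention - cost_usual) / (QALYs_intervention - QALYs_usual).
# The four totals below are illustrative placeholders only.
def icer(cost_new, eff_new, cost_old, eff_old):
    return (cost_new - cost_old) / (eff_new - eff_old)

print(round(icer(26_500.0, 16.20, 25_000.0, 16.05), 2))  # 10000.0 ($/QALY)
```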

Prepare the tree for a probabilistic sensitivity analysis.

The material that follows is optional, in part because it refers to last year’s information, which included a model that was based on TreeAge 2009. If you decide to use this material anyway, when you’re finished, you will have a model that will allow you to do probabilistic sensitivity analyses. If you want to do probabilistic sensitivity analyses without doing all the things described in the following material, download “Lupus Markov Model 3” from the assignments section of Blackboard, and go to item 21, “Conduct a probabilistic sensitivity analysis.”

16. Start by creating Dirichlet distributions for the transition probabilities and the initial distribution. For the transition probabilities of the Active subtree in the Usual Care Markov node, click “Values/Distributions/New/Dirichlet,” and type in the box labeled “Alphas list” the following material: “List(66;806;56;9),” which comes from the slide below. Click “OK,” name the distribution “ddA,” describe it as “Dirichlet distribution of the Active subtree,” and click “Close.”

Go to the Remission terminal node of the Active subtree, and click underneath the branch next to 0.07 in the grey box. Click “Insert Distribution,” click on “ddA,” click “Use,” and make sure the material in the white box reads “Dist(1;1).” If it doesn’t, type the material in the white box. Click “OK.” Use similar steps to label the Active terminal node “Dist(1;2),” the Flare terminal node “Dist(1;3),” and the Death terminal node “Dist(1;4)” (or, alternatively, leave the pound sign in place).
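The alphas in List(66;806;56;9) are observed transition counts, and a Dirichlet draw returns one plausible set of four probabilities that sums to one; the mean of each branch is alpha_i divided by the total. A sketch of what TreeAge samples, using numpy rather than TreeAge syntax:

```python
import numpy as np

# Dirichlet over the Active subtree's four transitions; alphas = observed counts.
alphas = [66, 806, 56, 9]
rng = np.random.default_rng(1)
row = rng.dirichlet(alphas)          # one PSA draw of the probability row
print(round(float(row.sum()), 6))    # 1.0
means = [round(a / sum(alphas), 2) for a in alphas]
print(means)                         # [0.07, 0.86, 0.06, 0.01]
```

Note that the 0.07 the instructions mention next to the Remission branch is exactly 66/937, the mean of the first Dirichlet component.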



Use similar steps to create an additional Dirichlet distribution for the transition probabilities of the Flare subtree, and put the transition probabilities in the tree, except in the terminal node Remission, because we have no observations that this transition takes place. Use similar steps to create an additional Dirichlet distribution for the Remission subtree. Put zeros as the transition probabilities into the Flare and Death terminal nodes of the Remission subtree (because we have no observations that these transitions take place). Put a pound sign for the transition probability of the Remission terminal node and “Dist(3;2)” for the transition probability of the Active terminal node. Create an additional Dirichlet distribution for the initial distribution and put the probabilities into the tree below the branches leading from the Markov node.

Create distributions for cost and QALYs using the following information.

Table. Hypothetical mean number of hospitalizations for those who begin in state i and end in state j.

            Remis.  Active  Flare  Death
Remission   0.05    0.25    0.00   0.00
Active      0.10    0.20    1.00   0.50
Flare       0.00    0.25    1.25   0.75



QALY Distributions

Assume a normal distribution (mean, SE). Assume SD = 0.1. Assume the N for calculating the SE is the same as the number of individuals observed making the transitions reported in the table on slide 24.

RtoR: 0.9, 0.0130; RtoA: 0.8, 0.0156
AtoR: 0.8, 0.0123; AtoA: 0.7, 0.0035
AtoF: 0.6, 0.0134; AtoD: 0.35, 0.0333
FtoA: 0.6, 0.0213; FtoF: 0.5, 0.0236
FtoD: 0.25, 0.0158

17. For example, to create outcomes for the Remission terminal node of the Remission subtree, first create the distributions for the outcomes: click “Values/Distributions/New/Poisson” (Poisson because the slide above says to use Poisson distributions for the number of hospitalizations), type “0.05” in the white box labeled lambda (from the hypothetical table above of the number of hospitalizations for going from Remission to Remission), click “OK,” name the distribution “dhRR,” describe the distribution as “Distribution of number of hospitalizations, Remission to Remission,” and click “OK.” For the next distribution click “Values/Distributions/New/Normal,” type “10,000” in the white box labeled Mean and “100” in the white box labeled Std Dev (from the slide above describing the cost of hospitalization), click “OK,” name the distribution “dcHosp,” describe the distribution as “Distribution of the cost of hospitalization,” and click “OK.” Use similar steps and the information in the slide immediately above to create a distribution for the number of QALYs gained for going from Remission to Remission, and name this distribution dqRR. To put these distributions in the Markov model, select the first terminal node, right-click the terminal node, select “Markov Transition Rewards,” type “(dhRR*dcHosp)/(1+r)^t” in the box for Transition Rwd 1 (which is labeled “Cost:”), type “dqRR/(1+r)^t” in Transition Rwd 2 (which is labeled “Effect:”), and click “OK.”

Create and insert other distributions for the transition rewards for the remaining terminal nodes in the Usual Care Markov node.

Transfer information from the Usual Care Markov node to the Intervention Markov node, and modify it.

18. Start by deleting the subtree following the existing Intervention Markov node: right-click the Intervention Markov node, click “Select Subtree,” right-click again, and click “Cut Subtree.” Right-click the Usual Care Markov node, click “Select Subtree,” right-click again, click “Copy Subtree,” right-click the Intervention Markov node, and click “Paste nodes.”

19. There are two differences between the Usual Care and the Intervention Markov nodes: the transition probabilities in the Remission subtree are different, and the cost of the intervention must be added to the outcomes in the Intervention model. To make these changes, start by creating a distribution for the relative risk of fewer transitions from Remission to Active in the Intervention model (see slide above). To create this distribution, click “Values/Distributions/New/LogNormal,” type “-.1582” in the white box labeled mu (mean of logs) and “.1816” in the white box labeled sigma (std dev of logs), click “OK,” name the distribution “drr,” describe the distribution as “Distribution of the relative risk of fewer transitions with intervention,” click “OK,” and click “Close.” Insert this distribution in the tree: click the Remission to Active branch of the Intervention Markov node, and change the transition probability from “Dist(3;2)” to “drr*Dist(3;2).” Do not insert drr in any other transition nodes.
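The LogNormal(mu = -.1582, sigma = .1816) ties back to the point estimate used earlier: the median of a lognormal is exp(mu), and exp(-0.1582) is the rr = 0.8537 assigned in step 14. A quick verification (the draw size below is an arbitrary choice, not from the exercise):

```python
import math
import numpy as np

mu, sigma = -0.1582, 0.1816
print(round(math.exp(mu), 4))  # 0.8537, the rr point estimate from step 14

# Sampling confirms the median of the draws sits near exp(mu).
rng = np.random.default_rng(0)
drr = rng.lognormal(mean=mu, sigma=sigma, size=100_000)
print(round(float(np.median(drr)), 3))  # close to 0.854
```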

20. To create the distribution for the cost of the intervention, click “Values/Distributions/New/Normal,” type “365” in the white box labeled Mean and “50” in the white box labeled Std Dev, click “OK,” name the distribution “dcDrug,” describe the distribution as “Distribution of the cost of the intervention,” click “OK,” and click “Close.” Insert this distribution into the tree: for each terminal node in the Intervention model, right-click the node and add dcDrug to the cost outcome. For example, change the cost outcome of the first terminal node from “(dhRR*dcHosp)/(1+r)^t” to “((dhRR*dcHosp)+dcDrug)/(1+r)^t.”

Conduct a probabilistic sensitivity analysis.

21. Highlight the Lupus decision node. Click “Analysis/Monte Carlo Simulation/Sampling (Probabilistic Sensitivity). . ./Sample all distributions.” Type 1000 in the white box labeled “Number of samples. . . .” Click “More options/Seed random number generator,” make sure 1 is in the white box, and click “OK/Begin.” When the program has finished running, click “Stats Format/Effectiveness,” put 4 in the white box labeled Decimal places, and click “OK/Close.” Compare your output with the output in the following 3 slides. To compare your output with the output in the first slide, click the rectangle at the bottom of the page labeled “Stats shown for” to toggle between output for Usual Care and Intervention. To compare your output with the output in the second slide, click “Graph/Distribution of Incrementals/Incremental Cost/OK” and then “Graph/Distribution of Incrementals/Incremental Effectiveness/OK.” To compare your output with the output in the third slide, repeat what you did to display the graphs and click on “Actions/Distribution Statistics.”
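Under the hood, the probabilistic sensitivity analysis redraws every distribution 1,000 times and recomputes the model for each draw. A stripped-down sketch of that resampling loop, using the distributions defined in steps 17 and 20 but a deliberately simplified cost expression (the real incrementals come from rerunning the whole tree each iteration):

```python
import numpy as np

rng = np.random.default_rng(1)   # analogous to seeding the generator with 1
n = 1000                         # number of samples, as in the exercise

# Redraw the input distributions from steps 17 and 20.
dcDrug = rng.normal(365, 50, n)      # cost of the intervention
dcHosp = rng.normal(10_000, 100, n)  # cost of a hospitalization
dhRR = rng.poisson(0.05, n)          # hospitalizations, Remission to Remission

# Toy stand-in for one cycle's incremental cost (drug cost plus the change in
# expected hospitalization cost); not the model's actual incremental cost.
inc_cost = dcDrug + dcHosp * (dhRR - 0.05)
print(round(float(np.mean(inc_cost)), 1))              # sample mean
print(round(float(np.percentile(inc_cost, 2.5)), 1),   # rough 95% interval,
      round(float(np.percentile(inc_cost, 97.5)), 1))  # like Distribution Statistics
```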
