Prescription Drug Monitoring Programs: An Assessment of the Evidence for Best Practices

Toward a checklist of PDMP best practices

This paper can be considered a step toward developing an evidence-based checklist of PDMP best practices that could be used to evaluate a PDMP. Each practice would be defined operationally, and where possible and appropriate, quantitative metrics indicating success in carrying out the practice would be specified. Once parameters are established for each practice's definition and metrics, annual or semiannual surveys of PDMPs could track their adoption. Some candidate practices considered below are sufficiently well defined and arguably already have enough evidential support to warrant their inclusion in a compendium of best practices, but many need more clarification, specificity, and evidence of effectiveness to support their inclusion. For example, practices in PDMP user recruitment, enrollment, and education need to be evaluated, such as the 2012 statutes in Kentucky, New York, Tennessee, and Massachusetts mandating PDMP enrollment and use. For demonstration purposes only, a checklist of the candidate practices considered below is presented in Appendix A.
III. Methods: Assessing the Evidence Base for Practice Effectiveness

Literature search

As the first step in assessing the evidence base for practice effectiveness, we conducted a systematic review of the medical (PubMed), psychological (PsycINFO), and economics (EconLit) literature through November 2011 for articles pertaining to the effectiveness of PDMPs and PDMP best practices, using a predetermined set of search terms. Search terms included prescription drug monitoring, prescription monitoring, doctor shopping, multiple prescribers, unsolicited reporting, and proactive reporting. All articles from peer-reviewed journals published in English were considered for inclusion. Abstracts identified through searches were reviewed to clarify each publication's relevance, and eligible articles were retrieved and read to further verify each study's applicability. These searches were expanded by reviewing the references cited in relevant articles. Articles were excluded if the data did not include outcome measures that would allow us to report on the effectiveness of PDMPs or of the best practice examined. In later drafts of this white paper, the literature search was extended to May 2012.

Other literature was identified from a review of documents listed on the PDMP COE website (www.pmpexcellence.org) and on individual states' PDMP websites, and from discussion with PDMP COE staff.
We identified written ("documented") evidence of expert opinion or consensus on best practices from a review of the Alliance of States with Prescription Monitoring Programs and National Alliance for Model State Drug Laws websites (www.pmpalliance.org and www.namsdl.org), particularly practices specified in the 2010 Model Act. Other potential best practices were identified from discussions with experts in the field.

Data extraction and categorization of evidence

Researchers extracted data on study characteristics from the articles and other sources of evidence identified, and summarized the combined evidence for each potential best practice in descriptive and tabular formats. The tabular summary of evidence drew upon and was adapted from guidance provided by several sources on grading the scientific strength of evidence (e.g., Lohr, 2004; Owens et al., 2010). The criteria outlined by these authors include a hierarchical evaluation of the study design, the risk of bias, the quantity of the evidence (such as the number of studies), the directness of the evidence, the consistency of the evidence, and the precision and magnitude of the estimates. Because of the paucity of studies found on PDMP best practices, we focused our analysis on summarizing the type and level of evidence available, the number of research studies, and, where applicable, key findings and consistency of the research evidence. Type of evidence was categorized into two major classes: published or