
Performance Measures

The Joint Commission Journal on Quality and Patient Safety
February 2012, Volume 38, Number 2

Ongoing Professional Performance Evaluation (OPPE) Using Automatically Captured Electronic Anesthesia Data

Jesse M. Ehrenfeld, MD, MPH; Justin P. Henneman, MS; Robert A. Peterfreund, MD, PhD; Tyler D. Sheehan, BS; Feng Xue, MS; Stephen Spring, BA; Warren S. Sandberg, MD, PhD

In 2006 The Joint Commission updated the accreditation standards for the process of credentialing and privileging practitioners to make the process evidence based and more objective, facilitate continuous monitoring of performance, help identify substandard performance, and provide a basis for intervening when safety and quality of care issues are identified. Specifically, the revised standards require, for maintaining privileges, an Ongoing Professional Practice Evaluation (OPPE; effective since January 1, 2007).*1–4

Medical specialties employ numerous methods to examine physician performance, including evaluation of encounters with simulated patients, observation of patient care, peer assessment, medical record audits, and portfolio appraisals.5 In specialties in which data are both objective and saved, samples of data can be used to assess competence. For example, radiologists often double-read films and pathologists double-read slides to assess concordance between an expert reviewer and the physician being evaluated. Each of these methods is time consuming, labor intensive, expensive, and difficult to perform on a continuous basis.6 Furthermore, methods that employ direct observation have the potential to introduce an observation bias, partly because individuals behave differently when they know they are being watched.7–9

The Massachusetts General Hospital (Boston) is a large academic center providing anesthesia services for more than 49,000 procedures each year. In seeking to ensure compliance with the new Joint Commission physician credentialing and privileging standards, the size of the Department of Anesthesia, Critical Care and Pain Medicine (149 faculty members) and the volume of cases performed annually necessitated the development of a novel strategy to avoid the high resource and time costs associated with traditional methods of clinical performance evaluation.

* According to Medical Staff (MS) Standard MS.08.01.03, "Ongoing professional practice evaluation is factored into the decision to maintain existing privilege(s), to revise existing privileges, or to revoke an existing privilege prior to or at the time of renewal." According to Element of Performance 2, "The type of data to be collected is determined by individual departments and approved by the organized medical staff." MS.06.01.05, Element of Performance 9, stipulates the use of "relevant practitioner-specific data as compared to aggregate data, when available."


Article-at-a-Glance

Background: The Massachusetts General Hospital (Boston), a large academic center providing anesthesia services for more than 49,000 procedures each year, created an Ongoing Professional Practice Evaluation (OPPE) process that could use readily available, automatically captured electronic information from its vendor-provided anesthesia information management system.

Methods: The OPPE credentialing committee selected the following initial metrics: blood pressure (BP) monitoring, end tidal CO2 monitoring, and timely documentation of compliance statements. Baseline data on the metrics were collected in an eight-month period (January 1, 2008–August 31, 2008). In February 2009 information on the metrics was provided to the department's staff members, and the ongoing evaluation process began. On the basis of three months of data, final reports for physicians being credentialed were distributed. Each report included a listing for each metric of the total number of compliant cases and noncompliant cases and a comparison by percentage to the baseline departmental evaluation. A summary statement indicated whether a physician's performance was within the group representing 95% of all department physicians. Noncompliant cases were listed by medical record number and case date so providers and reviewers could examine individual cases.

Conclusion: A novel, automated, and continuous reporting system for physician credentialing that uses the existing clinical information system infrastructure can serve as a key element of a comprehensive clinical performance evaluation that measures both technical and generalizable clinical skill sets. It is not intended to provide a complete system for measuring competence but rather to serve as a first-round warning mechanism and metric scoring tool to identify problems and potential performance noncompliance issues.


Even if we developed a process that required only one hour of physician time and one hour of administrative time (both conservative estimates), the need to perform the OPPE process more frequently than annually would result in the sequestration of several hundred hours of physician and administrative time. Removing clinicians from direct patient care to comply with the OPPE mandate would be a significant financial burden and a drain on clinical efficiency. To avoid such a significant impact on clinical operations, we sought to create a process that could use readily available, automatically captured electronic information from our vendor-provided anesthesia information management system (AIMS) to address The Joint Commission's OPPE requirements. A fully implemented AIMS has been in place in each of our 70 anesthetizing locations since 2002. The AIMS data provide reliable and extensive documentation of clinical monitoring and physician practice patterns.

Our primary goal in creating the OPPE process was to develop a system that (1) requires little or no additional effort on the part of the clinician being evaluated and (2) minimizes or eliminates the modified-behavior effect that an observer or mock patient might create. We also attempted to create a system that would be unbiased in its measurement of clinical behavior, continuous, and relatively inexpensive to install and maintain.

Table 1. Common Methods of Performance Review

Review Method | Advantages | Disadvantages
Chart Review | Large sample size; randomized; no observer effect | Difficult to interpret; charts can be "smoothed" afterward; retrospective bias
Direct Observation | High level of detail; contextual; immediate feedback | Time consuming and expensive; observer effect; observer bias; difficult to review many cases
Simulated Patients | Controlled and repeatable | Time consuming and expensive; observer bias; observer effect
360-Degree Evaluation | Balanced/minimized bias | Time consuming and expensive
Video Review | Reduces observer effect | Time consuming and expensive; difficult to interpret; retrospective bias
Control Charting | No observer effect; continuous; easy to measure | Charts can be "smoothed" afterward; retrospective bias

Such a system, which could provide feedback that was both continuous and transparent, would enable physicians to self-assess their performance and make adjustments to correct issues before the actual credentialing process.

Methods

THE COMMITTEE'S CHARGE: DESIGN A SET OF CREDENTIALING METRICS

We began by establishing the OPPE credentialing committee in fall 2008 to design a set of meaningful credentialing metrics. This committee consisted of six senior staff physicians [including J.M.E., R.A.P., W.S.S. (chair)] representing a broad cross-section of the department's clinical activities, including pediatric, transplant, neurosurgical, orthopedic, and vascular anesthesia. The committee conducted a literature search and consulted with other hospitals and departments to generate and examine a list of existing methods of physician performance evaluation. The committee discussed the advantages and disadvantages of the commonly used methods of reviewing physician competency in its effort to design a solution modeled on the strengths of successful methods. A partial list of commonly used methods of evaluating clinical performance is provided in Table 1 (above).

A major goal of the committee was to ensure that metrics were consistent with national practice and centered on patient care. Thus, the committee looked closely at the American Society of Anesthesiologists (ASA) standards for monitoring and patient care as put forth by the ASA Standards and Practice Parameters Committee.10

To minimize any administrative burden and reduce the effect of observation and bias in our measurements, the committee ultimately decided to focus on extraction of readily available electronic AIMS data, thereby enabling the creation of an automatic, reliable, cost-effective process applied on a continuous, ongoing basis. We could easily modify the process over time to replace metrics or add new metrics as needed.

SELECTING THE INITIAL METRICS

By October 2008 the committee had selected the initial metrics: blood pressure (BP) monitoring, end tidal CO2 monitoring, and timely documentation of compliance statements.

Blood Pressure Monitoring: Requires that a physician document BP prior to induction of general anesthesia. Documentation of BP in the AIMS occurs automatically if the BP is measured. The induction of anesthesia was inferred either from manually entered comments or from the automatic detection of inhalation anesthetics in the exhaled gases.

End Tidal CO2 Monitoring: Requires that a physician monitor the end tidal CO2 level at least once during the provision of general anesthesia. Documentation of end tidal CO2 monitoring is automatic if the monitor is functional and connected.

Timely Documentation of Compliance Statements: Requires that a physician document all the necessary case compliance/attestation statements that make a record billable no more than 120 minutes after the end of anesthesia care. This documentation is part of the normal clinical documentation work flow.

BASELINE DATA COLLECTION

After designing the three metrics, we performed a baseline set of measurements over an eight-month period (January 1, 2008–August 31, 2008) to validate the metrics. Our goal was to be able to set a performance threshold for passing each metric that would distinguish acceptable from unacceptable performance while taking into consideration any limitations or artifacts contained within the electronic AIMS database. The committee ultimately decided to set the threshold for passing each metric at a level that encompassed 95% of all physicians in the baseline dataset; that is, 95% of physicians met the metric.

Of the 149 anesthesiologists in the department, 128 (86%) were subject to the metrics. The remaining 18 anesthesiologists (9 pain physicians, 4 critical care physicians, and 5 physicians who work exclusively in our preoperative evaluation clinic) were not subject to the metrics.

Electronic Database Queries. Using SQL Query Analyzer (Microsoft, Redmond, Washington), we developed a set of electronic database queries to extract data from our AIMS. For each case during the baseline data collection period, the query returned the unique case identifier, date of service, physician identification, operating room (OR), type of anesthesia (general, monitored anesthesia care [MAC], or regional), and the specific variables relevant to each of the three metrics, as described below.
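To make the shape of these queries concrete, a minimal sketch follows. All table and column names (anesthesia_case, case_id, and so on) are hypothetical placeholders for illustration, not the vendor's actual AIMS schema.

```sql
-- Minimal sketch of the case-level extraction over a hypothetical schema.
SELECT
    c.case_id,            -- unique case identifier
    c.service_date,       -- date of service
    c.physician_id,       -- physician identification
    c.operating_room,     -- operating room (OR)
    c.anesthesia_type     -- general, MAC, or regional
FROM anesthesia_case AS c
WHERE c.service_date >= '2008-01-01'
  AND c.service_date <  '2008-09-01';  -- baseline collection period
```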

■ BP Monitoring. To perform the BP monitoring query, the following metric-specific variables were obtained:
– Start of Anesthesia Care Time (manually documented)
– Anesthesia Induction Time (manually documented)
– First Inhalational Agent Name (automatically recorded from gas analyzer)
– First Inhalational Agent Value (automatically recorded from gas analyzer)
– First BP Measurement Time (automatically recorded from physiologic monitor)
– First Systolic BP Value (automatically recorded from physiologic monitor)
– First Diastolic BP Value (automatically recorded from physiologic monitor)

The query compared the time stamp of the first BP recorded in the chart to the time stamp associated with the anesthesia induction time (the earlier of the manually documented anesthesia induction time or the time of the first inhalational agent).* The logic also excluded nonphysiologic BP values (from zeroed but disconnected arterial lines) from consideration.
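A hedged sketch of this comparison logic, over the same hypothetical schema used above: the induction time is taken as the earlier of the documented induction event and the first significant inhalational agent reading, and the plausibility range used to screen out nonphysiologic values is illustrative only.

```sql
-- Sketch of the BP-prior-to-induction check (hypothetical schema).
SELECT
    c.case_id,
    CASE WHEN bp.first_bp_time <= ind.induction_time
         THEN 'PASS' ELSE 'FAIL' END AS bp_metric
FROM anesthesia_case AS c
JOIN (
    -- Earlier of the manual induction note and the first agent reading;
    -- falls back to the manual time when no agent reading exists.
    SELECT case_id,
           CASE WHEN first_agent_time IS NOT NULL
                     AND first_agent_time < manual_induction_time
                THEN first_agent_time
                ELSE manual_induction_time END AS induction_time
    FROM induction_events
) AS ind ON ind.case_id = c.case_id
JOIN (
    -- First plausible BP per case; screens out zeroed-but-disconnected
    -- arterial-line artifacts via an illustrative physiologic range.
    SELECT case_id, MIN(recorded_at) AS first_bp_time
    FROM bp_observations
    WHERE systolic BETWEEN 40 AND 300
    GROUP BY case_id
) AS bp ON bp.case_id = c.case_id
WHERE c.anesthesia_type = 'GENERAL';
```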

We chose BP measurement prior to induction as a quality measure because it is frequently not adhered to. Beyond our own collective clinical experience, the authors' research into the documentation of BP during anesthesia has shown significant evidence that BP monitoring does not always meet our expectations and that progress is needed to enhance patient care.11 We therefore believe that standard practices can, in many cases, make for ideal metrics of patient care; that is, how often do we adhere to the standard? Although we hope that standard practice is reflected in routine care, we know that it sometimes is not. For example, administering on-time antibiotics prior to surgical incision is a standard operating procedure that is not always adhered to.12,13 Thus, insofar as this task (or BP monitoring, for that matter) does not actually occur in 100% of cases, it would appear to be a valid quality measure with relevance to physician performance and patient care.

* For the purposes of our query we used the following thresholds: isoflurane > 0.1 minimum alveolar concentration (MAC); sevoflurane > 0.2 MAC; desflurane > 0.5 MAC.

■ End Tidal CO2 Monitoring. The primary purpose of the end tidal CO2 query was to determine whether or not end tidal CO2 was measured during a given general anesthesia case. Whenever it is measured, the AIMS creates a unique database entry every 60 seconds, which includes a number of data elements (for example, time stamp, value, data source). Because most cases contain hundreds of end tidal CO2 measurements, we programmed our SQL query to simply count the number of measurements per case and associate that count with a single unique case identifier.
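A count-per-case query of that kind might look like the following sketch (table names again hypothetical); a count of zero identifies a general anesthesia case with no recorded end tidal CO2 entries.

```sql
-- Sketch: count end tidal CO2 entries per general anesthesia case.
SELECT
    c.case_id,
    COUNT(e.observation_id) AS etco2_count   -- 0 = never monitored
FROM anesthesia_case AS c
LEFT JOIN etco2_observations AS e
       ON e.case_id = c.case_id
WHERE c.anesthesia_type = 'GENERAL'
GROUP BY c.case_id;
```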

The ASA national standard for end tidal CO2 monitoring is that "continual monitoring for the presence of expired carbon dioxide shall be performed unless invalidated by the nature of the patient, procedure or equipment."10 However, there are circumstances in which CO2 monitoring is not applied at all when patients undergo general anesthesia (particularly when a MAC case is converted to a general anesthetic). We therefore targeted use of end tidal CO2 monitoring at any point during a case as our first version of this metric, but we expect that a future version might evaluate the frequency and duration of monitoring.

■ Timely Documentation of Compliance Statements. To create the data needed for the query regarding timely documentation of compliance, the "End of Anesthesia Care Time" (a particular time stamp recorded by the anesthetist at case conclusion) was obtained from the AIMS database. In addition, all time stamps associated with each of the required compliance statements were obtained and compared.
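One plausible formulation of that comparison, assuming a hypothetical compliance_statements table that holds one signed, time-stamped row per required statement per case:

```sql
-- Sketch: were the compliance statements all signed within 120 minutes
-- of the end of anesthesia care? (Hypothetical schema.)
SELECT
    c.case_id,
    CASE WHEN MAX(s.signed_at) <= DATEADD(minute, 120, c.end_of_care_time)
         THEN 'PASS' ELSE 'FAIL' END AS documentation_metric
FROM anesthesia_case AS c
JOIN compliance_statements AS s
  ON s.case_id = c.case_id
GROUP BY c.case_id, c.end_of_care_time;
```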

We chose this metric because we believe that timely documentation facilitates communication. Downstream care providers need complete documentation to make the best clinical decisions for their patients. Given this metric's direct effect on patient care, it is important to assess it as an aspect of the quality of care provided to patients, just as, say, outstanding dictation reports are frequently used as a compliance metric. The time frame of two hours was selected because the availability of complete electronic charts to providers of downstream care (for example, the postanesthesia care unit, intensive care unit [ICU], and general floors) is an important goal for our department. Although billable aspects of care might not necessarily reflect quality of care, they do influence a hospital's ability to provide care. In addition, it is a matter of growing concern whether a caregiver's economic performance, which is often influenced by billing metrics, is in fact a valid criterion for determining credentialing or appointment of staff.14

It is important that criteria reflect expected failure rates. Any criterion that has a near-perfect passing rate might be a poor measure of performance and might not contribute to a potential for quality improvement.15

CASE EXCLUSIONS

Several case types were excluded from the baseline assessment. Because of clinical practice patterns, pediatric cases were excluded from the BP metric, as were all cases in which patients were noted to arrive in the OR already intubated; such patients are almost universally transported under sedation/general anesthesia while on a transport monitor. Cases in which the anesthetic delivered did not include a significant inhalational agent concentration (see the footnote above) were also excluded. Therefore, cases performed under total intravenous anesthesia were excluded from capture. Finally, we elected to exclude cases in which a transfer of care occurred, so as not to penalize physicians who had performed a portion of a case but transferred it to another physician who then failed to document compliance appropriately. A separate SQL query was written to find and demarcate cases in which a transfer of care occurred.
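The transfer-of-care query could be as simple as the following sketch, assuming a hypothetical case_attending table with one row per attending assignment per case: any case with more than one distinct attending of record is demarcated for exclusion.

```sql
-- Sketch: demarcate cases in which a transfer of care occurred.
SELECT a.case_id
FROM case_attending AS a
GROUP BY a.case_id
HAVING COUNT(DISTINCT a.physician_id) > 1;
```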

BASELINE DATA ANALYSIS

Data collected by the SQL query were analyzed using a spreadsheet program. Cases were determined to either pass or fail each of the three metrics on the basis of our predefined standards. For each of the cases, which were counted on a per-physician basis, passing/nonpassing percentages were calculated. If an individual physician performed fewer than 60 cases during the assessment, he or she was removed from the analysis because of low case volume and underwent a different evaluative process. A summary was then created, ranking each physician by his or her percentage of passing cases. As established by the OPPE credentialing committee, the bottom 5% of the group was flagged as "Not Passing the Metric" for each of the three parameters.
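Although the authors performed this rollup in a spreadsheet, the same per-physician aggregation can be expressed directly in SQL; the sketch below assumes a hypothetical metric_results table holding the per-case PASS/FAIL output of the earlier queries.

```sql
-- Sketch: per-physician passing percentage for one metric, excluding
-- physicians with fewer than 60 cases in the assessment period.
SELECT
    r.physician_id,
    COUNT(*) AS total_cases,
    100.0 * SUM(CASE WHEN r.result = 'PASS' THEN 1 ELSE 0 END)
          / COUNT(*) AS pct_passing
FROM metric_results AS r
GROUP BY r.physician_id
HAVING COUNT(*) >= 60
ORDER BY pct_passing ASC;   -- lowest performers surface first for review
```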

ONGOING PHYSICIAN PERFORMANCE REPORTING

In February 2009, after establishing passing thresholds for each of our three metrics, we provided the information on the metrics to the department's staff members and began our ongoing evaluation process. No major educational intervention was performed. Three months of data for each physician were then evaluated according to the three metrics, and a confidential report was provided to each physician.

To present individual reports to clinicians and use our administrative staff efficiently, an automated data reporting system was implemented using a spreadsheet program with macros. Two types of confidential reports were produced: (1) final reports for physicians being credentialed within the given three-month period and (2) interim reports for all others. The reports were then disseminated to the appropriate parties for review.

Figure 1. Sample Ongoing Professional Practice Evaluation (OPPE) Report. A simple summary description of physician performance appears in the physician performance component of the sample report, below the summary data tables for each metric.

Each report includes a listing for each metric of the total number of compliant cases, the total number of noncompliant cases, and a comparison by percentage to the baseline departmental evaluation. A summary statement indicates whether the physician's performance was within the group representing 95% of all department physicians. Noncompliant cases are listed by medical record number and case date at the bottom of the panel so that providers and reviewers can then examine individual cases.

Reports

INDIVIDUAL CONFIDENTIAL REPORTS

The results of each metric measurement are continuously compiled for every attending clinician who practices in a clinical environment where the AIMS records clinical data and generates the anesthesia record. Individualized confidential reports include the total number of compliant and noncompliant cases, as well as the percentage of passing scores.

A simple summary description of physician performance appears in the physician performance component of the sample report, below the summary data tables for each metric, as shown in Figure 1 (above). The presentation of aggregate performance data for the entire department is intended to provide the individual clinician an opportunity to perform a comprehensive self-assessment against benchmark performance.

COMPARISON OF INDIVIDUALS WITH THE GROUP

Graphs, one for each of the three metrics, were also created to plot individuals against the group. These graphs, which were not included in the confidential individual reports, helped to show the overall trend of each metric. These data are intended to allow those managing the credentialing process to evaluate metrics over time and replace and/or modify them as necessary. A sample summary plot for the Timely Documentation of Compliance Statements metric is shown in Figure 2.

BASELINE DATA FOR THE THREE METRICS

The baseline data for the three metrics are shown in Table 2, along with the average compliance scores for 2010 for comparison.

Discussion

We have created a novel, automated, and continuous reporting system for physician credentialing, which uses our existing clinical information system infrastructure, in an effort to address The Joint Commission's OPPE requirements for quantitative metrics. This system is automated, continuous, objective, and relatively inexpensive to implement. Its basic framework can be refined and developed easily. The system can therefore serve as a key element of a comprehensive clinical performance evaluation that measures both technical and generalizable clinical skill sets. We avoided any metrics (such as reintubation rates or unplanned ICU admissions) that might penalize practitioners in high-risk specialties or would require risk adjustment for case mix.

The system requires no additional effort on the part of the clinician being scrutinized and reduces the introduction of evaluation artifact in situations in which the presence of an observer or a simulated patient scenario modifies behavior. It also has minimal costs to install and maintain and is unbiased in its measurement of clinical performance. It is not intended to provide a complete system for measuring competence but rather to serve as a first-round warning mechanism and metric scoring tool to identify problems and potential performance noncompliance issues.

Numerous problems can arise in the process of delivering anesthesia care (or any clinical practice), such as failure to perform basic patient monitoring in a fashion that ensures the safety of the patient. Continuous and transparent systems such as ours enable physicians to self-assess their performance and make adjustments to correct issues before engaging in the credentialing process.

Although the process of credentialing professionals in health care is not new,16–18 there are few in-depth analyses of physician performance evaluation, and most are focused on overtly technical skills; some address generalized professional skill sets. Generalizable skills and specialized knowledge are both important to clinical care and should be included in any competence metric.19 Surgery and endoscopy practices place a high value on quantifying technical skills.20–22 In anesthesia, it has been suggested that technical clinical performance can also be readily evaluated by simulation23 or control chart methodology.24

The development of our credentialing system reflects similar work reported by others.25 Schartel and Metro suggest that evaluation should take place close to the time of the clinical encounter and should be intended not only to correct mistakes but also to continuously improve performance.26 Fried and Feldman state that technical performance measurements should be objective and practical for a clinical specialty.27 Hill argues for standardized credentialing,28 but we believe that it is nearly impossible to standardize the clinical behavior of an entire profession. Credentialing parameters should be specific enough to enhance patient care by resolving issues within a specialty.29 In any case, because of uneven adoption of technology (for example, clinical data coding is not universal), it would be unrealistic to set standards beyond the capacity of smaller hospitals.30 The purpose of credentialing is to improve patient care; credentialing criteria should thus be dictated by each specialty's organization and by the health care institutions themselves.

It should be noted that the purpose of our work was to design a credentialing process suitable to our own department, with its particular work flow and clinical practice, rather than for all medical specialties or even anesthesia as a whole. These metrics are intended to apply to anesthesiologists in clinical practice in the OR, who constitute the vast majority of hospital-based practicing anesthesiologists, and will not apply to all anesthesiologists, such as pain and ICU physicians. However, we believe that our results as presented have broad applicability to fields in which significant structured clinical and compliance documentation are important components of clinical practice.

Other fields of medicine, particularly ICUs, have used a similar methodology to enact a system of defining and measuring metrics of patient care and physician performance. For example, Wahl et al., without incurring the need for additional personnel, used a computerized system to collect data on ICU core measures: glucose management, head of bed angle, prophylaxis, and ventilator weaning.31

MIDCOURSE CORRECTIONS AND NEXT STEPS

We are in the process of developing additional metrics, coupled with a determination of which metrics to add or remove on the basis of their performance over time. Potential future metrics may include those based on total case duration or "time-in-flight," as opposed to the case-based metrics (items that occur only once per case) that we used initially. This adjustment reflects the fact that some of our clinicians (for example, members of our cardiac group) perform a small number of long cases, as opposed to the majority of clinicians, who administer a moderate number of relatively short-duration anesthetics. The long-term goal of the credentialing system is to develop a comprehensive group of metrics that best represents the scope of clinical OR anesthesiology.

Figure 2. Sample Summary Plot. This sample summary plot for the Timely Documentation of Compliance Statements metric displays the percentage of passing cases for just over 100 physicians. The mean baseline passing value (horizontal line) and the failure cutoff point (vertical line) are indicated.


Table 2. Summary Results*

Metric | No. of Physicians Evaluated at Baseline | Group Mean Baseline Performance (Jan 1, 2008–Mar 31, 2008) | Group Mean Current Performance (Jan 1, 2010–Dec 31, 2010)
End Tidal CO2 Monitoring | 90 | 98.8% | 99.2%
BP Prior to Induction | 82 | 92.0% | 96.2%
Compliance Statements within 120 minutes | 103 | 97.9% | 99.2%

* BP, blood pressure.

We started by reporting individual provider performance at quarterly intervals but moved to a monthly reporting system in June 2009. We have taken several steps to improve our credentialing system as each round of scores is reviewed and assessed. The passing mark of 95% was not assigned to declare those outside the bounds as having failed, but rather as being "of interest," to determine why an individual has a different clinical practice. It was determined that review by the chair or another appropriate senior member was warranted (with direct observation or case review) to ensure that the practice was in fact safe. To minimize direct observation (and any associated new bias), since January 2010 we have implemented quarterly metrics reviews to allow clinicians to self-adjust their practice to fit the guidelines. Accordingly, if a departmental member still falls below the 95% passing rate for two thirds of the metrics, he or she will meet with the chair for a Focused Practice Performance Evaluation (FPPE), as in the case, for example, in which an attending anesthesiologist did not meet the metrics for both end tidal CO2 monitoring and timely documentation. The formal report delivered to the staff member contained detailed information describing the measures and how each of the cases did not meet the metric. A senior member of the anesthesia department, in reviewing the cases, was unable to find any nonclinical issue that might have contributed to the results, and determined that the staff member did not perform according to standard operating protocols, resulting in the staff member's being flagged for closer (monthly) observation. The senior member met with the staff member and discussed how the staff member could improve compliance with the measures. Closer review was sustained until scores improved.

This work was supported by grant 5T32GM007592 from the National Institutes of Health and by the Massachusetts General Hospital Department of Anesthesia, Critical Care and Pain Medicine.

Jesse M. Ehrenfeld, MD, MPH, formerly Director, Anesthesia Informatics Fellowship, Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, and Assistant Professor of Anesthesia, Harvard Medical School, Boston, is Director, Perioperative Data Systems Research (PDSR); Assistant Professor of Anesthesiology, Division of Multispecialty Adult Anesthesiology; Assistant Professor of Biomedical Informatics; Director, Center for Evidence Based Anesthesia; and Medical Director, Perioperative Quality, Vanderbilt University School of Medicine, Nashville, Tennessee. Justin P. Henneman, MS, formerly Research Assistant, Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, is a Medical Student, North Shore LIJ School of Medicine, Hofstra University, Hempstead, New York. Robert A. Peterfreund, MD, PhD, is Chair, Quality Assurance Committee, and Anesthetist, Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, and Associate Professor of Anesthesia, Harvard Medical School. Tyler D. Sheehan, BS, and Feng Xue, MS, are Senior Financial Analysts, and Stephen Spring, BA, is Administrative Director of Finance and Technology, Department of Anesthesia, Critical Care and Pain Medicine. Warren S. Sandberg, MD, PhD, formerly Associate Professor of Anesthesia, Harvard Medical School, is Chairman, Department of Anesthesiology, and Professor of Anesthesiology, Division of Multispecialty Adult Anesthesiology, Vanderbilt University School of Medicine. Please address correspondence and requests for reprints to Jesse M. Ehrenfeld, jesse.ehrenfeld@vanderbilt.edu.

References

1. The Joint Commission: Revised credentialing and privileging standards. This Month at the Joint Commission, State Hospital Association Edition, Jun 2006. Accessed Dec 14, 2011. http://www.njha.com/qualityinstitute/pdf/614200630015PM29.pdf.
2. The Joint Commission: Telephone Conference Call Transcript. Audio Conference with Joint Commission President Dennis S. O'Leary, M.D: Joint Commission Credentialing and Privileging Standard (online; no longer available).
3. The Joint Commission. 2012 Comprehensive Accreditation Manual for Hospitals: The Official Handbook. Oak Brook, IL: Joint Commission Resources, 2011.
4. Freedman S. How 2007 Joint Commission standards expand hospital peer review. Patient Safety & Quality Healthcare. 2007;4(5):14–16.
5. Donnelly LF, Strife JL. Performance-based assessment of radiology faculty: A practical plan to promote improvement and meet JCAHO standards. AJR Am J Roentgenol. 2005;184(5):1398–1401.
6. Overeem K, et al. Doctor performance assessment in daily practise: Does it help doctors or not? A systematic review. Med Educ. 2007;41(11):1039–1049.


7. Ary D, et al. Introduction to Research in Education, 8th ed. Belmont, CA: Wadsworth, 2009.
8. Rose G, Barker DJ. Epidemiology for the uninitiated. Observer variation. Br Med J. 1978 Oct 7;2(6143):1006–1007.
9. Zegiob LE, Arnold S, Forehand R. An examination of observer effects in parent-child interactions. Child Development. 1975;46(2):509–512.
10. American Society of Anesthesiologists. Standards for Basic Anesthetic Monitoring. Oct 20, 2010. Accessed Dec 14, 2011. http://www.asahq.org/For-Members/~/media/For%20Members/documents/Standards%20Guidelines%20Stmts/Basic%20Anesthetic%20Monitoring%202011.ashx.
11. Ehrenfeld JM, et al. Automatic notifications mediated by anesthesia information management systems reduce the frequency of prolonged gaps in blood pressure documentation. Anesth Analg. 2011;113(2):346–363.
12. Classen DC, et al. The timing of prophylactic administration of antibiotics and the risk of surgical-wound infection. N Engl J Med. 1992 Jan 30;326:281–286.
13. Steinberg JP, et al. Timing of antimicrobial prophylaxis and the risk of surgical site infections: Results from the Trial to Reduce Antimicrobial Prophylaxis Errors. Ann Surg. 2009;250(1):10–16.
14. Riley DW. Legal forum. Economic credentialing of physicians: New criteria and evaluation of physicians. Spine. 1996;21(1):141–146.
15. Fung V, et al. Meaningful variation in performance: A systematic literature review. Med Care. 2010;48(2):140–148.
16. Starr P. The Social Transformation of American Medicine. New York: Basic Books, 1982.
17. Wilson FC. Credentialing in medicine. Ann Thorac Surg. 1993;55(5):1345–1348.
18. Hernandez AM. Trends in health care practitioner credentialing. J Health Care Finance. 1998;24(3):66–70.
19. Wimmers PF, Fung CC. The impact of case specificity and generalisable skills on clinical performance: A correlated traits-correlated methods approach. Med Educ. 2008;42(6):580–588.
20. Sharma VK, Coppola AG Jr, Raufman JP. A survey of credentialing practices of gastrointestinal endoscopy centers in the United States. J Clin Gastroenterol. 2005;39(6):501–507.
21. Dickinson I, et al. Guide to the assessment of competence and performance in practising surgeons. ANZ J Surg. 2009;79(3):198–204.
22. Tedesco MM, et al. Simulation-based endovascular skills assessment: The future of credentialing? J Vasc Surg. 2008;47(5):1008–1011; discussion 1014.
23. Zausig YA, et al. Simulation as an additional tool for investigating the performance of standard operating procedures in anaesthesia. Br J Anaesth. 2007;99(5):673–678.
24. Runcie CJ. Assessing the performance of a consultant anaesthetist by control chart methodology. Anaesthesia. 2009;64(3):293–296.
25. Bozadjian EM. Quality improvement, credentialing, and competency. Int Anesthesiol Clin. 1999;37(4):47–57.
26. Schartel SA, Metro DG. Evaluation: Measuring performance, ensuring competence, achieving long-term excellence. Anesthesiology. 2010;112(3):519–520.
27. Fried GM, Feldman LS. Objective assessment of technical performance. World J Surg. 2008;32(2):156–160.
28. Hill D. The case for standards. Uniform physician credentialing process can save physician practices and hospitals time, money and frustration. Health Manag Technol. 2004;25(10):48–49.
29. Hanson J. Measuring physician performance. Building an effective physician performance system starts with transparency. Health Manag Technol. 2008;29(4):10, 2–3.
30. Rebagliati GS. Physician-specific OPPE models in academic medical centers. Ann Surg. 2009;249(4):700–701.
31. Wahl WL, et al. Use of computerized ICU documentation to capture ICU core measures. Surgery. 2006;140(4):684–689; discussion 690.

