Evidence-Based Geriatric Nursing - Springer Publishing


Evidence-Based Geriatric Nursing Protocols for Best Practice

prior to implementing the monitoring activity allows for precise data collection. It also facilitates benchmarking with other organizations when the data elements are similarly defined and the data collection methodologies are consistent. Imagine that we want to monitor falls on the unit. The initial questions would be as follows: What is considered a fall? Does the patient have to be on the floor? Does a patient slumping against the wall or onto a table while trying to prevent himself or herself from falling to the floor constitute a fall? Is a fall due to physical weakness or orthostatic hypotension treated the same as a fall caused by tripping over an obstacle? The next question would be the following: Over what time frame are falls measured: a week, a fortnight, a month, a quarter, a year? The time frame is not a matter of convenience but of accuracy. To monitor falls accurately, we need to identify a time frame that will capture enough events to be meaningful and interpretable from a quality improvement point of view. External indicator definitions, such as those defined for use in the National Database of Nursing Quality Indicators, provide guidance for both the indicator definition and the data collection methodology for nursing-sensitive indicators. The nursing-sensitive indicators reflect the structure, process, and outcomes of nursing care. The structure of nursing care is indicated by the supply, skill level, and education/certification of the nursing staff. Process indicators measure aspects of nursing care such as assessment, intervention, and registered nurse

(RN) job satisfaction. Patient outcomes determined to be nursing sensitive are those that improve with a greater quantity or quality of nursing care (e.g., pressure ulcers, falls, intravenous [IV] infiltrations); outcomes that do not respond to the quantity or quality of nursing care (e.g., frequency of primary C-sections, cardiac failure) are not considered nursing sensitive (see http://www.nursingquality.org/FAQPage.aspx#1 for details). Several nursing organizations across the country participate in data collection and submission, which allows for a robust database and excellent benchmarking opportunities.
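The value of a precise definition and time frame can be sketched in a few lines of code. Fall indicators of the NDNQI style are commonly normalized as falls per 1,000 patient days; the function name and the figures below are illustrative assumptions, not data from this chapter.

```python
def fall_rate_per_1000_patient_days(num_falls, patient_days):
    """Rate of falls normalized to 1,000 patient days.

    Assumes "fall" has already been precisely defined (e.g., whether a
    slump onto a table counts), so every counted event is comparable
    across units and organizations.
    """
    if patient_days <= 0:
        raise ValueError("patient_days must be positive")
    return 1000 * num_falls / patient_days


# Illustrative quarter: 6 counted falls over 2,480 patient days on the unit.
rate = fall_rate_per_1000_patient_days(6, 2480)
print(round(rate, 2))  # 2.42
```

Because the rate is normalized to patient days rather than reported as a raw count, two units with different censuses, or the same unit in a short and a long reporting period, can be benchmarked on the same scale.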

Additional indicator attributes include validity, sensitivity, and specificity. Validity refers to whether the measure "actually measures what it purports to measure" (Wilson, 1989). Sensitivity and specificity refer to the ability of the measure to capture all true cases of the event being measured, and only true cases. We want a performance measure to identify true cases as true and false cases as false, not a true case as false or a false case as true. Sensitivity of a performance measure is the likelihood of a positive test when the condition is present. Lack of sensitivity is expressed as false negatives: The indicator calculates a condition as absent when in fact it is present. Specificity refers to the likelihood of a negative test when the condition is not present. False positives reflect lack of specificity: The indicator calculates that a condition is present when in fact it is not. Consider the case of depression and the recommendation in Chapter 9, Depression in Older Adults, to use the Geriatric Depression Scale, in which a score of 11 or greater is indicative of depression. How robust is this cutoff score of 11? What is the likelihood that someone with a score of 9 or 10 (i.e., negative for depression) might actually be depressed (false negative)? Similarly, what is the likelihood that a patient with a score of 13 would not be depressed (false positive)?
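The arithmetic behind these questions is simple enough to sketch. The counts below are hypothetical, not validation data for the Geriatric Depression Scale; they only show how sensitivity and specificity summarize the false-negative and false-positive questions just posed.

```python
def sensitivity(true_pos, false_neg):
    """Proportion of truly positive cases the indicator flags as positive."""
    return true_pos / (true_pos + false_neg)


def specificity(true_neg, false_pos):
    """Proportion of truly negative cases the indicator flags as negative."""
    return true_neg / (true_neg + false_pos)


# Hypothetical screening results for a cutoff score of 11:
# 40 depressed patients flagged (true positives), 10 missed (false negatives),
# 85 non-depressed patients cleared (true negatives), 15 flagged (false positives).
print(sensitivity(40, 10))  # 0.8  -> 20% of true cases are missed
print(specificity(85, 15))  # 0.85 -> 15% of non-cases are wrongly flagged
```

Lowering the cutoff would raise sensitivity (fewer depressed patients missed) at the cost of specificity (more non-depressed patients flagged), which is exactly the trade-off the cutoff of 11 is meant to balance.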

Reliability means that results are reproducible; the indicator measures the same attribute consistently across the same patients and across time. Reliability begins with a precise definition and specification, as described earlier. A measure is reliable if different people calculate the same rate for the same patient sample. The core issue of reliability is measurement error, or the difference between the actual phenomenon and its measurement: The greater the difference, the less reliable the performance measure. For example,
