CPSI, Root Cause Analysis Workbook - Paediatric Chairs of Canada

Implementing the Action Plan

The ultimate success of any RCA process depends upon the actions taken by the organization in response to the recommendations of the team. All root cause analyses completed in an organization should be tracked, and a designated individual (quality specialist or similar) should be assigned responsibility for follow-up of all RCA recommendations. The board of trustees and senior staff should be provided with frequent status updates on the implementation of all action plans.

When planning the implementation of actions, the organization must consider:
• who will be affected by the action(s);
• the likelihood of success;
• fit with the organization's capabilities;
• compatibility with the organization's objectives;
• the likelihood of engendering other adverse events;
• receptivity by management, staff and physicians;
• barriers to implementation;
• implementation time, i.e., long-term versus short-term solution;
• cost; and
• measurability.

The organization should also consider pilot testing or usability testing of interventions prior to broad implementation, especially in situations where substantial changes in process are planned. The use of small cycles of change (e.g., PDSA or Plan-Do-Study-Act cycles) can be beneficial.

Measure and Evaluate the Effectiveness of Actions

The purpose of implementing system changes is to make the system safer. However, the possibility exists that well-intentioned and well-thought-out recommendations may not have the desired effect once put into practice.
As with any quality improvement initiative, the effectiveness of the implemented recommendations must be measured to determine whether the changes made the system safer, had no effect on the safety of the system, or, in the worst-case scenario, actually made the system less safe. If surveillance indicates that, for whatever reason, the changes did not have the intended effect, the organization needs to revisit the issue to identify alternative solutions. "Rather than being a policing activity, monitoring implements professional accountability and contributes to rational management by documenting the quality of the product." 54

There are three general types of measurements: structure measures, process measures and outcome measures. It is important to measure a process when it is stable (i.e., after the variable performance during the initial implementation phases) and to know whether the measured outcomes match expected ones. Most data analysis will involve comparison of organizational data to a point of reference, such as internal comparisons, aggregate external referenced databases, practice guidelines, and/or parameters and performance targets.

Measurement strategies determine the effectiveness of the action, not the completion of the action. For example, a measurement strategy would calculate the percentage of newly admitted patients assessed for fall risk, not the percentage or number of staff trained to complete falls assessments. Measurement should be quantifiable, with a defined numerator and denominator (if appropriate). The sampling strategy and time frame for measurement must be clearly stated (for example, random sampling of 15 charts per quarter).
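The numerator/denominator approach described above can be sketched in code. This is a minimal illustration, not part of the workbook: the function name, the chart-ID data layout, and the default sample size of 15 (taken from the quarterly-sampling example) are all assumptions made for the sketch.

```python
import random


def fall_risk_compliance_rate(admitted_charts, assessed_charts,
                              sample_size=15, seed=None):
    """Estimate per cent compliance with fall-risk assessment from a
    random sample of charts.

    admitted_charts: chart IDs of newly admitted patients (the
        denominator pool for the measure).
    assessed_charts: set of chart IDs with a completed fall-risk
        assessment (defines the numerator).
    sample_size: number of charts to audit per period (e.g., per
        quarter), per the stated sampling strategy.
    seed: optional seed so an audit sample can be reproduced.
    """
    rng = random.Random(seed)
    # Random sampling of charts, as the measurement strategy requires.
    sample = rng.sample(list(admitted_charts),
                        min(sample_size, len(admitted_charts)))
    # Numerator: sampled charts with a documented assessment.
    numerator = sum(1 for chart in sample if chart in assessed_charts)
    # Denominator: charts actually audited.
    return 100.0 * numerator / len(sample)
```

Note that the measure audits patient charts, not staff training records, which keeps it an effectiveness measure rather than a completion measure.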
It is important to set realistic performance thresholds (for example, a target of 100 per cent compliance should not be set unless it can be met).

Measurement may take the form of voluntary reporting, intervention tracking, direct observation of performance, chart review, computerized tracking and surveys.

The organization should consider the following questions when developing performance measures:
• Is there a plan for use of the data? (i.e., do not collect data that will not be used)
• Are the data collected reliable and valid?
• Has data collection been simplified?
• Have the key elements required for improvement or change been identified?
• Has a "data rich – information poor" situation been avoided?
• Has a key point for information dissemination been identified?
• How will the measurement be documented?

The improvements/changes are successful when:
• the new processes become routine/habit; and
• new employees demonstrate proper procedure after orientation.

54 A. Donabedian, "Institutional and Professional Responsibilities in Quality Assurance," Quality Assurance in Health Care 1, no. 1 (1989): 3-11.
