
Surgical Expertise in Neurosurgery: Integrating Theory Into ... - CHUQ


Copyright © Congress of Neurological Surgeons. Unauthorized reproduction of this article is prohibited.

GÉLINAS-PHANEUF AND DEL MAESTRO

pilots, neurosurgeons are trained to deal with complex situations involving unplanned and life-threatening events.15

The assumption behind all these issues is that understanding what is required to be an expert, and how experts achieve a level of expertise, will allow the development of training programs that enable more neurosurgical residents and practicing neurosurgeons to become "experts" and maintain their expertise.16 How the CanMEDS competency model or the Accreditation Council for Graduate Medical Education core competencies model helps achieve the goal of increasing neurosurgical technical expertise needs further study.

EXPERT PERFORMANCE IN SURGERY: CURRENT EVIDENCE

Expertise can be represented by the ability to reproduce consistently superior performance on a given task, on demand.17 This task, when it represents the essence of what an expert does in a given field, can be used in the expert performance paradigm. It is the researchers' responsibility to ensure that the task truly represents points of expertise in the domain and not just associated epiphenomena.18 The issues of experience, instruction, and expertise have been extensively studied by the group led by Ericsson over the past 2 decades.1,17,18 To achieve a high level of expertise, experts need to master multiple technical factors, including the techniques involved in their specialty, and must have the motivation to pursue the continued development of expertise in these techniques.17-19 Failure to continue to improve and develop these techniques over time results in a significant loss of performance (Figure 1). Ericsson17,20 has proposed that an expert performance scheme could (and should) be adapted to surgery. Research methodologies assessing expert performance are best carried out in constrained environments, be it a laboratory or an operating room. One must find a task that represents with high fidelity an arena where the expert will give a constant superior performance, and ensure that this superior performance is actually derived from the alleged expertise.20

FIGURE 1. Two trends for the development, maintenance, and decay of medical performance as a function of time, experience, and instruction. Adapted with permission from Ericsson KA. Acad Med. 2004;79(10 Suppl):S70-S81. Copyright © 2004, Association of American Medical Colleges.

The current literature on the assessment of superior performance in surgery involves technical skills pertaining to specific surgical procedures. Simulator studies have investigated surgical technical performance outside the operating room setting, but rarely "other skills."21 An objective assessment of technical skills has always been a goal of any neurosurgical training program. These assessments are carried out to ensure that trainees have adequately mastered the different techniques required for their specific specialty. An extensive study has shown that checklists fail to measure differences between various levels of expertise.22 Checklists discriminate expertise only up to a certain point, after which intermediate trainees score as well as experienced surgeons. This was in contrast to global rating scales, which maintained their ability to differentiate levels of expertise over a larger range.22,23 Most studies on this topic use an expert-novice design to obtain various forms of validity for the tool being tested.24 To integrate any technique or new technology into a formal training curriculum, it must be proved that the training is useful and appropriate. A well-established series of validation steps needs to be undertaken: face, content, construct, and concurrent validity. Face and content validity determine that a technology is realistic and targets the skills that need to be trained. Construct validity establishes that the scores obtained correlate with actual operative technical skill by discriminating experts from novices. This enables novices to practice and train on the technology until they reach the performance of an expert. The final step, concurrent validity, is particularly important should the technology be used for assessment, as it demonstrates that the skills acquired during training reflect performance in the operating room.

Reznick's group developed a tool to measure the performance of trainees and experts in the field of general surgery. This tool, the Objective Structured Assessment of Technical Skill (OSATS), has been shown to be both valid and reliable in multiple studies.25-27 However, the OSATS has never been validated in neurosurgery, leading to the possibility of inaccurate assessments if it is used without proper validation in neurosurgical studies. A group from McGill University designed another tool to evaluate the performance of experts and novices during live laparoscopic surgery. This tool, the Global Operative Assessment of Laparoscopic Skills, has also been shown to be valid and reliable.28 A variety of other scales have been developed for other surgical specialties, including ophthalmology, gastroenterology, and ear-nose-throat surgery.29-31 There is currently no validated tool of this kind in neurosurgery. A current project, the Global Assessment of Intraoperative Neurosurgical Skills, is trying to address this issue.32 The Global Assessment of Intraoperative Neurosurgical Skills uses the same principle of expert-novice comparison to validate the tool. A major limitation in designing such a tool is the definition of what constitutes an expert in neurosurgery. Finding the appropriate tasks that allow an accurate assessment of the full extent of expertise of a subject

S32 | VOLUME 73 | NUMBER 4 | OCTOBER 2013 SUPPLEMENT www.neurosurgery-online.com
