

Figure 5. Focused classification system of multifragmentary joint fractures (F2). © 2012, Jaeger et al.

Figure 6. Example of a 3D CT reconstruction video used for the last classification session. © 2012, Jaeger et al.

format, anonymized, and distributed on DVD to SCCG members, who classified fractures independently "at home" within a 3-month period. Fracture codes were collected electronically using specifically designed Excel sheets (Microsoft, Redmond, WA, USA), which were centralized for the analyses. The results, including reliability and accuracy data, were reviewed during face-to-face meetings to identify the reasons for coding disagreements and to improve the system.

The final session included 120 cases documented with the 2-dimensional CT series and 3D CT reconstruction videos, as well as additional radiographs available for 105 cases. Data were managed and analyzed with Intercooled Stata, version 11 (StataCorp LP, College Station, TX, USA). The basic system was first analyzed to assess the likely distribution of articular segment (F0/F1/F2) fractures in the sample and to identify them. The focused system was then applied to this fracture subset and analyzed separately for simple fracture patterns (F1) and multifragmentary joint fractures (F2).

Interobserver reliability was evaluated by means of κ coefficients. The κ coefficient is commonly used as a chance-corrected measure of agreement; it ranges from +1 (complete agreement) through 0 (agreement by chance alone) to less than 0 (less agreement than expected by chance). The κ coefficient is a useful indicator of reliability, and a value of 0.70 is considered an adequate sign of reliability.
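As a concrete illustration of how such a chance-corrected agreement statistic is computed, the following is a minimal pure-Python sketch of Fleiss' multirater κ (an assumption for illustration only; the paper does not state which multirater κ variant was used, and the ratings below are hypothetical):

```python
def fleiss_kappa(counts):
    """Fleiss' multirater kappa.

    counts[i][j] = number of raters who assigned case i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)            # number of cases
    n = sum(counts[0])         # raters per case
    k = len(counts[0])         # number of categories

    # Mean observed per-case agreement
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N

    # Expected agreement from the marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical data: 4 cases, 7 raters, 3 codes (e.g., F0/F1/F2)
ratings = [
    [7, 0, 0],   # unanimous
    [1, 6, 0],
    [0, 2, 5],
    [3, 3, 1],   # heavy disagreement
]
print(round(fleiss_kappa(ratings), 3))   # ~0.43, below the 0.70 benchmark
```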

Classification accuracy was estimated by latent class modeling22,23 using Latent GOLD software, version 3.0.1 (Statistical Innovations, Belmont, MA, USA). This technique aims at identifying the most likely "true" distribution of fracture classes in the population of scapula fractures based on the evaluated sample and the agreement data collected among the participating surgeons; for each class, the degree of classification accuracy for each surgeon is also estimated.6
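The underlying model can be sketched as a small expectation-maximization (EM) routine. This is an illustration of the latent class idea only, not Latent GOLD's actual algorithm or settings, and the function names and simulated data are hypothetical: each case has an unknown "true" class, each surgeon has a per-class probability of assigning each code, and EM alternates between inferring the true classes and re-estimating those probabilities.

```python
import numpy as np

def latent_class_em(ratings, n_classes, n_iter=200, seed=0):
    """ratings: (n_cases, n_raters) array of observed codes 0..n_codes-1.
    Returns class priors pi, per-rater profiles theta[r, c, j] =
    P(rater r assigns code j | true class c), and case posteriors."""
    rng = np.random.default_rng(seed)
    n_cases, n_raters = ratings.shape
    n_codes = int(ratings.max()) + 1

    pi = np.full(n_classes, 1.0 / n_classes)   # latent class priors
    theta = rng.dirichlet(np.ones(n_codes), size=(n_raters, n_classes))

    for _ in range(n_iter):
        # E-step: posterior probability of each latent class per case
        log_post = np.tile(np.log(pi), (n_cases, 1))
        for r in range(n_raters):
            log_post += np.log(theta[r][:, ratings[:, r]]).T
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

        # M-step: re-estimate priors and surgeon rating profiles
        pi = post.mean(axis=0)
        for r in range(n_raters):
            for j in range(n_codes):
                # Small floor keeps later logs finite
                theta[r, :, j] = (ratings[:, r] == j).astype(float) @ post + 1e-9
            theta[r] /= theta[r].sum(axis=1, keepdims=True)

    return pi, theta, post

# Hypothetical run: 120 cases, 7 raters, 3 codes
demo = np.random.default_rng(1).integers(0, 3, size=(120, 7))
pi, theta, post = latent_class_em(demo, n_classes=3)
```

When the estimated classes align with the codes, theta[r, c, c] can be read as surgeon r's accuracy for class c, mirroring the per-class, per-surgeon accuracy the model reports.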

Results

When identifying cases using the basic system to classify a fracture of the articular segment (denoted F), the 7 shoulder specialists were in agreement for 73% of the 120
