
There are a variety of data collection templates that could be attached to the performance case to assist in detailed analysis. The current version of the AOOPA model uses a gap analysis template (see figure 4) in which data is collected about current and desired performance in the tasks that are carried out in pursuit of a performance goal. Where a gap is found, for example if 100% accuracy is required on a task and only 60% of those assigned to the task are able to achieve it, a cause and solution analysis is initiated. In a cause analysis, stakeholders review gap data, brainstorm possible causes, put them into cause categories, rate them by user-defined criteria, and select which ones to pursue. The AOOPA prototype allows users to categorize causes so that the recommended solutions are more likely to address the underlying causes. The specific process used in this version is described in more detail in Douglas et al. (2003).
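To make the flow concrete, the sketch below gives a minimal, hypothetical Python rendering of a gap analysis record; it is not the AOOPA prototype's actual data model, and the names used (TaskGapRecord, the example task and cause) are illustrative only. It shows how a recorded shortfall between desired and current performance could trigger the cause analysis step described above.

```python
from dataclasses import dataclass, field


@dataclass
class TaskGapRecord:
    """One row of a hypothetical gap-analysis template for a single task."""
    task: str
    desired_level: float   # required performance, e.g. 1.0 for 100% accuracy
    current_level: float   # observed performance among those assigned to the task
    causes: list = field(default_factory=list)  # (category, description) pairs from cause analysis

    @property
    def gap(self) -> float:
        """Shortfall between desired and current performance (0.0 when there is none)."""
        return max(0.0, self.desired_level - self.current_level)

    def needs_cause_analysis(self) -> bool:
        """Any recorded gap triggers the cause-and-solution analysis step."""
        return self.gap > 0.0


# Example from the text: 100% accuracy is required, but only 60% achieve it.
record = TaskGapRecord(task="example task", desired_level=1.0, current_level=0.6)
if record.needs_cause_analysis():
    # Stakeholders would brainstorm causes, group them into categories,
    # rate them against user-defined criteria, and select which to pursue.
    record.causes.append(("knowledge/skill", "hypothetical cause for illustration"))
```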

Figure 5: Transferring data between different organizations

FUTURE WORK

An important concept embedded in the design of the prototype is configurability (Cameron, 2002). As noted in the introduction, a framework is meant to provide a structure for a variety of approaches that can be tailored to specific groups or situations rather than to provide a set of rules for a single correct way of developing systems. The philosophy is that no “one size fits all” methodology will be effective; methodologies evolve to fit organizations, situations, and new technologies. The same is true for software tools, which are of limited use when fixed on a particular methodology. Given that different organizations will adopt different methods to suit different circumstances, software support should be adaptable. The vision is of a system of configurable tools and methods, which have a shared underlying representation of performance analysis knowledge. This will allow custom interfaces to a continuously refined shared repository of knowledge on human performance (see figure 5).

The software architecture used allows the plug-in of different components, thus allowing a different set of components to be configured for each methodology that conforms to the process-independent shared representation.
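The plug-in idea can be illustrated with a short sketch. This is not the AOOPA prototype's architecture or API; it is a minimal, assumed Python rendering with made-up names (AnalysisComponent, ComponentRegistry, GapAnalysisComponent), showing how methodology-specific components could be registered and run against a shared, process-independent model.

```python
from abc import ABC, abstractmethod


class AnalysisComponent(ABC):
    """Interface that every pluggable analysis component implements."""

    @abstractmethod
    def run(self, shared_model: dict) -> dict:
        """Read from and write back to the shared performance-analysis model."""


class GapAnalysisComponent(AnalysisComponent):
    """Toy component: records a gap wherever desired performance exceeds current."""

    def run(self, shared_model: dict) -> dict:
        shared_model["gaps"] = {
            task: levels["desired"] - levels["current"]
            for task, levels in shared_model.get("tasks", {}).items()
            if levels["desired"] > levels["current"]
        }
        return shared_model


class ComponentRegistry:
    """Maps a methodology name to the set of components configured for it."""

    def __init__(self):
        self._configs = {}

    def register(self, methodology, components):
        self._configs[methodology] = list(components)

    def analyse(self, methodology, shared_model):
        for component in self._configs[methodology]:
            shared_model = component.run(shared_model)
        return shared_model


# Usage: configure a minimal methodology and run it against a shared model.
registry = ComponentRegistry()
registry.register("gap-analysis-only", [GapAnalysisComponent()])
result = registry.analyse("gap-analysis-only",
                          {"tasks": {"task A": {"desired": 1.0, "current": 0.6}}})
```

Swapping the component list registered for a methodology is what lets the same shared model serve differently configured toolsets, which is the adaptability argued for above.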
