A Primer on Verbal Protocol Analysis

of this segmentation scheme from the meteorology domain. As the illustration shows, the length of each utterance can range from a single word (for example, "850") to a fairly lengthy phrase (for example, "clouds are going to be increasing for the next 24 hours in western Ohio"). In this domain, we are generally interested in the types of mental operations participants perform on the visualizations, such as reading off information, transforming information (spatially or otherwise), making comparisons, and the like, and these map well to our segmentation scheme. The utterances "it's at 700 millibars" and "clouds are going to be increasing for the next 24 hours in western Ohio" would each be coded as one "read-off information" event, even though they differ quite a bit in length. Of course, it would be possible to subdivide longer utterances further (for example, "clouds are going to be increasing // for the next 24 hours // in western Ohio"); however, according to our coding scheme this is still one read-off information event, which would now span three utterances. Having a single event span more than one utterance makes data analysis harder. The two most important guidelines in dividing protocols are (1) consistency and (2) making sure that the segments map readily to the codes to be applied.
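To make the bookkeeping concrete, the minimal sketch below (in Python) shows one way segments can carry an event identifier, so that a read-off information event spanning three utterances is still counted only once. The field names and event numbering are our own illustrative choices, not part of any particular tool.

```python
# Minimal sketch: segments that share an event_id belong to the same coded event.
# The segment texts and the "read-off information" code come from the example
# above; everything else (field names, event ids) is illustrative.

from dataclasses import dataclass

@dataclass
class Segment:
    text: str       # a single utterance, as delimited during segmentation
    event_id: int   # the coded event this utterance belongs to
    code: str       # the mental-operation code assigned to that event

# The long "clouds" utterance split into three segments, all belonging to one
# read-off-information event, plus a second, single-segment event.
protocol = [
    Segment("clouds are going to be increasing", 1, "read-off information"),
    Segment("for the next 24 hours", 1, "read-off information"),
    Segment("in western Ohio", 1, "read-off information"),
    Segment("it's at 700 millibars", 2, "read-off information"),
]

# Counting events rather than segments gives the frequency of interest:
# two read-off-information events, even though there are four segments.
events = {(s.event_id, s.code) for s in protocol}
print(len(events))  # -> 2
```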

Although transcription cannot be automated, a number of video analysis software programs are available to aid in data analysis, and some of these programs allow protocols to be transcribed directly into the software. We have not found the perfect protocol analysis program yet, though we have used MacShapa, Transana, and Noldus Observer. Deciding ahead of time what kind of analysis software will be used also saves a great deal of time by reducing the need for further processing of transcripts before they can be imported into the video analysis software. Another option that we have used successfully is to transcribe protocols in a spreadsheet program, such as Excel, with each segment on a different row. We then set up columns for our different coding categories and can easily perform frequency counts, create pivot tables, and import the data into a statistical analysis program. This method works very well if the video portion of the protocol is irrelevant; however, in most cases we are interested not only in what people are saying, but also in what they are doing (and looking at) while they are saying it. In that case, the spreadsheet method is disadvantageous, because the transcription cannot easily be aligned with the video. Video analysis software has the advantage that video timestamps are automatically entered at each line of transcription, thus facilitating synchronization of text and video.
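A minimal sketch of the spreadsheet-style workflow described above, here using Python's pandas library rather than Excel itself, is shown below. The column names, the codes "inference" and the extra utterances beyond those quoted earlier are illustrative assumptions.

```python
# Sketch of the one-segment-per-row layout: a code column per segment, then
# frequency counts and a participant-by-code pivot table.

import pandas as pd

# In practice this would be read from the transcription spreadsheet, e.g.
# pd.read_excel("protocol.xlsx"); the rows here are illustrative only.
df = pd.DataFrame({
    "participant": ["P1", "P1", "P1", "P2", "P2"],
    "segment": [
        "it's at 700 millibars",
        "clouds are going to be increasing",
        "so it's colder aloft",          # hypothetical utterance
        "850",
        "that's higher than yesterday",  # hypothetical utterance
    ],
    "code": [
        "read-off information",
        "read-off information",
        "inference",                     # hypothetical code label
        "read-off information",
        "comparison",
    ],
})

# Frequency of each code overall ...
print(df["code"].value_counts())

# ... and a participant-by-code table, ready for export to a statistical
# analysis package (e.g. pivot.to_csv("code_counts.csv")).
pivot = df.pivot_table(index="participant", columns="code",
                       values="segment", aggfunc="count", fill_value=0)
print(pivot)
```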

Once data have been transcribed and segmented, they are ready for coding. Unless the research question can be answered by some kind of linguistic coding (for example, counting the number of times a certain word or family of words is used), a coding scheme must be developed that maps to the cognitive processes of interest. Establishing and implementing an effective and reliable coding scheme lies at the heart of verbal protocol analysis, and this is usually the most difficult and time-consuming part of the whole process. In some cases, researchers will have strong a priori notions about what to look for. However, verbal protocol analysis is a method that is particularly useful in exploratory research, and in these cases, researchers may approach the protocol with only a general idea.
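The simple linguistic coding mentioned above, counting how often a word or family of words occurs, can be sketched in a few lines; in the example below the word family and the transcript line are illustrative assumptions, not taken from our data.

```python
# Sketch of word-family frequency counting over a transcript.

import re
from collections import Counter

# Hypothetical family of hedging/uncertainty words.
word_family = {"maybe", "probably", "possibly", "might"}

transcript = "clouds are probably going to be increasing, maybe for 24 hours"

tokens = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(t for t in tokens if t in word_family)
print(counts)                # Counter({'probably': 1, 'maybe': 1})
print(sum(counts.values()))  # total occurrences of the word family -> 2
```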
