ISBN 9789526046266


meeting certain “esthetic principles”, lend themselves to answering unanticipated questions or predicting the consequences of novel situations. They contended that using such mental models, which they termed robust, is characteristic of expert behavior.

De Kleer and Brown’s theory defines mental models as topologies of submodels that represent components of the system. Each submodel is a collection of rules that describe the causal behavior of a component. Robustness arises out of the component models. The better a mental model’s component models meet the following principles, the more robust the overall model is.

• The no-function-in-structure principle: the rules that specify the behavior of a system component are context free. That is, they are completely independent of how the overall system functions. For instance, the rules that describe how a switch in an electric circuit works must not refer, not even implicitly, to the function of the whole circuit. This is the most central of the principles that a robust model must follow.

• The locality principle: the rules that specify the behavior of a system component are represented only in terms of the internal aspects of the component and its connections to other components, not in terms of the internal aspects of other components. For instance, the rules that describe a switch in an electrical circuit must not depend on the internal state of any other component in the circuit. The locality principle helps ensure that the no-function-in-structure principle is met.

• The weak causality principle: the rules of the mental model attribute each event in the system to a direct cause. The reasoning process involved in determining the next state does not depend on any “indirect arguments”. For instance, what happens next to a component in an electrical circuit must be directly attributable to some local cause rather than indirectly inferred by elaborately reasoning about other components. The weak causality principle is important for the efficient running of the mental model.

• The deletion principle: when a vital component is removed, the mental model must not predict that the system will still work properly.
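The four principles can be made concrete with a toy causal model of a switched lamp circuit. The sketch below is not de Kleer and Brown’s notation; the class names and rules are illustrative assumptions. Each component’s rules mention only its own state (no-function-in-structure, locality), the next state follows from direct local causes (weak causality), and removing the battery from the topology must not leave the lamp predicted as lit (deletion).

```python
# Illustrative sketch only: a mental model as a topology of component
# submodels, each a small set of context-free causal rules.

class Switch:
    def __init__(self, closed=False):
        self.closed = closed

    def conducts(self):
        # Local, context-free rule: mentions only the switch's own state,
        # never the function of the surrounding circuit.
        return self.closed


class Battery:
    def supplies_voltage(self):
        return True


class Lamp:
    def lit(self, current_flows):
        # The lamp's rule: it lights exactly when current flows through it.
        return current_flows


def simulate(components):
    """Derive the next state from direct, local causes (weak causality):
    in a single series loop, current flows iff a battery supplies voltage
    and every switch in the loop conducts."""
    has_source = any(isinstance(c, Battery) and c.supplies_voltage()
                     for c in components)
    switches_closed = all(c.conducts() for c in components
                          if isinstance(c, Switch))
    current = has_source and switches_closed
    return [c.lit(current) for c in components if isinstance(c, Lamp)]
```

Because the `Switch` rule never refers to this circuit’s purpose, the same submodel could be reused unchanged in a novel circuit; and simulating `[Switch(closed=True), Lamp()]` with the battery deleted predicts a dark lamp, as the deletion principle requires.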

An overarching aspect of robust models is that the components of the model are understood in terms of general knowledge that pertains to those components rather than specific knowledge that pertains to the particular configuration of the components. A non-robust model may serve for mental simulations of a particular system under normal circumstances. A robust model is needed for transferring the knowledge embodied in a mental model to a similar but novel problem. A robust model is also needed to mentally simulate exceptional situations such as when a component malfunctions or a change to the system is either made or planned. This makes robust models highly desirable.

Wickens and Kessel: transferable models through active learning<br />

Wickens and Kessel’s work (see Kessel and Wickens, 1982; Wickens, 1996; Schumacher and Czerwinski, 1992) provides another perspective on mental model formation. They studied the performance of people trained either as monitors, who supervise a complex technological system, or as controllers, who control the system manually. As one would expect, Wickens and Kessel found that training in system monitoring improves people’s monitoring skills, and training in controlling a system improves controlling skills. Significantly, however, they also found that the controllers could transfer their skills to monitoring tasks, while the reverse was not true of the monitors. The controllers were also found to be better at detecting system faults from subtle cues that escaped the attention of the monitors. Wickens and Kessel inferred that the two kinds of training led to different kinds of internal models being formed.

Process control researchers evoke worrying images of supervisors of automated nuclear power plants trained to monitor rather than to control, and of airplane pilots whose training is excessively based on autopiloting.

Wickens and Kessel’s results are important as they show how people doing similar yet different tasks on the same system develop different kinds of mental representations and different kinds of expertise. In particular, a more passive task resulted in worse learning. This is not the last time that we will run into this thought on these pages.
