
To give a concrete example from the domain of social cognition: fluent and efficient coordination of actions and decisions among partners in a joint action task requires that each individual is able to understand others' action goals. The DFT model of goal inference [26] implements the idea that action simulation supports action understanding. Without the need to rely on any explicit symbolic communication, we understand the purpose of observed actions by internally replicating action effects using our own motor repertoire. The model has been validated in tasks involving goal-directed reaching-grasping-placing sequences. The model architecture consists of various reciprocally coupled dynamic fields that represent, in their firing patterns, action means, action goals and contextual cues.
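Each of these fields can be taken to evolve under an Amari-type dynamics, the generic field equation of DFT (written here in standard textbook notation rather than the specific parametrization of [26]):

```latex
\tau\,\dot{u}(x,t) = -u(x,t) + h + S(x,t) + \int w(x-x')\,f\big(u(x',t)\big)\,dx'
```

where u(x,t) is the activation at field site x, h < 0 the resting level, S(x,t) the external input, w a lateral-interaction kernel with local excitation and surround inhibition, and f a sigmoidal firing-rate function.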

In the action simulation layer (ASL), learned action chains, composed of populations encoding different motor acts, are linked to specific placing goals or end states. Individual chains may be preshaped by connected populations representing contextual cues and prior task information. Through mappings between populations encoding congruent motor acts in the action observation layer (AOL) and the action simulation layer, a chain is triggered whenever input from the AOL indicates the observation of a specific motor act. The self-stabilizing properties of the neural populations forming the chain ensure that the associated goal representation reaches a suprathreshold activation level even when only partial visual information about the action performed by another person is available (e.g., only the reaching toward the object is observed; see the sketch below). In a joint action context, the information about the inferred goal may then be used by the observer to select an appropriate complementary action sequence.
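To make the chain mechanism concrete, the toy simulation below reduces a chain to three self-excitatory dynamic nodes (zero-dimensional fields). The node names, coupling strengths and parameter values are illustrative assumptions, not those of the published model; the sketch only shows how transient input to the first node ("reach") propagates along the chain and leaves the goal node suprathreshold after the input is gone.

```python
import numpy as np

# Toy reduction of a reach -> grasp -> place/goal chain: each node is a
# zero-dimensional dynamic field with self-excitation (a bistable node).
# All parameter values are illustrative, not those of the published model.
tau, h = 10.0, -2.0
w_self, w_chain = 4.0, 3.0          # self-excitation and feed-forward coupling

def f(u, beta=4.0):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-beta * u))

u = np.full(3, h)                   # activations: [reach, grasp, place/goal]
dt = 1.0
for t in range(600):
    s = np.zeros(3)
    if t < 150:
        s[0] = 3.0                  # AOL input: only the reaching is observed
    rate = f(u)
    inp = w_self * rate             # self-excitation stabilizes active nodes
    inp[1:] += w_chain * rate[:-1]  # feed-forward links along the chain
    u += (dt / tau) * (-u + h + s + inp)

# The goal node stays above threshold (u > 0) although only the initial
# motor act of the sequence was ever "observed".
print(u, u[2] > 0)
```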

It is worth stressing that the dynamic field model within which this high-level reasoning capacity is realized represents a distributed but fully integrated dynamical system that emerges from sensorimotor origins. The same action chains that support overt motor behaviour are covertly used during action observation to make sense of the behaviours of others. The populations forming the chains encode entire goal-directed motor primitives, such as grasping and placing, that abstract from the fine details of the hand movements. For such a high-level motor vocabulary, the metric structure of the underlying space, which is a core concept of DFT, is not directly observable. However, the metric may still be defined operationally by the degree of overlap of the neuronal representations. In the present implementations, a motor act such as grasping is represented by a localized activation pattern in an otherwise inactive population. The pool of grasping neurons that becomes activated above threshold receives input from connected populations.
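A minimal sketch of such an operational metric, assuming Gaussian activation profiles and hypothetical pattern centres, quantifies overlap by the normalized inner product of two subpopulation patterns:

```python
import numpy as np

# Operational metric as overlap of localized activation patterns.
# Pattern shapes, centres and widths are hypothetical illustrations.
x = np.linspace(-90.0, 90.0, 181)

def pattern(center, sigma=8.0):
    """Gaussian activation profile of a subpopulation."""
    return np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

def overlap(a, b):
    """Normalized overlap: 1 for identical patterns, near 0 for disjoint ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

grasp_for_goal_A = pattern(-20.0)   # grasping pool linked to one end state
grasp_for_goal_B = pattern(25.0)    # grasping pool linked to another end state
print(overlap(grasp_for_goal_A, grasp_for_goal_A))  # 1.0: identical
print(overlap(grasp_for_goal_A, grasp_for_goal_B))  # small: metrically distant
```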

In a modelling effort to explain the development of goal-directed chains, it has recently been shown that the interplay of the field dynamics and a Hebbian learning dynamics may result in the emergence of distinct, non-overlapping subpopulations of grasping neurons that become linked during learning to specific goal representations [27]. In other words, depending on the ultimate goal of the action sequence in which the grasping behaviour is embedded, the localized activity pattern in the grasping field will evolve at different locations of the field. On this view, similarity at the goal level defines the metric of the field, that is, the extent to which subpopulations will overlap. The development of such goal-directed representations has important implications for cognitive behaviour, since it allows individuals, already at the time of the grasping, to act in anticipation of others' motor intentions.
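The sketch below caricatures this developmental mechanism with a simple correlational Hebbian rule with weight decay; the rule and all values are assumed stand-ins for the learning dynamics actually studied in [27]:

```python
import numpy as np

# Sketch of the Hebbian mechanism: repeated pairing of a goal with a
# field bump at a goal-specific location carves out distinct subpopulations.
# The correlational rule with decay is an assumed stand-in for [27].
rng = np.random.default_rng(0)
n_goals, n_sites = 2, 181
x = np.linspace(-90.0, 90.0, n_sites)
W = 0.01 * rng.random((n_goals, n_sites))    # goal -> grasping-field weights
eta, decay = 0.1, 0.01

def bump(center, sigma=8.0):
    return np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

for trial in range(50):
    goal = trial % n_goals                      # alternate between two goals
    post = bump(-30.0 if goal == 0 else 30.0)   # bump stabilized by the field dynamics
    pre = np.eye(n_goals)[goal]                 # one-hot goal representation
    W += eta * np.outer(pre, post) - decay * W  # Hebbian growth plus slow decay

# Each goal now preshapes a distinct, non-overlapping subpopulation:
# the learned weight profiles peak at different field locations.
print(x[np.argmax(W, axis=1)])   # approximately [-30., 30.]
```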
