Connectionist Modeling of Experience-based Effects in Sentence ...

Chapter 2 Issues in Relative Clause Processing

According to Gibson and Thomas, exceeding a theoretical memory capacity limit through excessive load causes a loss of costly predictions. A successful parse is possible as long as memory demands throughout the sentence stay within a certain capacity range. However, when high complexity causes the load to exceed the limit, a breakdown of the parser has to be prevented by pruning activation. Given the discrete nature of SPLT, this means that the predictions of certain syntactic categories have to be dropped. The pruning hypothesis assumes that the predictions to be forgotten are those causing the largest share of SPLT memory cost at the current point in the sentence. In example (17), the point of highest memory cost is the most deeply embedded subject the clinic (NP3). At this point two predictions are held in memory: VP2, predicted by NP2, and VP3, predicted by NP3. Since VP2 occurs earlier in the sentence and has to be held in memory longer than the subsequent VP3, it causes more memory cost. Consequently, the prediction of the second VP is pruned and thereby forgotten.

(17) a. [The patient]NP1 who_i [the nurse]NP2 who_j [the clinic]NP3 [had hired e_j]VP3 [admitted e_i]VP2 [met Jack]VP1.

     b. * [The patient]NP1 who_i [the nurse]NP2 who_j [the clinic]NP3 [had hired e_j]VP3 [met Jack]VP1.
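The pruning step can be sketched as a small illustrative program. The numeric costs below are placeholders rather than exact SPLT values; only their ordering matters, since the earlier prediction (VP2) has been held in memory longer and is therefore the costlier one:

```python
# Predictions held in memory at the innermost subject "the clinic" (NP3).
# Under SPLT, a prediction's memory cost grows with how long it has been
# held, so VP2 (predicted earlier, at NP2) outcosts VP3 (predicted at NP3).
# The numeric costs are illustrative placeholders, not actual SPLT values.
predictions = [
    {"category": "VP2", "predicted_at": "NP2", "memory_cost": 2},
    {"category": "VP3", "predicted_at": "NP3", "memory_cost": 1},
]

# Pruning: drop the prediction carrying the largest share of memory cost.
pruned = max(predictions, key=lambda p: p["memory_cost"])
remaining = [p for p in predictions if p is not pruned]

print(pruned["category"])                     # VP2 is pruned and forgotten
print([p["category"] for p in remaining])     # only VP3 survives
```

With VP2 pruned, the parser no longer expects the second verb, which is why the ungrammatical (17b), lacking VP2, can nonetheless be perceived as acceptable.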

Vasishth et al. (2008) restate the pruning hypothesis in terms of decay as defined in the DLT (Gibson, 2000) and refer to it as the VP-forgetting Hypothesis. Vasishth et al. calculate Integration and Storage Cost at the three VPs to determine the "point of greatest difficulty" in the sentence. The DLT cost predictions for example (17) are illustrated in figure 2.5. At the first VP (VP3) two integrations take place: the object the nurse, with two intervening discourse referents (clinic and hired), and the subject the clinic, with one intervening discourse referent (hired), are integrated. At this moment there are two active predictions held in memory: the predicate of the upper RC (admitted), triggered by reading nurse, and the main verb. This makes a total cost of 4. At the second verb (admitted) the object the patient and the subject the nurse are integrated. The patient has a distance of four discourse referents (nurse, clinic, hired, and admitted) from the verb, the subject nurse is separated by two, and just the matrix verb is predicted. This makes a total memory cost of 8 at the VP2 site. Finally, at the third verb (VP1), integrating the patient and predicting a direct object incurs a cost of 6. Concluding from these calculations, VP2 has the highest memory cost and, hence, is forgotten.
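The comparison can be condensed into a few lines of Python; the per-verb totals are the ones computed in the text (4 at VP3, 8 at VP2, 6 at the matrix verb), and the VP-forgetting Hypothesis then picks out the costliest verb:

```python
# Total DLT memory cost (Integration + Storage) at each verb site of
# example (17), using the totals reported in the text.
dlt_cost = {
    "VP3 (had hired)": 4,
    "VP2 (admitted)": 8,
    "VP1 (met Jack)": 6,
}

# VP-forgetting Hypothesis: the verb at the point of greatest
# difficulty, i.e. with the highest total cost, is forgotten.
forgotten = max(dlt_cost, key=dlt_cost.get)
print(forgotten)  # VP2 (admitted)
```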

The difference between Vasishth et al.'s and Gibson and Thomas' accounts is that the latter added Storage Cost on the noun and Integration Cost of the predicted verb, whereas Vasishth et al. simply use the total cost at the verb. The predictions, however, are the same. Let me reformulate the decay approach more intuitively. The crucial measure of the decay approach is Integration Cost. By counting the number of intervening discourse referents, it serves as a discrete, indirect measure of time. Or, as Vasishth et al. put it, it is "a discretized abstraction over some activation decay function that determines the strength of a memorial representation." Hence, decay could be described as a function of time and intervening memory load, with the assumption that a high memory load
