
the other hand it is possible that regularity does not have a relevant impact on empirical studies of Mandarin extraction preferences and the explanation is left to other factors.

4.4 Forgetting Effects

4.4.1 The model

As presented in chapter 3, the forgetting effect in center-embedded structures was addressed in a connectionist study by Christiansen and Chater (1999). They trained an SRN on right-branching and center-embedding structures and then assessed the output node activations after seeing the sequence NNNVV. The activations showed a clear 2VP preference, consistent with empirical data from English speakers.
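To make the procedure concrete, the sketch below shows how such a preference can be read off an SRN's output layer. The Elman-style network, the toy vocabulary, and the random weights are placeholders of my own, not Christiansen and Chater's actual architecture or training setup; only after training on the artificial language would the final prediction be meaningful.

```python
import numpy as np

# Minimal Elman-style SRN with one-hot word inputs, a recurrent hidden layer,
# and a softmax output layer predicting the next word. Vocabulary, sizes and
# weights are illustrative placeholders only.
VOCAB = ["N", "V", "EOS"]
IDX = {w: i for i, w in enumerate(VOCAB)}

rng = np.random.default_rng(0)
H = 10                                                # hidden units
W_ih = rng.normal(scale=0.1, size=(H, len(VOCAB)))    # input   -> hidden
W_hh = rng.normal(scale=0.1, size=(H, H))             # context -> hidden
W_ho = rng.normal(scale=0.1, size=(len(VOCAB), H))    # hidden  -> output

def step(word, h):
    """One time step: combine the current input with the copied-back context."""
    x = np.zeros(len(VOCAB))
    x[IDX[word]] = 1.0
    h = np.tanh(W_ih @ x + W_hh @ h)
    out = np.exp(W_ho @ h)
    return out / out.sum(), h                         # softmax next-word prediction

# Feed the critical sequence N N N V V and inspect the final prediction.
h = np.zeros(H)
for w in ["N", "N", "N", "V", "V"]:
    out, h = step(w, h)

# After training, a 2VP preference shows up as more activation on EOS (the
# sentence is treated as complete after two verbs) than on V (a third verb
# is still expected). With untrained random weights the numbers mean nothing.
print("activation for V  :", out[IDX["V"]])
print("activation for EOS:", out[IDX["EOS"]])
```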

The artificial language, which covered center-embedding abba and right-branching aabb dependency patterns, is directly comparable to the simple English grammar of object and subject relative clauses used by MacDonald and Christiansen (2002). Thus, it should be possible to replicate the effect with the SRNs trained on the English grammar for the replication in section 4.2.
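For illustration, a toy generator along these lines makes the two dependency patterns concrete; it is my own sketch, not the original grammar, and the subscripts only mark number agreement between a noun and its verb.

```python
import random

SG = {"N": "N_sg", "V": "V_sg"}   # singular noun/verb tokens
PL = {"N": "N_pl", "V": "V_pl"}   # plural noun/verb tokens

def center_embedded(depth):
    """abba pattern: n1 n2 ... v2 v1 -- verbs appear in reverse order of their
    nouns and agree with them in number (nested dependencies)."""
    choices = [random.choice([SG, PL]) for _ in range(depth)]
    return [c["N"] for c in choices] + [c["V"] for c in reversed(choices)]

def right_branching(depth):
    """aabb pattern: n1 v1 n2 v2 ... -- each verb directly follows its noun
    (adjacent dependencies)."""
    out = []
    for _ in range(depth):
        c = random.choice([SG, PL])
        out += [c["N"], c["V"]]
    return out

print(center_embedded(2))   # e.g. ['N_sg', 'N_pl', 'V_pl', 'V_sg']
print(right_branching(2))   # e.g. ['N_pl', 'V_pl', 'N_sg', 'V_sg']
```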

In German RCs, however, no real right-branching occurs, since the embedded RC is always attached to its head noun. Hence, in the German grammar used in section 4.2 both ORC and SRC exhibit a center-embedding abba pattern. This could result in the SRN exposed to the German grammar being trained more heavily on verb-final center-embedding structures than its English counterpart, which in turn may lead to different predictions for an NNNVV sequence. Supposing that the difference in SRC realization in the corpora approximately reflects an essential word order regularity difference between German and English, the SRN predictions will shed light on the part that experience plays in the explanation of the forgetting effect.

I extended the study by Christiansen and Chater (1999) to obtain GPE values for both conditions on all regions after the missing verb. To achieve this, a test grammar was needed that simulates the forgetting effect, that is, one that allows NNNVV sequences to be complete. Thus, in the probability table for the drop-V2 testing corpus, the column referring to the position of V2 was deleted. As a consequence, the testing probabilities corresponded to an 'N1 N2 N3 V3 V1' grammar with the first verb (V3) being bound to N1 by number agreement and the second verb (V1) to N3. This is equivalent to forgetting the prediction induced by N2.
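Schematically, the manipulation can be pictured as deleting one column from a position-by-category probability table. The layout and the values below are illustrative placeholders, not the corpus-derived probabilities actually used in the simulations.

```python
import numpy as np

# Placeholder table of grammatical next-word probabilities per sentence region
# for a doubly center-embedded item followed by the main-clause object
# (columns = regions, rows = word categories; values are invented).
regions    = ["N1", "N2", "N3", "V3", "V2", "V1", "Det", "Nobj"]
categories = ["Det", "N", "V"]
probs = np.array([
    #  N1   N2   N3   V3   V2   V1  Det  Nobj
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],   # Det
    [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0],   # N
    [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0],   # V
])

# Drop-V2 test grammar: delete the column for the V2 region, so that the test
# probabilities describe an 'N1 N2 N3 V3 V1 ...' sequence, i.e. the prediction
# induced by N2 is "forgotten".
v2 = regions.index("V2")
drop_v2_regions = [r for i, r in enumerate(regions) if i != v2]
drop_v2_probs   = np.delete(probs, v2, axis=1)

print(drop_v2_regions)      # ['N1', 'N2', 'N3', 'V3', 'V1', 'Det', 'Nobj']
print(drop_v2_probs.shape)  # (3, 7)
```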

The GPE for the ungrammatical conditions was calculated against these drop-V2 probabilities. So, if the network is making grammatical predictions, the error values for V1 and subsequent regions should be higher in the drop-V2 condition. On V1 the SRN would predict a verb in number agreement with N2. Then the network would predict another verb, but the test grammar predicts the determiner. After this point the network's predictions should be completely confused, because the just observed sequence is inconsistent with any structural generalization developed during training. If the network's predictions are not too locally dependent, they should also be wrong for the last word (the direct object of the main clause).
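For the error measure itself, a simplified stand-in for the GPE illustrates why the error should rise at these regions. The full GPE of Christiansen and Chater (1999) additionally includes a miss term for under-activated grammatical continuations, which this sketch omits, and the activation values below are invented.

```python
import numpy as np

def simple_gpe(activations, licensed):
    """Simplified stand-in for the grammatical prediction error: one minus the
    share of output activation falling on continuations that the test grammar
    licenses. (The full GPE also contains a miss term, omitted here.)"""
    act = np.asarray(activations, dtype=float)
    hits = act[licensed].sum()            # activation on licensed words
    false_alarms = act[~licensed].sum()   # activation on unlicensed words
    return 1.0 - hits / (hits + false_alarms)

# Toy illustration of the determiner region in the drop-V2 condition: output
# units for [Det, N, V], activations invented. The network still expects a
# verb, but the drop-V2 grammar licenses only the determiner of the
# main-clause object, so the error value comes out high.
activations = np.array([0.10, 0.05, 0.70])
licensed    = np.array([True, False, False])
print(round(simple_gpe(activations, licensed), 2))   # 0.88
```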
