
Connectionist Modeling of Experience-based Effects in Sentence ...


4.3 RC Extraction in Mandarin

The GPE score measured on a certain word only tells us how well the predictions based on the previous words fit the probabilistic grammar. It does not include any effect of the current word itself.
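To make this concrete, here is a minimal sketch of how such a prediction-error score can be computed, following one common formulation of the Grammatical Prediction Error (GPE; Christiansen & Chater, 1999). The function names and the dictionary-based interface are illustrative, not taken from the simulations reported here.

```python
def gpe(activations, grammatical_probs):
    """Grammatical Prediction Error for one word position (sketch).

    activations: dict mapping each word to the network's output activation.
    grammatical_probs: dict mapping each grammatical continuation to its
    conditional probability under the grammar (ungrammatical words absent).
    """
    # Hits: activation assigned to continuations the grammar allows.
    hits = sum(a for w, a in activations.items() if w in grammatical_probs)
    # False alarms: activation assigned to ungrammatical continuations.
    false_alarms = sum(a for w, a in activations.items()
                       if w not in grammatical_probs)
    total = hits + false_alarms
    # Misses: each grammatical word should receive a share of the total
    # activation proportional to its conditional probability; any shortfall
    # counts against the network.
    misses = sum(max(0.0, p * total - activations.get(w, 0.0))
                 for w, p in grammatical_probs.items())
    return 1.0 - hits / (hits + false_alarms + misses)
```

A network whose output distribution exactly matches the grammar's conditional probabilities yields a GPE of 0; activation on ungrammatical words, or a mismatch with the grammar's probabilities, pushes the score toward 1.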

Figure 4.4: Simulation 1: Mandarin ORC regularity. [Two panels plot GPE (0.0–1.0) against sentence region for training epochs 1–3: Mandarin SRC over the regions N1 de N2 V2 N3, and Mandarin ORC over the regions V1 de N2 V2 N3.]

See figure 4.4 for GPE scores of SRCs and ORCs by training epoch. For means and standard errors see table A.1 in the appendix. Collapsing over all regions and epochs, there was a significant advantage for object relatives. The difference shrank with increasing epochs. For the ORC there was significant improvement on the main verb over the three epochs; the SRC improved on the main verb and the relativizer. On the first region (N1/V1) there was a marginal advantage for the ORC in the first epoch. The second region (de) showed a significant object advantage in all epochs. There was also an object advantage on region 4 (V2), which, however, disappeared after the second epoch due to SRC improvement. Regions three and five showed no effect.

The results of experiment 1 showed the predicted frequency × regularity interaction. In contrast to the English results of MC02, the regularity effect appears in object relatives in Mandarin. The effect, however, is not located on the embedded RC but mainly on the relativizer. It seems as if the predictions for position 4 (here the relativizer) are easier for the ORC because of the familiarity with the sequence ‘N V …’, where the relativizer should have a quite low continuation probability due to the small RC frequency in the corpus. The SRC sequence ‘V N’, on the other hand, occurs only rarely at the sentence beginning, making more training necessary to learn the correct predictions. Over training, the network has to learn to assign a high activation to the relativizer after a ‘V N’ and to exclude almost all other words as continuations.
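The frequency argument above rests on how often a given sentence-initial sequence is continued by the relativizer in the training corpus. The following toy sketch (the corpus and counts are hypothetical, not the materials used in the simulation) shows how such a continuation probability can be estimated by simple counting:

```python
from collections import Counter

def continuation_prob(corpus, prefix, word):
    """Estimate P(word | sentence starts with prefix) by counting.

    corpus: list of sentences, each a list of word tokens.
    prefix: the sentence-initial sequence of interest, e.g. ['N', 'V'].
    word: the candidate continuation, e.g. the relativizer 'de'.
    """
    prefix = tuple(prefix)
    n = len(prefix)
    starts = Counter()  # sentences beginning with the prefix
    conts = Counter()   # ... that are continued by the target word
    for sent in corpus:
        if tuple(sent[:n]) == prefix:
            starts[prefix] += 1
            if len(sent) > n and sent[n] == word:
                conts[prefix] += 1
    return conts[prefix] / starts[prefix] if starts[prefix] else 0.0
```

For instance, in a toy corpus where most ‘N V’-initial sentences are simple main clauses and only a few continue with ‘de’, the estimated continuation probability of the relativizer after ‘N V’ is correspondingly low, matching the intuition in the text.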

Experiment 1 superficially confirms the ORC regularity hypothesis. However, as the

