
Chapter 4 Two SRN Prediction Studies

The impact of structural regularity is, if anything, disconfirmed by several studies that found a subject advantage on the relativizer.

Changing the RC type proportions in favor of the SRC in simulation 2 decreased the object advantage dramatically, and the RC region showed a subject advantage after two training epochs. This finding is also inconsistent with the human data: in empirical studies, only an object preference was found on the RC region (Hsiao and Gibson, 2003; Lin and Garnsey, 2007; Qiao and Forster, 2008). The assessment of frequency effects in simulation 2 should therefore be understood as a tentative approach to accounting for the complex interplay of statistical constraints that drive learning; direct predictions for sentence processing patterns may not be justified. An SRN-based regularity test like the one in simulation 1 is more or less straightforward as long as the structures in question are clearly defined, but the structural choice may not reflect the regularity relations that actually shape skill in human readers. To obtain more precise predictions, further corpus inspections are necessary. For example, the exact proportion of RC-like structures or elided-subject clauses relative to the whole corpus was neglected in the present study but could potentially have influenced the results.
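To illustrate what such a corpus inspection could look like, the sketch below estimates the proportion of RC-like structures relative to the whole corpus from POS-tag sequences. The file format, the tag names, and the pattern heuristic are placeholder assumptions for illustration, not the materials or criteria used in the simulations.

```python
# Hypothetical corpus inspection: estimate the proportion of RC-like
# structures relative to the whole corpus. Tag names ("V", "DE", "N")
# and the file format (one space-separated POS sequence per line) are
# placeholder assumptions, not the actual simulation materials.

def looks_like_rc(tags):
    """Rough heuristic: a verb, later the relativizer DE, later a noun --
    the shape of a Mandarin prenominal relative clause."""
    try:
        v = tags.index("V")
        de = tags.index("DE", v + 1)
        tags.index("N", de + 1)
        return True
    except ValueError:
        return False

def rc_proportion(path):
    total = rc_like = 0
    with open(path, encoding="utf-8") as corpus:
        for line in corpus:
            tags = line.split()
            if not tags:
                continue
            total += 1
            rc_like += looks_like_rc(tags)
    return rc_like / total if total else 0.0

if __name__ == "__main__":
    print(f"RC-like proportion: {rc_proportion('corpus_pos.txt'):.3f}")
```

The same counting scheme could be extended with a second heuristic for elided-subject clauses; the point is only that such proportions are cheap to obtain and would sharpen the regularity estimates that feed the predictions.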

Note that the regularity pattern in Mandarin, as revealed by the simulations, is not easily comparable with the English simulation. In English, the difficulty effects occurred mainly on the verbs. This is due to the number agreement between subject and predicate; notably, no agreement is necessary between the verb and its direct object. As a side effect, this agreement pattern delivers a kind of semantic information comparable to thematic roles. Agreement thereby gives rise to a simulation of integration difficulty effects, which arise from the need to relate verbs to their subject. I hypothesize that these "integration effects" are the main reason for the good region-by-region fit to the human data. Mandarin, on the other hand, does not contain such specific noun-verb dependencies. In a sense, the network merely needs to count nouns and verbs instead of establishing pairwise relationships. Thus, the Mandarin-trained network is not required to deal with the concept of a sentential subject, and consequently no "integration difficulty" comparable to English is expected. Of course, this does not match human processing of Mandarin: predicates and their arguments are indeed involved in dependencies such as thematic roles and other semantic relationships. It is conceivable that the lack of specific noun-verb relationships is the reason for the missing pattern match between the Mandarin simulation and the human data. Implementing the missing dependencies, in a way similar to the simplified English grammar, seems a straightforward test of that hypothesis. The interpretability of the results of such a simulation would, however, be questionable.
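As a rough illustration of the asymmetry discussed above, the sketch below contrasts a toy English-like generator that enforces subject-verb number agreement with a Mandarin-like generator that marks no such dependency. The lexicon and rules are invented for illustration and are not the grammars used in the simulations.

```python
# Minimal sketch of the contrast discussed above: an English-like toy
# generator enforces number agreement between subject and verb, while a
# Mandarin-like one marks no such dependency. Lexicon and rules are
# invented for illustration; they are not the thesis grammars.
import random

EN_NOUNS = {"sg": ["dog", "girl"], "pl": ["dogs", "girls"]}
EN_VERBS = {"sg": ["chases", "sees"], "pl": ["chase", "see"]}
ZH_NOUNS = ["gou", "nühai"]   # no number marking
ZH_VERBS = ["zhui", "kan"]    # no agreement morphology

def english_sentence():
    num = random.choice(["sg", "pl"])        # subject number
    subj = random.choice(EN_NOUNS[num])
    verb = random.choice(EN_VERBS[num])      # verb must agree with subject
    obj = random.choice(EN_NOUNS[random.choice(["sg", "pl"])])
    return f"{subj} {verb} {obj}"

def mandarin_like_sentence():
    # No pairwise subject-verb dependency to learn: any noun with any verb.
    return " ".join([random.choice(ZH_NOUNS),
                     random.choice(ZH_VERBS),
                     random.choice(ZH_NOUNS)])

if __name__ == "__main__":
    for _ in range(3):
        print(english_sentence(), "|", mandarin_like_sentence())
```

Trained on the English-like output, an SRN has to carry the subject's number across intervening material to predict the verb form; trained on the Mandarin-like output, no pairwise noun-verb relation needs to be tracked, which is exactly the difference the hypothesis turns on.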

A possible interpretation of the overall contradictory simulation results with respect to the human data is that the effects observed here do not, in fact, reflect experience-relevant regularities. Assuming, on the other hand, that the simulation results are interpretable as showing regularity properties that play a role in human sentence comprehension, there are two possible interpretations of the results: a) Assuming regularity plays a role in the extraction preference, the fact that the regularity effect on the ORC was very weak in the simulations could be one of the reasons for the inconclusive empirical results. b) On

