
PhD Research Project Proposal

Title: Factors contributing to non-native accent in prosody: From perception to articulation

Department of Linguistics, University of Konstanz

Name: Yuki Asano
Email: Yuki.Asano@uni-konstanz.de
Student number: 01/773842
Supervisor: Prof. Dr. Bettina Braun

November 3, 2011


1 Introduction

Anyone who has ever learned a foreign language knows how difficult it is to acquire prosody (rhythm, intonation) in a second language (L2). What is usually referred to as a foreign accent is often retained even by advanced learners. Studies have provided evidence of foreign-accented prosody in L2 learners (e.g., Altmann and Kabak, 2011, for review), but little is known about which stages of speech processing (perception, storage or articulation) contribute to it. Some studies (e.g., Escudero et al., 2008; Broersma and Cutler, 2008) show that learners' deviant prosodic forms are not only related to articulation, but can also be caused by other levels of phonological processing, such as speech perception or the storage of novel prosodic forms in the learner's mental lexicon.

The central question of my study is whether the deviant prosodic forms produced by L2 learners are due to problems in L2 perception, problems in storing L2 phonological information, or problems in L2 production (integrating segmental and suprasegmental information at the articulatory level).

To investigate this research question, five experiments will be conducted. Experiment 1 has already tested what kinds of difficulties language learners show in their L2 prosody when producing highly frequent L2 words. Building on this basic understanding, Experiments 2 and 3 will examine whether the deviant L2 prosodic forms found in Experiment 1 are due to speech perception or storage. Experiments 4 and 5 will examine whether the difficulties found in speech production are caused by problems at the articulatory level, in addition to problems relating to learners' speech perception and storage. The study aims to give new insight into the severely understudied relationships between L2 speech perception, storage and production, taking the developmental aspects of L2 acquisition into consideration by testing learners at various L2 proficiency levels. It will also provide didactic implications, which will contribute to an improvement of L2 learning and teaching.

2 Background and state of the art

2.1 Languages in focus: German and Japanese

Japanese and German have different prosodic systems in many respects. One such aspect is speech rhythm, which has been described as the temporal sequencing of similar events (Dalton and Hardcastle, 1977, 41). Japanese is known as a mora-timed language (Cutler and Otake, 1994), in which each mora has a perceptually similar duration, while German is categorized as a stress-timed language (Kohler, 1982), in which stressed syllables define the rhythm. Additionally, stressed syllables in a German word are produced with higher pitch, longer duration, and more energy (Gussenhoven, 2007).



Other aspects are the realizations and functions of pitch accent: pitch accents in Japanese and German have different phonetic forms and phonological functions. Japanese pitch accents are characterized solely by changes in F0, while German pitch accents are manifested by changes in F0 as well as by increases in intensity and duration (Beckman and Pierrehumbert, 1986). Comparing the functional aspects, pitch accents in Japanese are lexically specified (Beckman and Pierrehumbert, 1986) and thus have primarily a lexical function (for example, áme means 'rain' and amé 'candy'); post-lexical uses appear only secondarily, given this primarily lexical restriction. In contrast, German pitch accents are used mainly for post-lexical purposes (e.g., Gibbon, 1998).

Despite the durational changes caused by stress and pitch accent, a long German vowel can still be distinguished from a short one. This is because vowel quality plays a phonemic role in German and creates an additional contrast between short and long vowels (Bennett, 1968). In Japanese, by contrast, there is no significant quality difference between short and long vowels (Ueyama, 2000). Duration is therefore a particularly important cue in Japanese, and pitch accent does not alter duration.

2.2 Phonological processing and representation in speech perception, storage and production

In order to investigate foreign-accented prosody, it is important to explore not only speech production itself, but also speech perception and storage, since speech production and perception are known to be intimately linked (a.o. Liberman and Mattingly, 1985). As Levelt's (1989) model also shows, the mental lexicon is directly connected with the speech-comprehension system, which supports the reciprocal relationship between speech perception and production. In this section, the models of speech perception and production that are relevant to my study will be briefly presented.

2.2.1 Lexical and sublexical phonology

First of all, the difference between lexical and sublexical phonology (Ramus, 2001) should be made clear, since nonword stimuli will be used in my experiments. Lexical phonological representation (see Figure 1) serves as permanent storage for the phonological forms of words, whereas sublexical phonological representation is storage within short-term memory (a.o. Gathercole, 1999) or working memory (a.o. Baddeley et al., 1998) for whatever can be represented in a phonological format, that is, words, whole utterances and nonsense sequences of phonemes (nonwords) (Ramus, 2001, 201). In acoustic representation (ibid.) (or acoustic storage in Gathercole, 1999), an acoustic record of the most recent auditory speech item is stored in sensory form (Gathercole, 1999, 413). In the case of nonwords, the link between lexical and sublexical phonological information does not exist (except when a listener tries to remember nonwords by associating them with real words). In the case of L2 words, acoustic representations, articulatory representations, sublexical phonological representations and lexical phonological representations are not connected with each other directly (cf. Kroll and Stewart, 1994). Gathercole (1999) claims that interference from L1 to L2 occurs before phonological information is stored in lexical phonological representations. Therefore, my experiments will examine the stages that precede lexical phonological representation.

Figure 1: Model of speech perception and production (Szenkovits and Ramus, 2005, 255)

2.2.2 Perception and storage

In Baddeley's working memory model (Baddeley, 1986), sublexical phonological input is assumed to be stored in working memory, within which memory traces fade within one or two seconds if they are not revived. If an articulatory control process ('articulatory rehearsal') occurs, memory traces can be maintained in the phonological buffer by means of subvocal rehearsal. This phonological loop (Baddeley, 1986) is not only specialized for the retention of verbal information over short periods of time, but is also assumed to have links to long-term memory, helping with the storage of new words as phonological episodic information (e.g., Baddeley et al., 1998; Baddeley, 2000; Baddeley and Wilson, 2002). Comparing Baddeley's (1986) model with Ramus' (2001) model (see Figure 1), the memory trace in Baddeley (1986) is comparable with acoustic representations in Ramus (2001), and storage in working memory (Baddeley, 1986) with sublexical phonological representations in Ramus (2001).



The function of the phonological loop in building episodic memory representations in long-term memory is supposed to be more relevant in the processing of unfamiliar phonological forms (e.g., in L2 or in nonwords) than of familiar words (cf. Baddeley et al., 1998). Normally, existing conceptual, lexical or phonological knowledge is used to store and learn a new word (e.g., Kroll and Stewart, 1994). When unfamiliar phonological forms are presented, lexical knowledge is not available, and it is therefore difficult to build an association to support storage and learning. In such a case, listeners tend to rely on the phonological loop system to provide the necessary temporary storage of the phonological material while more stable long-term phonological memories are being constructed (Baddeley et al., 1998, for review).

My study will examine how L1 and L2 listeners perceive unfamiliar words (which contain phonological structures of the target language) at the level of acoustic representations and store them in sublexical phonological representations. With nonwords, neither L1 nor L2 listeners have direct access to existing lexical knowledge; however, L1 listeners will probably still have an advantage, since the phonological structures of the sound stimuli are more familiar to them. At which point of phonological processing (acoustic or sublexical phonological representations) do L1 and L2 listeners show clear differences? Do L2 listeners have more difficulties when they must map the acoustic input onto sublexical phonological representations and store it by relying on the phonological loop? Or do they show difficulties at an earlier stage, before the phonological information enters the loop?

2.2.3 Speech production

Speech production varies according to the kind of production task being performed. For instance, spontaneous speech is clearly different from reading aloud (Clark and Van Der Wege, 2002; Guaïtella, 1999). In this study, two types of speech production task are relevant: immediate imitation and delayed imitation. The former might better be called 'articulation' than speech production; it does not require access to sublexical phonological representations, but relies primarily on acoustic representations. The latter requires the phonological information to be stored at least in sublexical phonological representations. In this sense, the delayed imitation paradigm accesses the same representations as semi-spontaneous speech ('output sublexical phonological representations', see Figure 1).

The process of immediate imitation is supported by the motor theory of speech perception (Liberman et al., 1967) as well as by the theory of Direct Realism (Fowler, 1986), which claim that speech perception is automatically mediated by an innate, specialised speech module to which listeners have no conscious access. Even though the theories differ as to whether specialized processes are necessary to account for speech perception (Liberman and Whalen, 2000), they share the view that immediate imitation is nothing more than phonetic gestures that are automatically mediated during speech perception. In Figure 1, immediate imitation would be indicated by the arrow that connects acoustic representations and articulatory representations. However, the theory has evoked a large amount of critical commentary (a.o. Lane, 1965; Galantucci et al., 2006; Massaro and Chen, 2008). One persuasive criticism is the claim that listeners simply perceive speech in terms of a prototypical pattern recognition process (rather than detailed articulatory gestures) using various contextual constraints; in speech comprehension, the goal is to understand, which is not necessarily linked to the actual gestures of the speaker (Massaro and Chen, 2008, 453). Despite these criticisms, I will use the motor theory of speech perception for my data interpretation and discussion, since the goal of my experimental task (immediate imitation) is not to understand, but to attend to the phonological forms as closely as possible and to imitate the stimuli as accurately as possible. The experimental task does not reflect everyday speech perception.

At the articulatory level, however, regardless of the type of speech production task, a speaker must coordinate several different prosodic cues simultaneously (e.g., Levelt, 1989). In the case of real words (except in an immediate imitation task), a phonetic or articulatory plan for each word is built during phonological encoding (Levelt, 1989) (see Figure 2) by simultaneously processing and integrating morphological, metrical and segmental information as well as post-lexical meaning, such as intonational meaning (ibid., 13). Even in the case of nonwords, it has been shown that phonological encoding takes place despite the absence of lexical cues (e.g., Christopher, 1981).

Figure 2: An outline of the architecture for the phonological encoding of connected speech (Levelt, 1989, 366)

In phonological encoding, L2 learners may face a first processing difficulty in metrical spellout. At this stage, the metrical patterns stored in the mental lexicon are used (ibid., 322). For Japanese and German, the mora-timed and stress-timed speech systems can compete with each other. Since I assume that the metrical plans influence all further processes in phonological encoding, successful metrical processing seems essential for successful phonological encoding. The present study investigates at which stage of speech production and articulation foreign-accented speech occurs. More specifically, we will examine whether L2 speakers have difficulties at the level of sublexical phonological representations or of articulatory representations when coordinating several prosodic cues.

3 Work Schedule

3.1 Experiment 1 (Data gathered in March 2011)

Experiment 1 investigates the difficulties in L2 speech production by examining how L1 and L2 speakers integrate several prosodic cues to produce frequently used everyday words. In the task, 60 participants (15 Japanese and 15 German native speakers, 15 German learners of Japanese and 15 Japanese learners of German; the learners were rated for their L2 proficiency) had to repeatedly (three times) seek a person's attention in three situations, saying the following words: two Japanese words, one with a lexical pitch accent (Sumimasen 'Excuse me', with a lexical pitch accent on the penultimate mora) and one without (Konnichiwa 'Hello'), and one German word (Entschuldigung 'Excuse me'). The task was presented on a laptop using presentation software. The experimental design allowed us to test how the strengthening of one aspect (repeated attention-seeking) influences the coordination of suprasegmental cues in two languages, ceteris paribus.

Clear transfers from L1 to L2 were found. The results relevant to Experiments 2–5 are briefly summarized: 1) German learners of Japanese ignored the Japanese lexical pitch accent and produced the utterance with various kinds of pitch accents, as in their L1, German. 2) German learners produced deviant rhythmic forms compared to those of Japanese native speakers. The analysed words, Sumimasen and Konnichiwa, are used very frequently. It can therefore be assumed that the learners have had enough exposure to be able, in principle, to store the native speakers' phonetic and phonological realizations.

Why is L2 speech production still different despite this large amount of L2 input? At which stage of speech processing do these deviant forms arise? The further experiments are designed to determine whether the deviant forms found in L2 speech production reflect an inability to distinguish two different speech rhythms and pitch movements, an inability to categorize them, problems in mapping phonological information onto representations, or problems in coordinating several prosodic cues at the articulatory level.

3.2 Experiment 2

Experiment 2 will address the question of whether the difficulties of the L2 learners in Experiment 1 are due to problems at the level of acoustic representations (= distinction) or at the level of sublexical phonological representations (= storage). 15 adult Japanese listeners, 15 Japanese learners of German and 15 German listeners with no learning history of Japanese (as a comparison group) will be tested in an AX procedure. The paired nonsense sound stimuli differ or do not differ in Japanese moraic structure and pitch accent as follows:

Condition (a): Contrastive moraic structures (2-moraic vs. 3-moraic)

mora structures      pair                 pitch accents
CVCV vs. CVVCV       zusu vs. zuusu       H(igh)H vs. HH
CVCV vs. CVVCV       zusu vs. zuusu       L(ow)H vs. LH
CVCV vs. CVCCV       zusu vs. zussu       HL vs. HL
CVCV vs. CVCCV       zusu vs. zussu       HH vs. HH
CVCV vs. CVCVV       zusu vs. zusuu       LH vs. LH
CVCV vs. CVCVV       zusu vs. zusuu       HL vs. HL

Condition (b): Contrastive pitch accents (HH vs. HL vs. LH)

pitch accents    pair                 mora structures
HL vs. HH        zusu vs. zusu        CVCV vs. CVCV
HH vs. LH        zusu vs. zusu        CVCV vs. CVCV
LH vs. HL        zusu vs. zusu        CVCV vs. CVCV
HL vs. HH        zussu vs. zussu      CVCCV vs. CVCCV
HH vs. LH        zussu vs. zussu      CVCCV vs. CVCCV
LH vs. HL        zussu vs. zussu      CVCCV vs. CVCCV
HL vs. HH        zuusu vs. zuusu      CVVCV vs. CVVCV
HH vs. LH        zuusu vs. zuusu      CVVCV vs. CVVCV
LH vs. HL        zuusu vs. zuusu      CVVCV vs. CVVCV

Condition (a) examines whether listeners are sensitive to differences between two moraic structures. Condition (b) explores whether listeners perceive differences between two pitch accent patterns. Based on the sonority values of consonants and vowels (Hardison and Saio, 2010), further candidates for experimental stimuli will be selected (e.g., /sufu/, /bada/). The candidates will be assessed for 'wordlikeness' to ensure that all stimuli are comparably (non)wordlike in both languages. This pretest for stimulus selection will be conducted with 18 Japanese and 18 German L1 speakers in a web-based experiment (WebExp2), which allows us to collect both participants' responses and their reaction times (e.g., Keller et al., 2009).

After the pretest, the selected materials will be recorded by a female native speaker of Japanese. Prior to the experiments, the durations of the first and second vowels and of the second consonants, as well as the pitch contours, will be measured to ensure that the intended contrasts are realized. Each token will be recorded three times, so that different tokens of the same type can be used in the experiment.
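The proposal does not specify a measurement tool; as one possible approach, the following sketch uses the praat-parselmouth Python library (an assumed choice, not part of the proposal) to report each token's overall duration and a coarse F0 summary per recording. Segment-level durations (first and second vowels, second consonants) would additionally require hand-labelled boundaries, which are omitted here, and the file layout under "stimuli/" is hypothetical.

    # Sketch: basic acoustic checks on recorded nonword tokens (assumed tooling).
    import glob
    import numpy as np
    import parselmouth

    for wav in sorted(glob.glob("stimuli/*.wav")):
        snd = parselmouth.Sound(wav)
        pitch = snd.to_pitch(time_step=0.01)           # F0 contour in 10 ms steps
        f0 = pitch.selected_array["frequency"]
        voiced = f0[f0 > 0]                            # drop unvoiced frames
        half = len(voiced) // 2
        print(
            f"{wav}: duration = {snd.duration * 1000:.0f} ms, "
            f"mean F0 first half = {np.mean(voiced[:half]):.0f} Hz, "
            f"mean F0 second half = {np.mean(voiced[half:]):.0f} Hz"
        )

Comparing mean F0 across the first and second halves of each token gives a quick plausibility check that, for example, an HL token actually falls and an LH token actually rises, before the stimuli enter the experiments.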

More importantly, the duration of the inter-stimulus interval (ISI) will be varied: stimuli will be presented either with a short ISI (500 ms) or with a long ISI (2000 ms). It has been reported that there is a relationship between the duration of the ISI and discrimination performance in the AX procedure (Pisoni, 1973). The condition with the short ISI enables us to test whether German learners have difficulties in distinguishing between two stimuli at the level of acoustic representations. In contrast, the stimuli presented with the long ISI will answer the question of whether there are difficulties at the level of sublexical phonological representations (e.g., Bradlow and Pisoni, 1999).

Stimuli will be randomized, assigned and presented using the neurobehavioural experiment software Presentation. Participants' responses (using signal detection theory; cf. Macmillan and Creelman, 2005) and reaction times (using multi-level regression models; cf. Baayen et al., 2008) will be analysed.
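To illustrate the planned analyses, the sketch below computes per-participant sensitivity (d') in the standard signal-detection way (hits on "different" trials, false alarms on "same" trials; cf. Macmillan and Creelman, 2005) and fits a simple reaction-time model with participants as a random factor. The file name, column names and the use of Python (pandas, SciPy, statsmodels) are illustrative assumptions; the proposal itself points to the R-based crossed-random-effects approach of Baayen et al. (2008), of which this is only a simplified stand-in.

    # Sketch: d' and RT analysis for the AX task (hypothetical data layout).
    # Assumed columns: participant, group, isi (short/long),
    # pair_type (same/different), response (same/different), rt (ms).
    import pandas as pd
    from scipy.stats import norm
    import statsmodels.formula.api as smf

    data = pd.read_csv("ax_results.csv")            # hypothetical results file

    def d_prime(sub):
        hits = ((sub.pair_type == "different") & (sub.response == "different")).sum()
        misses = ((sub.pair_type == "different") & (sub.response == "same")).sum()
        fas = ((sub.pair_type == "same") & (sub.response == "different")).sum()
        crs = ((sub.pair_type == "same") & (sub.response == "same")).sum()
        # log-linear correction avoids infinite z-scores at proportions of 0 or 1
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (fas + 0.5) / (fas + crs + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    sensitivity = (data.groupby(["participant", "group", "isi"])
                       .apply(d_prime)
                       .rename("dprime")
                       .reset_index())
    print(sensitivity.groupby(["group", "isi"]).dprime.mean())

    # Reaction times on correct "different" trials, participants as random factor.
    correct = data[(data.pair_type == "different") & (data.response == "different")]
    model = smf.mixedlm("rt ~ group * isi", correct, groups=correct["participant"]).fit()
    print(model.summary())

Comparing d' and reaction times across the short- and long-ISI conditions for each group is what would separate purely acoustic discrimination problems from problems that only emerge once the stimuli must be held in sublexical phonological representations.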

3.3 Experiment 3

In Experiment 3, a categorization (ABX) task will be conducted with the same participants as in Experiment 2. An ABX task requires listeners to attend to acoustic cues in order to build categories on which to base a decision, whereas in an AX task they generally pay attention only to the acoustic cues that are relevant for distinguishing the stimuli (cf. Council, 1989). To carry out an ABX task, it is necessary to store A and B in (sublexical) phonological representations so that they can be compared with X later (for critical views, however, see also ibid.). The participants and materials will be the same as described in Experiment 2; only the presentation of the stimuli will be modified, as follows:

Condition (a): Moraic structures of A and B are different.

A            B             X
zusu (HL)    zussu (HL)    zusu (HL) or zussu (HL)
zusu (LH)    zussu (LH)    zusu (LH) or zussu (LH)
zusu (HL)    zuusu (HL)    zusu (HL) or zuusu (HL)
zusu (LH)    zuusu (LH)    zusu (LH) or zuusu (LH)

Condition (b): Pitch accents of A and B are different.

A            B            X
zusu (HL)    zusu (HH)    zusu (HL) or zusu (HH)
zusu (HH)    zusu (LH)    zusu (HH) or zusu (LH)
zusu (LH)    zusu (HL)    zusu (LH) or zusu (HL)
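For illustration, the sketch below expands the two condition tables into a counterbalanced ABX trial list, varying which of A or B recurs as X. The data structures, token labels and the use of Python are illustrative assumptions; the actual randomization and presentation will be handled by the Presentation software, as in Experiment 2.

    # Sketch: building a counterbalanced ABX trial list from the tables above.
    import itertools
    import random

    CONDITION_A = [  # moraic structure differs, pitch accent held constant
        (("zusu", "HL"), ("zussu", "HL")),
        (("zusu", "LH"), ("zussu", "LH")),
        (("zusu", "HL"), ("zuusu", "HL")),
        (("zusu", "LH"), ("zuusu", "LH")),
    ]
    CONDITION_B = [  # pitch accent differs, moraic structure held constant
        (("zusu", "HL"), ("zusu", "HH")),
        (("zusu", "HH"), ("zusu", "LH")),
        (("zusu", "LH"), ("zusu", "HL")),
    ]

    def build_trials(pairs, condition):
        trials = []
        for (a, b), x_is in itertools.product(pairs, ("A", "B")):
            x = a if x_is == "A" else b
            trials.append({"condition": condition, "A": a, "B": b,
                           "X": x, "correct": x_is})
        return trials

    trials = build_trials(CONDITION_A, "a") + build_trials(CONDITION_B, "b")
    random.shuffle(trials)
    for t in trials[:3]:
        print(t)

Counterbalancing X across A and B ensures that a participant cannot succeed by responding to only one member of the pair, so accuracy reflects genuine categorization of the moraic or pitch-accent contrast.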

3.4 Experiments 4 and 5

Experiments 4 and 5 examine whether the deviant phonological forms found in Experiment 1 are due to difficulties in coordinating several L2 phonological cues (difficulties in articulatory organization) or to problems in sublexical phonological representations (storage). The task is an imitation task. Participants will be asked to imitate the auditory stimuli that are also used in Experiments 2 and 3. The stimuli will not be presented in pairs; instead, each of them will be presented separately. In Experiment 4, participants have to imitate a stimulus immediately after it has been presented. This experiment investigates whether learners have articulatory problems, i.e. difficulties at the level of articulatory representations. In Experiment 5, however, they will be asked to start imitating only after a beep tone (delayed imitation). During the pause (before the beep tone), another linguistic task will be carried out in order to prevent rehearsal of the stimulus before imitation. In this way, difficulties regarding the storage of information in sublexical phonological representations will be examined. One possible result would be that German learners perform well only in Experiment 4, but not in any of the other experiments; in this case, they would be able to imitate a stimulus only reflexively, a finding that would be supported by the motor theory of speech perception (Liberman et al., 1967). Another possible scenario is that learners succeed in Experiment 4 but not in Experiment 5; in this case, the likely interpretation is that their production problems relate to storage.

Throughout Experiments 2 to 5, I plan to use the same sound stimuli and to test the same participants on the same day. In this way, it can be ensured that the participants have not received any new relevant information that could influence their later performance, and the relationships between articulation/speech production, speech perception and storage can also be examined and analysed.

The project will conclude by combining the empirical evidence from the experiments described above to evaluate (and possibly revise) current theories of L1 and L2 speech production processing.

References

Altmann, H. and Kabak, B. (2011). Second language phonology: L1 transfer, universals, and processing from segments to stress, pages 298–319. Continuum, London/New York.

Baayen, R. H., Davidson, D. J., and Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4):390–412.

Baddeley, A. (2000). The episodic buffer: a new component of working memory? Trends in Cognitive Sciences, 4(11):417–423.

Baddeley, A., Gathercole, S., and Papagno, C. (1998). The phonological loop as a language learning device. Psychological Review, 105(1):158–173.

Baddeley, A. and Wilson, B. A. (2002). Prose recall and amnesia: implications for the structure of working memory. Neuropsychologia, 40(10):1737–1743.

Baddeley, A. D. (1986). Working memory. Oxford University Press, Oxford.

Beckman, M. E. and Pierrehumbert, J. B. (1986). Intonational structure in Japanese and English. Phonology Yearbook, 3:54.



Bennett, D. C. (1968). Spectral form and duration as cues in the recognition of English and German vowels. Language and Speech, 11:65–85.

Bradlow, A. R., Nygaard, L. C., and Pisoni, D. B. (1999). Effects of talker, rate and amplitude variation on recognition memory for spoken words. Perception & Psychophysics, 61(2):206–219.

Broersma, M. and Cutler, A. (2008). Phantom word activation in L2. System, 36(1):22–34.

Christopher, B. (1981). Hemispheric asymmetry in lexical access and phonological encoding. Neuropsychologia, 19(3):473–478.

Clark, H. H. and Van Der Wege, M. (2002). Psycholinguistics. In Medin, D. L., editor, Stevens' Handbook of Experimental Psychology, Memory and Cognitive Processes, number 2, pages 209–259. John Wiley, New York.

Council, N. R. (1989). Classification of complex nonspeech sounds. National Academy Press, Washington, D.C.

Cutler, A. and Otake, T. (1994). Mora or phoneme? Further evidence for language-specific listening. Journal of Memory and Language, 33(6):824–844.

Dalton, P. and Hardcastle, W. J. (1977). Disorders of fluency and their effects on communication. E. Arnold, London.

Escudero, P., Hayes, R., and Mitterer, H. (2008). Novel L2 words and asymmetric lexical access. Journal of Phonetics, 36(2):345–360.

Fowler, C. A. (1986). An event approach to the study of speech perception from a direct-realist perspective. Journal of Phonetics, 14:3–28.

Galantucci, B., Fowler, C., and Turvey, M. (2006). The motor theory of speech perception reviewed. Psychonomic Bulletin & Review, 13(3):361–377.

Gathercole, S. E. (1999). Cognitive approaches to the development of short-term memory. Trends in Cognitive Sciences, 3:410–418.

Gibbon, D. (1998). Intonation in German. In Hirst, D. and Di Cristo, A., editors, Intonation systems: a survey of twenty languages, chapter 4, pages 78–95. Cambridge University Press.

Guaïtella, I. (1999). Rhythm in speech: What rhythmic organizations reveal about cognitive processes in spontaneous speech production versus reading aloud. Journal of Pragmatics, 31(4):509–523.

Gussenhoven, C. (2007). Intonation, pages 253–280. Cambridge University Press, Cambridge.

Hardison, D. M. and Saio, M. M. (2010). Development of perception of second language Japanese geminates: Role of duration, sonority, and segmentation strategy. Applied Psycholinguistics, 31(1):81–99.



Keller, F., Gunasekharan, S., Mayo, N., and Corley, M. (2009). Timing accuracy of web experiments: A case study using the WebExp software package. Behavior Research Methods, 41(1):1–12.

Kohler, K. (1982). Rhythmus im Deutschen. In Kohler, K. J., Schäfer-Vincent, K., and Timmermann, G., editors, Experimentelle Untersuchungen zu Zeitstrukturen im Deutschen, volume 19, pages 89–106. Institut für Phonetik der Universität Kiel.

Kroll, J. F. and Stewart, E. (1994). Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. Journal of Memory and Language, 33:149–174.

Lane, H. (1965). The motor theory of speech perception: A critical review. Psychological Review, 72(4):275–309.

Levelt, W. J. M. (1989). Speaking: From intention to articulation. The MIT Press, Cambridge, MA.

Liberman, A. M., Cooper, F. S., Shankweiler, D. P., and Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74(6):431–461.

Liberman, A. M. and Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21(1):1–36.

Liberman, A. M. and Whalen, D. H. (2000). On the relation of speech to language. Trends in Cognitive Sciences, 4(5):187–196.

Macmillan, N. and Creelman, C. (2005). Detection theory: A user's guide. Lawrence Erlbaum Associates, Mahwah, New Jersey.

Massaro, D. and Chen, T. (2008). The motor theory of speech perception revisited. Psychonomic Bulletin & Review, 15(2):453–457.

Pisoni, D. B. (1973). Auditory and phonetic memory codes in the discrimination of consonants and vowels. Perception & Psychophysics, 13:253–260.

Ramus, F. (2001). Outstanding questions about phonological processing in dyslexia. Dyslexia, 7(4):197–216.

Szenkovits, G. and Ramus, F. (2005). Exploring dyslexics' phonological deficit I: Lexical vs. sub-lexical and input vs. output processes. Dyslexia, 11:253–268.

Ueyama, M. (2000). Prosodic Transfer: An Acoustic Study of L2 English vs. L2 Japanese. PhD thesis, University of California.

