Perceptual Coherence: Hearing and Seeing

If the resolution of either signal was high, then the resolution of the other signal did not affect the judgments. But if the resolution of either signal was intermediate, the resolution of the other signal affected the bias. Increasing the auditory resolution led to a stronger auditory bias, and vice versa. There was a reciprocal shift based on the reliability of each signal.
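This reciprocal weighting is what the standard maximum-likelihood model of cue combination predicts: each modality's estimate is weighted by its reliability, the reciprocal of its variance. A minimal Python sketch of that model follows; the function and the example numbers are illustrative, not values from the studies discussed here.

```python
def combine(est_a, var_a, est_v, var_v):
    """Fuse auditory and visual estimates by inverse-variance weighting."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight on audition
    fused_est = w_a * est_a + (1 - w_a) * est_v  # reliability-weighted mean
    fused_var = 1 / (1 / var_a + 1 / var_v)      # smaller than either input variance
    return fused_est, fused_var

# Equal reliability: the fused estimate sits midway between the two cues.
print(combine(est_a=0.0, var_a=1.0, est_v=2.0, var_v=1.0))  # (1.0, 0.5)

# Degrading the visual signal (larger var_v) shifts the percept toward
# audition, mirroring the reciprocal bias described above.
print(combine(est_a=0.0, var_a=1.0, est_v=2.0, var_v=4.0))  # (0.4, 0.8)
```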

It is unknown how observers estimate the variability of the sensory information. Do observers have to learn it in a conscious fashion, or would the spread of excitation across a population of cells yield the variability automatically? As argued in chapter 6, adaptation to changes in sensory arrays is multifaceted. Fairhall et al. (2001) found that adaptation to changes in the input variability occurred within 1 s for the fly, and that speed suggests that updating occurs automatically. Nonetheless, observers can still attend to one modality voluntarily and thereby override any biasing toward the modality with the greater precision.
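The second possibility raised above, that variability falls out of the population response automatically, can be illustrated under strong simplifying assumptions (Gaussian tuning curves, independent Poisson spiking, parameter values chosen arbitrarily): the decoded variance shrinks as total activity grows, so no conscious estimate is needed. This is only a sketch of one candidate mechanism, not a claim about the actual neural code.

```python
import numpy as np

rng = np.random.default_rng(0)
prefs = np.linspace(-10.0, 10.0, 41)  # hypothetical preferred stimuli
sigma = 2.0                           # assumed tuning-curve width

def population_response(stimulus, gain):
    """Poisson spike counts from Gaussian tuning curves."""
    rates = gain * np.exp(-0.5 * ((stimulus - prefs) / sigma) ** 2)
    return rng.poisson(rates)

def decode(counts):
    """Read out the stimulus and its uncertainty from one response."""
    total = counts.sum()
    estimate = (counts * prefs).sum() / total  # center of mass
    variance = sigma ** 2 / total              # shrinks as total activity grows
    return estimate, variance

# The same stimulus at low gain (unreliable) and high gain (reliable):
for gain in (2.0, 20.0):
    est, var = decode(population_response(stimulus=0.0, gain=gain))
    print(f"gain {gain:>4}: estimate {est:+.2f}, variance {var:.3f}")
```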

McGurk Effect


Speaking brings forth both visual and acoustic information concerning the articulatory production of speech sounds. We have argued that people will normally assume that the auditory and visual information represent the same activity or event (e.g., the sight of a swinging hammer and the sound of its impact). Moreover, we have argued that when there is a conflict, the more sensitive modality will dominate the percept.

When we come to speech, the role of vision is less clear. The visual information comes from three sources: (1) lip modulation; (2) maximum lip velocity; and (3) maximum lip amplitude (Summerfield, 1991). Lip reading is very difficult. Summerfield (1991) estimated that there are only about 12 distinct visual configurations, so that about 63% of speech sounds are invisible. Nonetheless, visual articulation information can be very helpful in difficult, noisy conditions. Improvements of as much as 50% have been reported (Sumby & Pollack, 1954). Furthermore, Munhall, Jones, Callan, Kuratate, and Vatikiotis-Bateson (2004) found that rhythmic head movements were correlated with the pitch (fundamental frequency) and amplitude of the speaker's voice and that visual information could improve performance by 100% over that possible using auditory information alone. The large improvement with visual input argues that speech perception is inherently multimodal and that the perceptual goal is identifying the articulatory gestures that underlie both the auditory and visual outputs (Rosenblum, 2004).

A striking illusion that has been used to study temporal and spatial ventriloquism has been termed the McGurk effect (MacDonald & McGurk, 1978; McGurk & MacDonald, 1976). Participants are shown a person saying simple consonant-vowel syllables coupled with an acoustic recording of a different syllable.
