Perceptual Coherence: Hearing and Seeing


One way to strip away the surrounding components and make them form a separate auditory source is to manipulate the temporal properties of the components. The most powerful acoustic cue for the fusion of component sounds is onset asynchrony (see chapter 9). In nearly all instances, if frequency components start at the same instant, they are perceived as coming from the same object. On this basis, we would expect that if the target component started before or after the remaining components, it would be treated as a separate entity, and performance would then mimic that of a single component presented alone. D. M. Green and Dai (1992) found that leads and lags as small as 10 ms disrupted detection (this is slightly shorter than the 20–30 ms delay that causes components to split apart in other tasks discussed in chapter 9). Hill and Bailey (2002) attempted to create a separate stream by presenting the target component to one ear and the flanking components to the other ear (termed dichotic presentation). If the target and flanking components were synchronous, dichotic presentation was only slightly worse. If the components were presented asynchronously, detection was much worse, but there was no added decrease due to dichotic presentation. Onset asynchrony between two sounds is a much stronger basis for the formation of two sounds than is a difference in spatial location (also discussed in chapter 9).
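The onset-asynchrony manipulation can be sketched in a few lines. This is a hypothetical stimulus generator, not code from the studies cited; the component frequencies, the 300 ms duration, and the sampling rate are illustrative assumptions, and only the 10 ms lag comes from the text.

```python
import numpy as np

def complex_tone(freqs, onsets=None, dur=0.3, fs=44100):
    """Sum sinusoidal components, each gated on at its own onset time (s)."""
    n = int(dur * fs)
    t = np.arange(n) / fs
    if onsets is None:
        onsets = [0.0] * len(freqs)
    out = np.zeros(n)
    for f, onset in zip(freqs, onsets):
        comp = np.sin(2 * np.pi * f * t)
        comp[t < onset] = 0.0  # component is silent before its onset
        out += comp
    return out

# Flanking components start together; the 1000 Hz "target" lags by 10 ms,
# the asynchrony Green and Dai found sufficient to disrupt detection.
# (Frequencies are assumed, chosen only for illustration.)
stimulus = complex_tone([800.0, 1000.0, 1250.0], onsets=[0.0, 0.010, 0.0])
```

Gating one component later in time is all it takes; the flankers then cohere as one source while the late-starting target tends to be heard separately.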

A second way to strip away the surrounding components is by means of coherent amplitude modulation. There are several possibilities: (a) the target component is modulated, but the nontarget components are not (or the reverse); or (b) all the components are modulated, but the target component is modulated out of phase with the other components. For all of these conditions, the detection of the increment in intensity is impaired.
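Condition (b), out-of-phase modulation of the target, could be sketched as follows. The 10 Hz modulation rate, full modulation depth, and component frequencies are assumptions for illustration; the text specifies only that the target's modulation is out of phase with the flankers'.

```python
import numpy as np

fs, dur, fm = 44100, 0.3, 10.0  # sample rate, duration, AM rate (assumed)
t = np.arange(int(dur * fs)) / fs

# All components are fully amplitude modulated, but the target's envelope
# is shifted by half a modulation cycle (pi radians) relative to the flankers'.
env_flank = 0.5 * (1 + np.sin(2 * np.pi * fm * t))
env_target = 0.5 * (1 + np.sin(2 * np.pi * fm * t + np.pi))

flankers = env_flank * (np.sin(2 * np.pi * 800 * t) + np.sin(2 * np.pi * 1250 * t))
target = env_target * np.sin(2 * np.pi * 1000 * t)
stimulus = flankers + target
```

Because the target waxes exactly when the flankers wane, its envelope no longer tracks theirs, and it is pulled out of the common source.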

Comodulation Masking Release

The fundamental lesson from profile analysis research is that the acoustic signal is typically treated as representing a single source and that listeners attend to the entire spectrum. Isolating individual components leads to degradation in the ability to detect the amplitude change of those components. We can also demonstrate from the reverse direction that an acoustic signal with a coherent temporal pattern is treated as a single source. Suppose we have a target masked by noise. How can we make that target more discriminable? If adding more tonal components to form a coherent spectral shape makes one tonal target more discriminable, then, paradoxically, adding noise that combines with the masking noise to form a coherent noise source also should make the target more detectable. The trick is to amplitude modulate both the original masking noise and the added noise, identically in frequency, phase, and depth, to make the coherent noise into a source. The coherent noise source now seems separate from the target.
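The comodulation trick can be sketched by imposing one low-frequency envelope on both noise bands. The band centers, bandwidths, and 8 Hz envelope rate are assumed values for illustration; what matters, per the text, is that the modulation is identical in frequency, phase, and depth across the two bands.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 44100, 0.5
n = int(dur * fs)
t = np.arange(n) / fs

def narrowband_noise(center, bw):
    """Band-limited Gaussian noise via FFT-domain filtering (illustrative)."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    spec[(f < center - bw / 2) | (f > center + bw / 2)] = 0
    return np.fft.irfft(spec, n)

# One slow envelope, identical in frequency, phase, and depth for both bands,
# makes the on-target masker and the remote flanking band cohere as one source.
envelope = 0.5 * (1 + np.sin(2 * np.pi * 8.0 * t))  # 8 Hz comodulation (assumed)
masker = envelope * narrowband_noise(1000, 100)   # noise band around the target
flanker = envelope * narrowband_noise(2000, 100)  # added remote noise band
```

With the shared envelope, both bands dip and swell together; the comodulated noise groups into a single source apart from the target, which is the release from masking the section describes.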
