
Science of Aphasia 5 Cross Linguistic Aspects of Aphasia ...


DISCRIMINATION OF SIMILAR ENVIRONMENTAL SOUNDS FOLLOWING STROKE:
LEFT HEMISPHERE PATIENTS FAIL TO USE SEMANTIC FACILITATION

Alessandra LORENZI, Luigi A. Vignolo
University of Brescia

Auditory nonverbal agnosia (auditory agnosia sensu strictiori) is the selective inability to recognize nonverbal sounds. Carl Kleist (1928) maintained that the inability to perceive isolated sounds or noises (perceptive Geräuschtaubheit) should be distinguished from the inability to understand the meaning of noises (Geräuschsinntaubheit). Quantitative behavioural studies carried out since the 1960s (see Vignolo, 1982; Schnider et al., 1994) confirmed this dichotomy and investigated its anatomical bases. They established the existence of two types of auditory agnosia: an acoustic-discriminative one, preferentially associated with lesions of the right hemisphere, and a semantic-associative one, specifically associated with lesions of the left hemisphere and with aphasia. The former consists of imperception of the acoustic structure of sounds. The latter consists of defective recognition of the meaning of sounds and may be considered a cognitive rather than a perceptual disorder, involving one or several sensory modalities and consisting in the inability to put together different aspects of the same concept.

The problem with all these studies was that acoustic discrimination and semantic recognition were investigated with tests involving different sensory modalities: discrimination was tested by a purely auditory same/different test, while identification was tested by an auditory-visual multiple-choice test. This difference made the results difficult to compare.

The present study was undertaken to verify the right-left difference employing same/different discrimination tests strictly confined to the auditory modality. We wished to investigate the ability to discriminate pairs of acoustically similar environmental sounds, with or without semantic help. We assumed that the discrimination between two acoustically similar sounds should be facilitated when they originate from two different semantic sources (e.g. miaowing cat – crying baby), as opposed to when they come from two sources belonging to the same semantic group (e.g. cat 1 – cat 2). In the first case the semantic difference adds to the acoustic difference to permit the discrimination, while in the second case no such facilitation is present. If this is true, patients with right hemisphere damage (who can identify the meaning of sounds) should be able to take advantage of the semantic help, like normal controls, while patients with left hemisphere damage (who have problems in identifying the meaning) should not be able to make good use of this help.

A new test, the Environmental Sounds Discrimination Test, was administered to 20 controls and 12 unilateral hemispheric stroke patients (5 RBD [right brain damaged], 7 LBD [left brain damaged]) harbouring a small or medium-sized lesion (diameter less than or equal to 3 cm) on the CT scan. Exclusion criteria were: a history of hearing difficulties before the stroke, multiple lesions, trauma, and neurodegenerative diseases. The test consists of 40 items, each made up of a pair of meaningful environmental sounds. Fourteen items are pairs of acoustically similar sounds originating from different semantic sources (e.g. applause – typewriter): they constitute Task 1. Fourteen items are pairs of acoustically similar sounds originating from sources belonging to the same semantic category (e.g. thunder 1 – thunder 2; wind 1 – wind 2): they constitute Task 2. The remaining 12 items are pairs of identical sounds (e.g. hen – hen). Each patient was told that he was going to hear two familiar noises one after the other, and was asked to listen to them carefully and to say whether they were the same or different.
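For concreteness, the item structure and error scoring described above can be sketched in a few lines of code. This is an illustrative model only, not the authors' materials: the example sound names and the scoring helper are invented for this sketch, and only the item counts (14 + 14 + 12) and the same/different response format come from the abstract.

```python
# Minimal sketch of the Environmental Sounds Discrimination Test's structure
# (illustrative; item names are placeholders, counts follow the abstract).
from collections import Counter

# Each item: (sound_a, sound_b, task_label, correct_answer)
items = (
    [("miaowing cat", "crying baby", "task1", "different")] * 14  # different semantic sources
    + [("thunder 1", "thunder 2", "task2", "different")] * 14     # same semantic category
    + [("hen", "hen", "identical", "same")] * 12                  # identical pairs
)

def score(responses):
    """Count errors per task given a list of 'same'/'different' responses."""
    errors = Counter()
    for (_, _, task, correct), answer in zip(items, responses):
        if answer != correct:
            errors[task] += 1
    return errors

# A hypothetical response sheet with every item answered correctly:
perfect = [correct for (_, _, _, correct) in items]
print(score(perfect))  # no errors in any task
```

Under this scheme, the group comparisons in the study reduce to comparing the per-task error counts returned by `score` across controls, RBD, and LBD patients.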

We expected that: (a) controls would perform both tasks well, but Task 1 (with semantic help) better than Task 2 (without semantic help); (b) RBD patients would perform both tasks much worse than the other groups but, like controls, Task 1 better than Task 2 (because both controls and RBD patients use the semantic help); (c) LBD patients would perform both tasks better than RBD patients, without significant differences between Task 1 and Task 2 (being unable to use the semantic help).

Results confirmed our expectations. The total number of errors made by RBD patients was double that of the LBD patients and more than triple that of controls. Both controls and RBD patients could easily discriminate between similar sounds originating from different semantic sources (Task 1). Virtually all errors made by these two groups concerned the discrimination of sounds produced by the same semantic source (Task 2). This indicates that in both controls and right brain damaged patients, semantic help makes acoustic discrimination significantly easier. By contrast, LBD patients were the only group in which a remarkable number of errors on Task 1 was found. This indicates, in our view, that semantic help is less efficient in LBD patients than in RBD patients and controls.

In conclusion, this study, carried out with a test specifically confined to the acoustic sphere, provided further evidence of the semantic impairment typical of left brain damaged subjects.
