stimulus onset, the transient brain response is elicited at considerably longer latencies (up to 1000 ms from stimulus onset). Unlike the N1m, the latter response yields a strong correlation between the peak latency and behavioral indices of sound detection. In essence, the latency fluctuation of the response is highly reminiscent of the behavior of the MMN (Tiitinen et al., 1994), although no time-consuming oddball paradigm and offline subtraction procedures are needed. The peak amplitude of this response is larger when stimulus duration is shortened, and it is further enhanced when subjects attend to the stimuli. Previous studies were carried out with healthy young subjects and using sinusoidal sounds only, but the observed brain response might also turn out to be useful in clinical settings because it can be obtained rapidly and it provides a direct link to behavioral sound detection (Mäkinen et al., 2004; Tiitinen et al., 2005).

1.1. The effects of aging

Aging is associated with brain volume reduction (atrophy) affecting particularly the gray matter of the frontal cortex, but also involving other regions such as temporal, parietal, subcortical (especially thalamic), cerebellar and white matter regions (Raz et al., 1998; Good et al., 2001; Jernigan et al., 2001; Tisserand et al., 2002; Alexander et al., 2006). This aging-related progressive atrophy begins in early adulthood (Mortamet et al., 2005). Neuron loss due to atrophy also affects the cochlea and its connections to the nuclei and cerebral cortex (Frisina and Walton, 2006). Bilateral auditory processing is presumably disrupted as a consequence of the destruction of callosal fibres (Pekkonen et al., 1995). Changes associated with the aging brain affect perceptual and cognitive processes as assessed with, for example, neuropsychological tests.
Deficits in memory, language skills, speed of mental processing and attention can be detected in neurologically asymptomatic aged people, and the decline in cognitive performance has been shown to be related to anatomical atrophy in the central nervous system measured with MRI (Ylikoski et al., 1998; Kennedy and Raz, 2009). The degree of aging-related cognitive decline, however, has been shown to vary considerably from one individual to another (Wilson et al., 2002). Speech processing seems to be especially susceptible to aging-related deterioration. For instance, aging is associated with reduced accuracy of behavioral reactions while attending to speech (Geal-Dor et al., 2006). Research on language functions shows that the aged have difficulties understanding speech (Divenyi et al., 2005), especially in everyday conversation (Murphy et al., 2006), in complex and noisy conditions (Pichora-Fuller and Souza, 2003; Bertoli et al., 2005), and when words are spoken rapidly (Schneider et al., 2005). Previous studies indicate aging-related decline in specific linguistic functions, such as the retrieval of phonological or orthographic components, as well as the production of spoken or written language (Goodglass et al., 1999; Taylor and Burke, 2002). It is also likely that attentional abilities are affected during aging. Gaeta et al. (2003) found aging-related alterations in the anterior–posterior scalp distribution of ERPs, alluding to a decreased efficiency of attentional processes, and Bennett et al. (2004) demonstrated that aging alters auditory ERPs associated with attentional regulation. These authors therefore hypothesized that aging-related hearing and speech comprehension deficits can in fact be explained by deficits in attentional abilities. It has also been shown that irrelevant sounds capture the attention of the aged more easily than that of the young (Andrés et al., 2006; Espinoza-Varas and Jang, 2006; Tun et al., 2002).

L.E. Matilainen et al. / Clinical Neurophysiology 121 (2010) 902–911

1.2. Non-invasive study of aging-related changes in auditory processing

One useful avenue for studying the aging-related changes in neural circuits and cognitive functioning could be the use of non-invasively obtained event-related potentials and fields (ERPs and ERFs) measured with EEG and MEG, respectively. A major advantage of these responses over behavioral measures is the information they provide on auditory processing irrespective of task demands (e.g., without attentional engagement). Auditory brain stem responses (ABRs) have aroused considerable interest as they correlate accurately with behaviorally measured hearing thresholds (Kaga and Tanaka, 1980; Stapells and Oates, 1997; Gorga et al., 2006). Unfortunately, ABRs are not elicited by speech sounds but, rather, by pure tones and clicks (e.g., Bauch et al., 1980). Thus, recent research has extended to long-latency ERPs and ERFs, whose major advantage over ABRs is that they are more closely tied to perception, reflect attentional resourcing, and can be evoked by complex sounds such as speech (for a review, see Cone-Wesson and Wunderlich, 2003). Currently, one of the most studied long-latency responses is the auditory N1 or N1m (obtained via EEG or MEG measurements, respectively), which is elicited by any discrete audible stimulus and reaches its response maximum at approximately 100 ms post-stimulus (for reviews, see Eggermont and Ponton, 2002; May and Tiitinen, 2010). The N1(m) has been suggested to reflect sensory memory (Lu et al., 1992) and signal detection (Parasuraman and Beatty, 1980). The N1(m) is sensitive to various aspects of speech, for example, speech fundamental frequency (Mäkelä et al., 2002), intonation (Mäkelä et al., 2004a,b), and periodicity (Yrttiaho et al., 2008; Yrttiaho et al., 2009).
In general, it appears to be a sensitive index of the spectral complexity of the auditory stimulus: speech stimuli evoke a more pronounced N1(m) response than sinusoids (Tiitinen et al., 1999; Kirveskari et al., 2006), and the N1(m) latency for speech sounds is longer than that for sinusoids (Eulitz et al., 1995; Diesch et al., 1996; Tiitinen et al., 1999). The N1m response elicited by speech has been shown to be more prominent in the left than in the right hemisphere (Eulitz et al., 1995; Kirveskari et al., 2006; see, however, Mathiak et al., 1999; Tiitinen et al., 1999; Shtyrov et al., 2000). In the right hemisphere, the source location of the N1m to vowel stimuli is posterior to that to sinusoids (Diesch et al., 1996). Conversely, in the left hemisphere, the source location of the N1m evoked by vowels is anterior to the source of the activity elicited by tones (Kuriki and Murase, 1989). The N1(m) is thought to reflect several temporally overlapping, spatially distributed neural sources in the auditory cortex, and it is considered usable in most clinical settings as it requires minimal testing time and offline analyses (Tremblay and Kraus, 2002; May and Tiitinen, 2010).

An important cognitive factor influencing the dynamics of long-latency ERPs and ERFs is attention, which can be directed to a subset of environmental sensory information either voluntarily, by expectations or needs, or involuntarily, by novelty or unexpectedness. In ERP and ERF studies, attentional engagement has been observed to augment the P1(m), N1(m) and P3(m) response amplitudes (for N1(m), see, e.g., Tiitinen et al., 2005; for P3, see, e.g., Escera et al., 2003). However, despite decades of research into the effects of aging via long-latency brain responses, several challenges remain.
In N1(m) studies, linking the physiologically measured brain responses to cognitive functions or behavioral performance has turned out to be somewhat difficult, as the N1(m) latency and amplitude poorly reflect reaction time (Jaskowski et al., 1994). A variety of effects on response latency and amplitude have been found: for example, the latency of the N1(m) to speech and non-speech sounds has been found either to be prolonged as a result
of aging (Pekkonen et al., 1995; Tremblay et al., 2002; Geal-Dor et al., 2006) or to remain intact (Amenedo and Diaz, 1999; Squires and Ollo, 1999; Fjell and Walhovd, 2001). Prolonged response latencies have been suggested to be a consequence of aging-related cortical neuronal loss and the related decrease in signal processing speed, or of a decrease in the responsiveness of the auditory neurons (Pekkonen et al., 1995; Tremblay et al., 2002; Harkrider et al., 2005). With respect to response amplitudes, the majority of EEG (Amenedo and Diaz, 1999; Harkrider et al., 2005; Snyder and Alain, 2005; Fabiani et al., 2006) and MEG studies (e.g., Golob and Starr, 2000) have related aging to an enhanced N1(m) amplitude. However, in other studies, aging appears to decrease the N1 amplitude (Papanicolau et al., 1984) or to have no effect on it (Polich, 1997). Enhanced response amplitudes in the aged have been explained by, for example, aging reducing central inhibitory mechanisms (Amenedo and Diaz, 1999; Harkrider et al., 2005). Based on the above findings, it is difficult to arrive at a consistent picture of what actually happens to the neuronal circuits of the aging brain: the timing and strength of brain activation varies from study to study. Tentatively, it appears that the variation of the results between the above studies may arise from methodological heterogeneity, as different sample sizes, stimuli, measurements and recording conditions have been used: the sample size in the above-mentioned studies varies between 20 (Pekkonen et al., 1995) and 120 (Polich, 1997), and the average age of the aged subjects between 55 (Harkrider et al., 2005) and 70 years (Geal-Dor et al., 2006). Tones (Pekkonen et al., 1995; Polich, 1997), vowels (Snyder and Alain, 2005), syllables (Harkrider et al., 2005), and words (Geal-Dor et al., 2006) have been used as stimuli.
Also, the paradigms differ from each other, consisting of, for example, the oddball paradigm (Polich, 1997; Amenedo and Diaz, 1999) and a memory task (Golob and Starr, 2000).

1.3. The aims of the study

Previous studies on the transient brain response have focused exclusively on the effects of sinusoidal stimuli in healthy young subjects (Mäkinen et al., 2004; Tiitinen et al., 2005). Thus, the main objective of this study was to examine the suitability of the transient brain response for applied research in a group of aged subjects. Simultaneously, due to our use of both sinusoidal and speech stimuli, further observations on sound detection in both young and aged subjects with respect to stimulus complexity were expected to emerge. More specifically, based on previous results on the effects of aging on the N1(m) response, differences between young and aged subjects in both response amplitude (e.g., Amenedo and Diaz, 1999; Golob and Starr, 2000; Harkrider et al., 2005) and latency (e.g., Pekkonen et al., 1995; Geal-Dor et al., 2006) were expected. In light of previous N1(m) results regarding the effect of stimulus complexity (Tiitinen et al., 1999; Kirveskari et al., 2006), speech stimuli were expected to elicit more prominent cortical activity than sinusoidal stimuli, and attentive engagement with the stimuli was expected to further enhance the response amplitudes (Mäkinen et al., 2004; Tiitinen et al., 2005).

2. Methods

2.1. Subjects

Nine healthy young subjects (average age 24 years; range 21–27 years; five females) and nine healthy aged subjects (average age 61 years; range 51–79 years; five females; see Table 1) participated in the study with written informed consent and the approval of the Ethical Committee of Helsinki University Central Hospital (HUCH). All subjects were right-handed, native Finnish speakers, had no documented hearing disorders or brain dysfunctions, and did not use drugs affecting the central nervous system. Originally, a group of 15 young subjects was studied, from which six were excluded from the final analyses due to technical problems during the measurement session and/or because of unreliable source localization results. The nine aged subjects were matched both in gender and handedness with the young subjects.

Table 1
Age and gender of the young and the aged subjects.

Young subjects                  Aged subjects
Subject  Age (years)  Gender    Subject  Age (years)  Gender
Y_01     25           f         A_01     51           f
Y_02     24           m         A_02     71           f
Y_03     21           m         A_03     79           f
Y_04     24           f         A_04     58           m
Y_05     26           f         A_05     53           m
Y_06     21           f         A_06     61           f
Y_07     23           m         A_07     53           m
Y_08     27           m         A_08     56           m
Y_09     25           f         A_09     64           f

2.2. Stimuli

The stimuli were created using a sinusoidal tone and a speech sound as raw material. The production of the stimuli began with the recording of a natural utterance (the vowel /a/) produced by a native male speaker of Finnish with a fundamental frequency of 113 Hz. The recording was conducted in an anechoic chamber with a high-quality condenser microphone (Bruel and Kjaer 4188). The sound was digitized using a sampling frequency of 22050 Hz and a resolution of 16 bits. The strongest harmonic, located at 570 Hz in the vicinity of the first formant of the vowel /a/, was identified by Fourier transforming the time-domain sound waveform. This frequency value was then used to produce a 570-Hz sinusoid with a constant amplitude and a duration of 750 ms. A speech signal of equal duration was then synthesized by concatenating copies of one fundamental period of the vowel waveform. The waveforms of both the tone and the synthesized speech sound were then multiplied by a 750-ms ramp signal in order to temporally modify their intensity characteristics. The amplitude of the ramp, in dB, rose linearly by 90 dB over its duration.

In order to calibrate the stimuli, the A-weighted sound pressure level was measured from the output of the sound delivery system.
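The tone synthesis and ramp construction described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' original synthesis script: the parameter names are ours, the ramp is expressed relative to a 0-dB reference at stimulus offset, and the final calibration to the measured playback level is omitted.

```python
import numpy as np

# Parameters taken from the text (assumed variable names).
FS = 22050       # sampling frequency (Hz)
DUR = 0.750      # stimulus duration (s)
F_TONE = 570.0   # strongest harmonic near the first formant of /a/ (Hz)
RAMP_DB = 90.0   # total linear-in-dB rise over the stimulus duration

t = np.arange(int(FS * DUR)) / FS

# Constant-amplitude 570-Hz sinusoid.
tone = np.sin(2 * np.pi * F_TONE * t)

# Ramp whose level rises linearly in dB, from -90 dB at onset to 0 dB
# (re: the offset level) at the end of the stimulus; after calibration
# of the offset level to 60 dB SPL this corresponds to the reported
# inaudible-to-audible sweep.
level_db = -RAMP_DB * (1.0 - t / DUR)
ramp = 10.0 ** (level_db / 20.0)

stimulus = tone * ramp
```

The speech stimulus would be produced the same way, with `tone` replaced by concatenated copies of one fundamental period of the recorded vowel waveform trimmed to the same length.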
The levels of both the sinusoidal and the speech stimulus were adjusted so that the sound pressure level (SPL) at the end of the stimuli corresponded to 60 dB. Hence, the procedure yielded two stimuli, a tone and a vowel, whose intensity increased linearly from inaudible (−30 dB) to audible (60 dB) over a duration of 750 ms.

2.3. Measurements and procedures

Magnetic field gradients produced by cortical activation were measured with a 306-channel whole-head MEG device (Vectorview, Elekta Neuromag Oy, Finland) at the BioMag Laboratory at HUCH. The sensor array of the device has 102 recording sites, each comprising one magnetometer and two orthogonal planar gradiometers that measure the longitudinal and latitudinal derivatives of the magnetic field normal to the magnetometer helmet. The measurements were carried out in two recording conditions. In the active condition, the subject was instructed to attend to the sounds and to press a response key with his or her right-hand index finger when the sounds became audible. In the passive condition, the subject was instructed to ignore the auditory stimuli and to watch a silent movie. The head coordinates
