House of Acoustics Annual Report 2008

House of Acoustics
Department of Electrical Engineering
Technical University of Denmark
Annual Report 2008


Department of Electrical Engineering

Edited by Finn Jacobsen in February/March 2009



Front page: Predominant mode (2, 2) of a doubly curved plate with cross-stiffeners in the x- and y-direction. See the description of the project 'Modelling structural acoustic properties of loudspeaker cabinets' on p. 10.

Acoustic Technology and Hearing Research, Speech and Communication, Department of Electrical Engineering, Technical University of Denmark, Building 352, Ørsteds Plads, DK-2800 Kgs. Lyngby, Denmark
Telephone +45 4525 3930
Direct tel. +45 4525 39xx
Telefax +45 4588 0577
Web server: http://www.elektro.dtu.dk/English/research/at.aspx



CONTENTS

Chairmen's Report 5
Staff 7
1 Research 9
Acoustic Technology 9
Structureborne Sound 9
Acoustic Transducers and Measurement Techniques 11
Noise and Noise Control 16
Room Acoustics 17
Building Acoustics 20
MSc Projects 22
Publications 28
Hearing Systems, Speech and Communication 31
Auditory Signal Processing and Perception 31
Speech Perception 36
Audiology 40
Audio-Visual Speech and Auditory Neuroscience 41
Objective Measures of Auditory Function 42
MSc Projects 43
Publications 46
2 Teaching Activities 49
Introductory Level 49
Advanced Level 50
Lecture Notes Issued in 2008 52
Appendix A: Extramural Appointments 53
Appendix B: Principal Intramural Appointments 55


CHAIRMEN'S REPORT

In the middle of 2008 the Department of Electrical Engineering was reorganised; the five sections were replaced by eight research groups. A local consequence is that the section Acoustic Technology was replaced by two groups, 'Acoustic Technology' and 'Hearing Systems, Speech and Communication', where the latter essentially corresponds to the Centre for Applied Hearing Research. At the same time the Head of the Section, Mogens Ohlrich, resigned from the position he had held for seven years. Mogens was succeeded by the two group chairmen, Finn Jacobsen (Acoustic Technology) and Torsten Dau (Hearing Systems, Speech and Communication). Seen from the outside it may be slightly confusing that 'Acoustic Technology' is now only a part of the former section Acoustic Technology; on the other hand it might have been even more confusing if all names had been changed. Of course the two groups, which share buildings and research facilities, continue to have a very close relationship, and we still offer the successful international MSc programme 'Engineering Acoustics', which involves courses and projects given by both groups.

Another significant event in 2008 was that DTU's former research dean Kristian Stubkjær replaced Jørgen Kjems as Head of the Department of Electrical Engineering on 1 December. Jørgen Kjems was appointed Head of Department ad interim in mid-2007; his task was to solve internal organisational problems resulting from DTU's merger with other research institutions at the beginning of 2007. There are reasons to believe that he succeeded, and that the resulting restructured organisation with Kristian Stubkjær as Head of Department will thrive.

On the staff side there have been significant changes. We are in the middle of a generational handover, with Mogens Ohlrich and Torben Poulsen now working part time. Moreover, as mentioned in last year's report, our long-time colleague Jens Holger Rindel left the University at the end of 2007 in order to concentrate on Odeon, a spin-off company from his room acoustic research; and in September 2008 another long-time colleague, Anders Christian Gade, also left us after a one-year leave of absence to concentrate on work in a joint consulting company. On the other hand, Jonas Brunskog was appointed associate professor in the spring of 2008, and Cheol-Ho Jeong was appointed assistant professor at the end of the year. Jonas's and Cheol-Ho's research activities are in architectural acoustics and structureborne sound, and together they give an introductory course for students with a background in civil engineering, 'Building Acoustics' (given for the first time by Jens Holger Rindel in 2007), the course 'Architectural Acoustics', and, from 2009, 'Environmental Acoustics'; they also contribute to 'Sound and Vibration'. At the end of the year Jörg Buchholz was appointed associate professor. Jörg's research activities are in communication acoustics, human sound perception and audiology. In terms of teaching, he will be taking over Torben Poulsen's course 'Acoustic Communication', which is a compulsory part of the MSc programme Engineering Acoustics.

Research grants received in 2008 include a three-year project on 'Spectro-temporal processing of complex sounds in the human auditory system' supported by the Danish Research Council (FTP); a five-year cooperation agreement between CAHR and the three Danish hearing aid companies GN ReSound, Oticon and Widex that supports research and education in the field of human acoustic communication with 12.4 million kr.; and 'A binaural master hearing aid platform' supported by the Oticon Foundation. New research projects started in 2008 include Iris Arweiler's PhD project 'Processing of spatial sounds in the impaired auditory system', co-financed by DTU, the Graduate School on Sense Organs, Nerve Systems, Behaviour, and Communication ('SNAK') and Phonak; Yu Luan's industrial PhD project with Bang & Olufsen on the modelling of structural acoustic properties of loudspeaker cabinets; and David Pelegrin García's PhD project 'Speech comfort in classrooms'. The latter project, which is financed by the Swedish insurance and research organisation AFA, is part of a larger cooperative project together with the Logopedics, Phoniatry and Audiology group at Lund University, Sweden. At the end of the year we received a PhD scholarship from the Department of Electrical Engineering for Efrén Fernandez Grande for a project on acoustic holography.

Two of our PhD students defended their theses in 2008: in November Torsten Elmkjær defended his thesis on helmets with active noise control, and about one month later it was Gilles Pigasse's turn to defend his thesis 'Cochlear delays in humans using otoacoustic emissions and auditory evoked potentials'. In the EU-funded PhD student exchange programme 'European Doctorate in Sound and Vibration Studies II' we hosted two visiting PhD students in 2008: Eleftheria Georganti, University of Patras, Greece, studied room transfer functions and reverberant signal statistics, supervised by Finn Jacobsen; and Lars-Göran Sjökvist, Chalmers University, Sweden, studied structural sound transmission and attenuation in lightweight structures, supervised by Jonas Brunskog and Finn Jacobsen. Other guest PhD students include Bastiaan Warnaar, University of Amsterdam, who spent three months studying computational models of inner-ear hearing loss, and Yong-Bin Zhang, Hefei University of Technology, China, who began a one-year stay in September during which he studies near field acoustic holography, supervised by Finn Jacobsen.

Once again Steven Greenberg, The Speech Institute, Berkeley, CA, USA, spent three months working together with Thomas Ulrich Christiansen on speech research. Another guest researcher, Arturo Orozco Santillán, Universidad Nacional Autónoma de México, worked together with Finn Jacobsen on a project on sound intensity during the summer.

In October CAHR celebrated its fifth birthday. Professor Brian Moore from Cambridge University gave a guest speech, and most CAHR researchers presented their current projects with talks or posters. Afterwards all guests, colleagues and friends were invited to test various special drinks at the cocktail party in the 'CAHR bar'.

The interest amongst foreign visiting students in short-term stays continues to be high, and we have a satisfactory intake of foreign and Danish students in our international two-year MSc programme in Engineering Acoustics: this year we 'house' twenty-four foreign master students. We gratefully acknowledge that three of the foreign students in the programme are receiving a DTU Student Sponsorship (DSS sponsorship) financed by the companies Widex, Oticon, and Oticon Polska.

Mogens Ohlrich    Finn Jacobsen    Torsten Dau


STAFF

Head of Acoustic Technology
Finn Jacobsen, MSc, PhD, Dr. Techn., Associate Professor

Head of Hearing Systems, Speech and Communication, and Head of Centre for Applied Hearing Research
Torsten Dau, Dr. rer. nat. habil., Professor

Associate Professors
Finn T. Agerkvist, MSc, PhD
Hans-Heinrich Bothe, Dr. habil., PD
Jonas Brunskog, MSc, PhD, docent
Jörg M. Buchholz, Dr. Ing.
Anders Christian Gade, MSc, PhD
Mogens Ohlrich, BSc, PhD
Torben Poulsen, MSc

Assistant Professors
James M. Harte, BSc, PhD
Cheol-Ho Jeong, MSc, PhD

Scientists and Research Assistants on External Grants
Salvador Barrera-Figueroa, MSc, PhD (DFM)
Thomas Ulrich Christiansen, MSc, PhD
Dimitrios Christoforidis, MSc
Marton Marschall, MSc
Tarmo Saar, MSc

PhD Students
Iris Arweiler, Dipl.-Ing.
Torsten Haaber Leth Elmkjær, MSc (Terma)
Sylvain Favrot, MSc
Morten Løve Jepsen, MSc
Jens Bo Nielsen, MSc
David Pelegrin García, BSc, MSc
Tobias Piechoviak, MSc
Gilles Pigasse, MSc
Sébastien Santurette, MSc
Olaf Strelcyk, MSc
Eric Thompson, BSc, MSc
Sarah Verhulst, BSc, MSc

Industrial PhD Students
Lola Blanchard, MSc (B&O ICEpower)
Helen Connor, MSc (Widex)
Lars Friis, MSc (Widex)
Yu Luan, BSc, MSc (Bang & Olufsen)
Guilin Ma, BSc, MSc (GN ReSound)

Visiting PhD Students
Eleftheria Georganti, University of Patras, Greece
Lars-Göran Sjökvist, Chalmers University, Gothenburg, Sweden
Bastiaan Warnaar, University of Amsterdam, the Netherlands
Yong-Bin Zhang, Hefei University of Technology, China

Visiting Scientists
Steven Greenberg, The Speech Institute, Berkeley, CA, USA
Arturo Orozco Santillán, Universidad Nacional Autónoma de México, Mexico City

Emeritus Professors
Knud Rasmussen, MSc (DFM)

Administrative and Technical Staff
Nadia Jane Larsen, Secretary
Tom A. Petersen, Assistant Engineer
Jørgen Rasmussen, Assistant Engineer
Aage Sonesson, Assistant Engineer
Caroline van Oosterhout, Project Secretary

1. RESEARCH

The groups 'Acoustic Technology' and 'Hearing Systems, Speech and Communication' are parts of the Department of Electrical Engineering at DTU and share research and teaching facilities. The research comprises investigations of the generation, propagation and effects of sound and vibration, as well as auditory signal processing, speech, and perception of sound. The research may involve theoretical analyses, numerical techniques, subjective experiments and advanced measurement techniques.

Acoustic Technology is concerned with structureborne sound, passive and active noise control, outdoor sound propagation, transducer technology (loudspeakers, microphone calibration), and acoustic measurement techniques (sound intensity, beamforming and acoustic holography), as well as room acoustics and building acoustics.

Structureborne Sound

Minimisation of vibroacoustic feedback in hearing aids
Lars Friis
Supervisors: Mogens Ohlrich and Finn Jacobsen

Feedback problems in hearing aids are often caused by vibroacoustic transmission from the loudspeaker to the microphones. The objective of this industrial PhD project has been to examine these mechanisms. This has been approached by developing a vibroacoustic model of a hearing aid. The model combines finite element analysis and traditional methods for modelling acoustics and vibration with an alternative, relatively new method, 'fuzzy structures', which is intended for predicting the vibrations of a deterministic 'master' structure with attached complex 'fuzzy' substructures with partly unknown dynamic properties. The main effect of the fuzzy substructures is to introduce high damping in the response of the master structure. An important part of the fuzzy theory is the modelling of fuzzy substructures attached to the master through a continuous interface. It has been shown that the continuous interface reduces the damping effect of the fuzzy substructures. Furthermore, an experimental method has been developed for estimating the fuzzy parameters of complex substructures. Simulation results of the model of the hearing aid show very good agreement with measurements.

The project is an industrial PhD project carried out in cooperation with the hearing aid manufacturer Widex A/S, with Lars Baekgaard Jensen as the industrial supervisor. The defence takes place in the spring of 2009.

Experimental arrangement for electro-acoustic measurements on a resiliently suspended 'behind the ear' hearing aid.


Modelling structural acoustic properties of loudspeaker cabinets
Yu Luan
Supervisors: Mogens Ohlrich and Finn Jacobsen

The objective of this industrial PhD project is to develop techniques for predicting the structural acoustic response of loudspeaker cabinets of irregular shapes. The purpose is to minimise unwanted audible vibration of the cabinet that may reduce the sound quality of the loudspeaker. The project attempts to develop a model for simulating the structural acoustic properties in the low- to mid-frequency range. Initially, a simple approach has been pursued for examining cross-stiffened rectangular panels and slightly curved shells. This involved the development of an improved 'smearing method' for predicting the natural frequencies of simply supported cross-stiffened rectangular plates. In contrast to Szilard's original method, the improved method takes stiffeners in both the x- and y-direction into account in the calculation of the equivalent bending stiffness. With finite element calculations and measurement results as a reference, the improved method has been found to give a higher accuracy of the natural frequencies than Szilard's method; the prediction errors are approximately halved. Moreover, it has been found that the improved method can be applied to doubly curved rectangular panels with cross-stiffeners. The results show that engineering accuracy can be obtained with relatively limited computational effort. As an example, the figure on the front page shows the calculated modal deformation of a (2, 2) mode of a doubly curved cross-stiffened plate.

The project is an industrial PhD project carried out in cooperation with Bang & Olufsen, with Søren Bech, Mogens Brynning, Lars F. Knudsen and Gert Munch as the industrial supervisors.
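The smearing idea replaces the discrete stiffeners by equivalent orthotropic bending stiffnesses, after which the natural frequencies of a simply supported rectangular plate follow from the classical orthotropic plate formula. A minimal sketch of that last step (the function name, argument values and the way Dx, Dy and H would be obtained are illustrative assumptions, not the improved method described above):

```python
import math

def orthotropic_plate_freqs(a, b, rho_h, Dx, Dy, H, modes):
    """Natural frequencies [Hz] of a simply supported orthotropic plate.
    a, b: plate dimensions [m]; rho_h: mass per unit area [kg/m^2];
    Dx, Dy: equivalent (smeared) bending stiffnesses [N m];
    H: effective torsional stiffness [N m].
    Classical formula:
    w_mn^2 = pi^4/rho_h * (Dx (m/a)^4 + 2H (m/a)^2 (n/b)^2 + Dy (n/b)^4)."""
    freqs = {}
    for (m, n) in modes:
        w2 = (math.pi ** 4 / rho_h) * (
            Dx * (m / a) ** 4
            + 2.0 * H * (m / a) ** 2 * (n / b) ** 2
            + Dy * (n / b) ** 4
        )
        freqs[(m, n)] = math.sqrt(w2) / (2.0 * math.pi)
    return freqs
```

For Dx = Dy = H the formula collapses to the familiar isotropic plate result, which makes a convenient sanity check of an implementation.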

Vibratory strength of machine sources and structural power transmission
Mogens Ohlrich and Tarmo Saar

This ongoing work focuses on a practical characterisation of vibrating machine sources and on simple techniques for predicting the vibratory power that such sources inject into connected receiving structures. The developed characterisation, which specifies the 'terminal power' of a vibrating source, is incorporated and further developed in the project 'VibPower' as the chosen technique for predicting vibratory power transmission from machinery. The vibration of complex machines is mostly measured, and the developed technique makes use of the fact that source characterisation and prediction of power transmission can be simplified if all mobility cross-terms and spatial cross-coupling of source velocities can be neglected in the analysis. For structurally compact machines, however, the influence of cross-coupling may be important, at least at low frequencies. Such cross-coupling is examined in its simplest form for a rotor-type source with two flange mounts; see the figure below.

This work is a continuation of Tarmo Saar's MSc project.

Schematic illustration of a rotor-type machine source in the form of a high-speed 'vacuum-motor' with flange terminals at the front and rear ends.
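When cross-terms are neglected, as described above, the time-averaged power injected at a single terminal coordinate reduces to P = ½|F|²Re{Y}, with F the complex force amplitude and Y the driving-point mobility. A small illustration (the numerical values are assumed for the example, not measured data from the project):

```python
def injected_power(F, Y):
    """Time-averaged power [W] injected by a harmonic point force with
    complex amplitude F [N] into a structure with driving-point
    mobility Y [m/(s N)]: P = 0.5 * |F|^2 * Re{Y}."""
    return 0.5 * abs(F) ** 2 * Y.real

def power_from_phasors(F, v):
    """Equivalent expression from force and velocity phasors:
    P = 0.5 * Re{F * conj(v)}, where v = Y * F."""
    return 0.5 * (F * v.conjugate()).real
```

Only the real part of the mobility carries power; the imaginary part corresponds to reactive (stored) energy, which is why lightly damped receivers accept little power despite large vibration amplitudes.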

Determination of structureborne sound power and reduction of vibrational power transmission (VibPower)
Mogens Ohlrich

The purpose of this project is to develop a source strength characterisation method and design guides for reducing the transmission of structureborne sound from large machines to their foundation structures. In the project a Round Robin Test has been carried out of the source-strength measurement technique based on previous research at Acoustic Technology (AT). The test arrangement comprises an industrial gearbox and a lightweight foundation placed at the laboratory of Machine Design, Tampere University of Technology (TUT). The Round Robin Test was carried out in succession by AT, TUT, and VTT, and the developed source strength technique was used by TUT and VTT in three industrial test cases. Power transmission in rotational coordinates can usually be neglected, and this has been verified by simulation studies conducted by VTT using a large finite element model of a complicated engine-foundation arrangement. It was found that the power in rotational coordinates is less than one percent of the translational power. The project was finished in March 2008.

In addition to the mentioned research institutes the following industrial partners participated: ABB Industry OY, Wärtsilä Marine Finland OY, and Moventas OY, all from Finland. The National Technology Agency of Finland finances this research with contributions from the three companies.

Acoustic Transducers and Measurement Techniques

Time variance of loudspeaker suspension
Finn Agerkvist

The electrodynamic loudspeaker is conventionally described by a lumped parameter model. However, the behaviour of the loudspeaker is in fact much more complicated than this model would suggest. At high levels the loudspeaker becomes nonlinear, and the nonlinearity of the compliance of the suspension is one of the major sources of distortion. Unfortunately the compliance is known to vary with time depending on the signal fed to the loudspeaker. Three loudspeakers, identical except for the type of surround, have been investigated with respect to changes in compliance. Measurements in both the linear and the nonlinear domain have been carried out. At low levels the compliance changes significantly with the level of the measurement signal, and very little memory is observed. In the nonlinear domain the compliance also increases with the level of the measurement signal, but previous results suggesting that the shape of the nonlinearity also changes have not been confirmed by this investigation.
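In the lumped parameter model mentioned above, a time-varying suspension compliance directly shifts the fundamental resonance, since fs = 1/(2π√(Mms·Cms)). A sketch with assumed driver values (not measurements from the project):

```python
import math

def resonance_frequency(Mms, Cms):
    """Fundamental resonance [Hz] of a lumped-parameter driver model:
    f_s = 1 / (2*pi*sqrt(Mms * Cms)); Mms is the moving mass [kg],
    Cms the suspension compliance [m/N]."""
    return 1.0 / (2.0 * math.pi * math.sqrt(Mms * Cms))

# Assumed example values: a 10 % compliance increase lowers the
# resonance by a factor 1/sqrt(1.1), i.e. roughly 5 %.
f_before = resonance_frequency(0.010, 1.0e-3)
f_after = resonance_frequency(0.010, 1.1e-3)
```

This square-root dependence is why even modest signal-dependent compliance changes are measurable as a drift of the resonance peak.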

Creep, a viscoelastic effect in loudspeaker suspension parts
Finn Agerkvist

This project investigates the viscoelastic behaviour of loudspeaker suspension parts, which can be observed as an increase in displacement far below the resonance frequency. The creep effect means that the suspension cannot be modelled as a simple spring. The need for an accurate creep model becomes even greater as the validity of loudspeaker models is extended into the nonlinear domain. Different creep models are investigated and implemented both in simple lumped parameter models and in time domain nonlinear models, and the simulation results are compared with a series of measurements on three versions of the same loudspeaker differing in the thickness and type of rubber used in the surround.

The project is carried out in collaboration with Tymphany A/S.
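One simple descriptive creep model, used here purely as an illustration (the report does not specify which models were compared), lets the effective compliance grow logarithmically as frequency falls below a reference frequency, which reproduces the observed displacement increase far below resonance:

```python
import math

def creep_compliance(f, C0, lam, f_ref):
    """Illustrative logarithmic creep model: effective suspension
    compliance [m/N] at frequency f [Hz], given nominal compliance
    C0 [m/N], a dimensionless creep factor lam (typically a few
    percent), and a reference frequency f_ref [Hz]."""
    return C0 * (1.0 + lam * math.log10(f_ref / f))
```

With lam around 0.05, the compliance (and hence the displacement for a given force) is 5 % larger one decade below the reference frequency, the kind of low-frequency deviation from a simple spring described above.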

Error correction of loudspeakers
Bo R. Petersen
Supervisor: Finn Agerkvist

In order to apply nonlinear compensation in loudspeakers, the compensation algorithm needs very accurate information about the loudspeaker parameters. Some of these parameters have a strong time dependence, and it is therefore important to track them in real time. A system identification algorithm has been developed and tested, and its convergence has been tested with musical signals. In order to help the algorithm track the changes in the suspension compliance, an investigation has been carried out of how the small signal compliance changes with the mechanical and electrical load of the loudspeaker. Another effect in the loudspeaker is that the nonlinear force factor depends on the current in the voice coil; this effect has been investigated in measurements and simulations.

Bo Petersen has been enrolled as a PhD student at Esbjerg Institute of Technology, Aalborg University, with Jens Arnsbjerg as the main supervisor. The thesis was defended successfully on 13 October 2008.

Determination of microphone responses and other parameters from measurements of the membrane velocity and boundary element calculations
Salvador Barrera-Figueroa, Finn Jacobsen, and Knud Rasmussen

Left: the measurement setup; right: acoustic centre of an LS1 microphone. Solid line: experimental estimate; line with square markers: numerical estimate assuming a Bessel-like movement; line with circular markers: numerical estimate obtained assuming a uniform velocity; line with star markers: estimate from the hybrid method.

Normalised amplitude and phase of the velocity of the membrane of a microphone at different frequencies. Solid line: normalised amplitude; dash-dotted line: phase. Left: LS1 microphone; right: LS2 microphone.

Numerical calculations of the pressure, free-field and random-incidence responses of condenser microphones are usually carried out on the basis of an assumed displacement distribution of the microphone diaphragm. The conventional assumption is that the displacement follows a Bessel function, and this assumption is probably valid at frequencies below the resonance frequency. However, at higher frequencies the diaphragm is heavily coupled with the thin air film between the diaphragm and the back plate and with resonances in the back chamber of the microphone. A solution to this problem is to measure the velocity distribution of the membrane by means of a non-contact method, for instance laser vibrometry. The measured velocity distributions can be used together with a numerical formulation such as the boundary element method for estimating the microphone response and other parameters, e.g. the acoustic centre. The velocity distributions of a number of condenser microphones have been measured using a laser vibrometer, and the measured distributions were used for estimating the microphone responses and parameters. The agreement with experimental data is good. The method can be used as an alternative means of validating the parameters of microphones determined by classical calibration techniques.
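The Bessel-function assumption mentioned above corresponds, for a stretched circular membrane of radius a fixed at the rim, to a deflection shape proportional to J0(kr) − J0(ka). A self-contained sketch (the series evaluation of J0 and the parameter values are illustrative, not part of the measurement method):

```python
import math

def bessel_j0(x, terms=30):
    """Series evaluation of the Bessel function J0, adequate for the
    small arguments (k*a well below resonance) used here."""
    s = 0.0
    for m in range(terms):
        s += (-1) ** m * (x / 2.0) ** (2 * m) / math.factorial(m) ** 2
    return s

def membrane_deflection(r, a, k):
    """Assumed low-frequency deflection shape of a stretched circular
    membrane of radius a [m] at radius r [m] and wavenumber k [1/m]:
    w(r) proportional to J0(k r) - J0(k a), normalised to 1 at the
    centre and 0 at the rim."""
    j0a = bessel_j0(k * a)
    return (bessel_j0(k * r) - j0a) / (1.0 - j0a)
```

As the text notes, this shape only holds well below the diaphragm resonance; above it, the air-film and back-chamber coupling distort the profile, which is what motivates measuring the velocity distribution directly.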

Relation between the radiation impedance and the diffuse-field response of a condenser microphone
Salvador Barrera-Figueroa, Finn Jacobsen, and Knud Rasmussen

The relation between the diffuse-field response and the radiation impedance of a microphone has been investigated. Such a relation can be derived from classical theory. The practical measurement of the radiation impedance requires a) measuring the volume velocity of the membrane of the microphone, and b) measuring the pressure on the membrane of the microphone. The first measurement is carried out by means of laser vibrometry. The second cannot be implemented in practice. However, the pressure on the membrane can be calculated numerically by means of the boundary element method. In this way a hybrid estimate of the radiation impedance is obtained. The resulting estimate of the diffuse-field response is compared with experimental estimates determined from measurements in a reverberant room and with numerical calculations. The different estimates are in good agreement at frequencies below the resonance frequency. Although the method may not be of great practical utility, it provides a useful validation of the estimates obtained by other means.

Diffuse-field correction of microphones. The thick solid line is the diffuse-field correction determined using reciprocity; the dash-dotted line is the random-incidence correction; the line with circular markers is the random-incidence correction calculated using the boundary element method; the line with star markers is the diffuse-field correction determined from the radiation resistance. Left: LS1 microphone; right: LS2 microphone.

High quality active earphone system
Lola Blanchard
Supervisors: Finn Agerkvist and Finn Jacobsen

In order to improve the sound quality of concha headphones, a study of their current limitations has been carried out. One of the most widely used concha headphones, the iPod 'earbud', has been studied in parallel with the Bang & Olufsen 'A8'. The limitations of the measurement equipment have been examined as well. The target response has been chosen to be the response of a Sennheiser HD 650. The main focus has been on measuring the leak and defining the coupling between the ear and the headphone by means of measurements and modelling. The influence of the back volume has also been investigated, and it has been observed that the vented box headphone does not behave as a vented loudspeaker of normal size. Further investigations should be carried out, especially on the coupling between the front and rear of the loudspeaker.

This industrial PhD project is carried out in cooperation with B&O ICEpower, with Lars Brandt Rosendahl Hansen and Anders Røser Hansen as the industrial supervisors.

New strategies for feedback suppression in hearing instruments
Guilin Ma
Supervisors: Finn Jacobsen and Finn Agerkvist

Audible feedback is among the most prominent problems with hearing instruments. Unchecked feedback can lead to system instability and cause very unpleasant whistling or howling. The objective of this industrial PhD project is to examine and develop new ways of improving feedback suppression techniques. Two aspects have been investigated in 2008. The first concerns modelling the physical feedback path. The feedback path is subject to dramatic environmental changes, for instance when the user picks up a phone, and is therefore very difficult to model. A new model based on reflection assumptions has been proposed, and this model shows much better accuracy in modelling measured dynamic feedback paths. The second aspect concerns feedback adaptation. Feedback cancellation suffers from correlation problems, and therefore on-line estimation of the feedback path is problematic. A new approach for decorrelating the signals has been proposed, and it shows excellent performance in a first test. The matter will be investigated further in 2009.

The project is carried out in cooperation with GN ReSound, with Fredrik Gran as the industrial supervisor.

Danish primary laboratory of acoustics (DPLA)

Knud Rasmussen and Salvador Barrera-Figueroa

DPLA was established in 1989 by the Agency for Development of Trade and Industry as a cooperation between Brüel & Kjær Sound and Vibration and Acoustic Technology, DTU. DPLA is responsible for maintaining and disseminating the basic units of sound pressure (the pascal) and acceleration (m/s²) in Denmark. The associated research and international cooperation is mainly performed by DTU. International cooperation is accomplished through meetings within EURAMET, the International Electrotechnical Commission (IEC) and the Consultative Committee for Acoustics, Ultrasound and Vibration (CCAUV) under BIPM, and by participating in common projects organised by these bodies. By the end of 2005 the responsibilities and activities of Acoustic Technology were transferred to Danish Fundamental Metrology (DFM), located in building 307 on the DTU campus. This was formally accepted by DANAK-Metrology in 2007. However, close research cooperation on acoustic metrology and measurement techniques is maintained between DFM and Acoustic Technology.

CCAUV.A-K4 key comparison

Knud Rasmussen and Salvador Barrera-Figueroa

An international key comparison on free-field reciprocity calibration of type ½” laboratory standard microphones in the frequency range from 3 to 31 kHz was agreed on in 2006. DPLA acts as the pilot laboratory, with nine participating countries (Brazil, Denmark, France, Germany, Great Britain, Japan, Korea, Mexico and the USA). Two microphones of type B&K 4180 have been circulating among the participants since the end of February 2007. Great Britain withdrew its participation during 2007, and in 2008 the USA also withdrew. A draft report was discussed among the participants at the CCAUV meeting in Paris in October.

Free-field comparison calibration of WS1 and WS2 microphones

Knud Rasmussen and Salvador Barrera-Figueroa

Free-field reciprocity calibration of microphones is very time consuming, and thus a secondary calibration technique based on comparison with the primary reference microphones has been developed. A small broadband loudspeaker is used as the sound source. A ¼” microphone placed close to the centre of the source is used to monitor the sound signal. The transfer function between this monitor microphone and the microphone under test is determined, with the latter placed on axis about 70 cm from the source. To improve the accuracy the standard procedure is based on comparison with three reference microphones, all calibrated by the reciprocity technique. This technique was approved by DANAK and EURAMET in 2008. The uncertainty of the comparison calibration is estimated to be about 0.15 dB in the frequency range 1 to 30 kHz.
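The principle of such a comparison calibration can be sketched as follows. This is a minimal illustration, not the accredited DPLA procedure, and all numerical values in the example are hypothetical.

```python
import numpy as np

def comparison_sensitivity(H_ref, H_test, S_ref):
    """Estimate the sensitivity of a test microphone by comparison.

    H_ref, H_test: complex transfer functions from the monitor microphone
    to the reference and the test microphone, measured at the same position.
    S_ref: known free-field sensitivity of the reference microphone (V/Pa).

    Because both microphones are exposed to the same sound pressure at the
    measurement position, the sensitivity ratio equals the ratio of the
    two transfer functions.
    """
    return S_ref * H_test / H_ref

# Averaging over three reciprocity-calibrated reference microphones, as in
# the standard procedure (hypothetical values at one frequency):
S_refs = [11.2e-3, 11.5e-3, 11.3e-3]                  # V/Pa
H_refs = [1.00 + 0.10j, 1.03 + 0.10j, 1.01 + 0.11j]   # monitor -> reference
H_test = 1.10 + 0.11j                                  # monitor -> test mic
S_test = np.mean([comparison_sensitivity(Hr, H_test, Sr)
                  for Hr, Sr in zip(H_refs, S_refs)])
```

In practice the transfer functions are of course frequency dependent, and the averaging over three references reduces the influence of the individual reference calibrations.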

The uncertainty in intensity-based sound power measurements

Arturo Orozco Santillán and Finn Jacobsen

The sound power emitted by a source provides the most practical and general description of its acoustic radiation. Sound intensity measurements make it possible to determine the sound power of a source in situ, even in the presence of other sources. However, the fact that intensity-based sound power measurements can take place under widely different conditions makes it extremely difficult to evaluate the resulting measurement uncertainty. The general objective of this work was to analyse the effect of the most common sources of error on sound power determination based on intensity measurements. The sources of error include the orientation of the measurement surface, the spatial sampling, phase mismatch, the finite difference error, scattering and diffraction, the projection error, the presence of an extraneous source, and reverberation. A theoretical analysis has been carried out based on computer simulations, which showed that phase mismatch is one of the most significant sources of error. Experimental data obtained under extreme measurement conditions supplement the theoretical results.
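The influence of phase mismatch can be illustrated with the standard approximation for the relative bias error of a p-p intensity probe, e ≈ φ_pe/(kΔr)·10^(δ_pI/10), where δ_pI is the pressure-intensity index of the measurement. The parameter values in the example below are hypothetical.

```python
import numpy as np

def intensity_bias_error(phi_pe_deg, delta_r, f, delta_pI_dB, c=343.0):
    """Relative bias error of a p-p intensity probe due to phase mismatch.

    phi_pe_deg : residual phase mismatch of the probe (degrees)
    delta_r    : microphone separation (m)
    f          : frequency (Hz)
    delta_pI_dB: pressure-intensity index of the measurement (dB)
    """
    k = 2 * np.pi * f / c
    phi = np.deg2rad(phi_pe_deg)
    return phi / (k * delta_r) * 10 ** (delta_pI_dB / 10)

# A 0.05 degree mismatch with a 12 mm spacer and a 10 dB pressure-intensity
# index gives a large bias at 125 Hz but a much smaller one at 1 kHz:
e_low = intensity_bias_error(0.05, 0.012, 125, 10)
e_high = intensity_bias_error(0.05, 0.012, 1000, 10)
```

The 1/(kΔr) factor shows why phase mismatch dominates at low frequencies, and the 10^(δ_pI/10) factor why it dominates in reactive or reverberant fields.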

Near field acoustic holography based on the equivalent source method and pressure-velocity transducers

Yong-Bin Zhang

Supervisor: Finn Jacobsen

The advantage of using the normal component of the particle velocity rather than the sound pressure in the hologram plane as the input of conventional spatial Fourier transform-based near field acoustic holography has recently been demonstrated. This investigation has examined whether there might be a similar advantage in using the particle velocity as the input of near field acoustic holography based on the equivalent source method. Error sensitivity considerations indicate that this method is less sensitive to measurement errors when it is based on particle velocity input data than when it is based on measurements of sound pressure, and this is confirmed by a simulation study and by experimental results. A method that combines pressure- and particle velocity-based reconstructions in order to distinguish between contributions to the sound field generated by sources on the two sides of the hologram plane has also been examined.

Near field measurement with a pressure-velocity transducer in the large anechoic room.


Noise and Noise Control

Combined health effects of particles and noise

Jonas Brunskog and Torben Poulsen

Air pollution is an important risk factor for cardiopulmonary disease. There is also evidence of a causal relation between noise exposure and health effects such as hypertension and ischemic heart disease. However, although particles and noise both originate mostly from traffic, no study has examined their combined effects. The methodology of this project is based on controlled studies with well defined exposure and medical examination combined with determination of measurable physiological responses and reported annoyance levels. Healthy volunteers will be exposed to low particle concentration and background noise, high particle concentration and low background noise, low particle concentration and high traffic noise, and high particle concentration and traffic noise. The effects of particle, noise, and combined particle and noise exposure on heart rate variability, stress hormones and inflammatory markers in blood will be examined. Cognitive and psychological experiments will also be carried out.

Active noise cancellation headsets<br />

Torsten H. Leth Elmkjær<br />

Supervisor: Finn Jacobsen<br />

Torsten Elmkjær’s PhD study has been motivated by the extremely high sound pressure levels<br />

encountered onboard airborne military platforms. The suggested solution is hearing protectors with active<br />

noise control based on a hybrid multi-channel combination <strong>of</strong> feedforward and feedback control.<br />

Active noise reduction systems based on feedforward control are limited by lack <strong>of</strong> coherence between<br />

the reference signals and the error signals to be minimised, and active noise reduction systems based on<br />

feedback control are limited by time delays; therefore a considerable part <strong>of</strong> the research has been focused<br />

on examining these limitations. Another issue studied in this work is the development <strong>of</strong> adaptive<br />

algorithms that can handle non-Gaussian “heavy tailed” impulsive signals.<br />

The project was successfully defended on 7 November <strong>2008</strong>.<br />
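A feedforward controller of the kind mentioned above is typically adaptive. The sketch below shows a minimal single-channel filtered-x LMS (FxLMS) loop on synthetic signals; the primary and secondary paths are hypothetical short FIR filters, and the secondary-path model is assumed perfect for simplicity, which is of course not the case in a real headset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primary path P (reference to error mic) and secondary
# path S (loudspeaker to error mic); Shat is the model of S used to
# filter the reference signal.
P = np.array([0.0, 0.8, 0.4])
S = np.array([0.0, 0.6, 0.2])
Shat = S.copy()

N = 8000
n = np.arange(N)
x = np.sin(2 * np.pi * 0.05 * n) + 0.05 * rng.standard_normal(N)  # reference
d = np.convolve(x, P)[:N]            # disturbance at the error microphone
xf = np.convolve(x, Shat)[:N]        # filtered reference

L, mu = 16, 0.005                    # adaptive filter length and step size
w = np.zeros(L)
xbuf = np.zeros(L)                   # recent reference samples
fbuf = np.zeros(L)                   # recent filtered-reference samples
ybuf = np.zeros(S.size)              # recent anti-noise samples
e = np.zeros(N)

for i in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                     # anti-noise output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[i] = d[i] + S @ ybuf           # residual at the error microphone
    fbuf = np.roll(fbuf, 1); fbuf[0] = xf[i]
    w -= mu * e[i] * fbuf            # FxLMS coefficient update
```

After convergence the residual power at the error microphone is far below the uncontrolled disturbance power; with an imperfect secondary-path model, or with the impulsive signals studied in the project, convergence is considerably harder.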

Sound quality evaluation model of air cleaners

Cheol-Ho Jeong

People in a quiet enclosed space expect a calm sound at the low operational levels used for routine cleaning of the air, whereas at high operational levels a powerful yet not annoying sound is desired. A model for evaluating the sound quality of air cleaners of the mechanical type has been developed based on objective and subjective analyses. Signals from various air cleaners were recorded and edited by increasing or decreasing the loudness in three loudness bands. Subjective tests using the edited sounds were conducted both with the semantic differential method and with the method of successive intervals. Two major characteristics, performance and annoyance, were factored out by principal component analysis. The subjective feeling of performance was related to both low and high frequencies, whereas the annoyance was related to high frequencies. Annoyance and performance indices of air cleaners were modelled from the subjective responses and the measured sound quality metrics (loudness, sharpness, roughness, and fluctuation strength) using the multiple regression method.

A comparison of tested and predicted subjective scores. (a) Performance; (b) annoyance.

This project has been carried out in collaboration with Korea Advanced Institute of Science and Technology (KAIST) and Woongjin Coway Co. Ltd.
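Such a regression model maps the four metrics to an index through fitted weights. The sketch below fits entirely made-up data with ordinary least squares; neither the data nor the weights are those of the actual model.

```python
import numpy as np

# Hypothetical sound quality metrics for 20 recorded air-cleaner signals
# (loudness in sone, sharpness in acum, roughness in asper, fluctuation
# strength in vacil) and mean subjective annoyance scores.
rng = np.random.default_rng(1)
metrics = rng.uniform([2, 1, 0.1, 0.05], [12, 3, 0.6, 0.3], size=(20, 4))
true_w = np.array([0.35, 0.8, 2.0, 1.5])          # illustrative weights only
annoyance = 1.0 + metrics @ true_w + 0.05 * rng.standard_normal(20)

# Multiple linear regression: annoyance ~ b0 + b . metrics
X = np.column_stack([np.ones(20), metrics])
coef, *_ = np.linalg.lstsq(X, annoyance, rcond=None)
predicted = X @ coef
```

The quality of such a model is judged by how well the predicted indices correlate with the listening-test scores, as in the figure referred to above.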

Room Acoustics

Increase of voice level and speaker comfort in lecture rooms

Jonas Brunskog and Anders Christian Gade

Teachers often suffer from health problems related to their voice, and these problems are related to the acoustics of the lecture rooms. However, there is a lack of studies linking room acoustic parameters to the voice of the speaker. The main goals of this pilot study were to investigate whether objectively measurable parameters of the rooms can be related to an increase of the voice sound power produced by speakers and to the speakers’ subjective judgements about the rooms. The sound power level produced by six speakers was measured in six different rooms with different size, reverberation time and other physical attributes. Objective room acoustic parameters, including reverberation time and room gain, were measured in the same rooms, and questionnaires were handed out to persons with experience talking in the rooms. Significant changes in the sound power produced by the speakers were found between the rooms. It was also observed that the changes mainly have to do with the size of the room and with the gain produced by the room. To describe this quality a new room acoustic quantity called ‘room gain’ is proposed.

Initial work in this project was carried out by two students, Gaspar Payá Bellester and Lilian Reig Calbo, in special courses.

Statistical analysis and modelling of room acoustics

Eleftheria Georganti

Supervisor: Finn Jacobsen

The main concern of this work has been to examine the statistical properties of measured room transfer functions across frequency and space inside typical rooms. The relationship between the standard deviation and the source-receiver distance has also been examined, and it was shown that beyond the critical distance of the room the spatial standard deviation approaches the result obtained for the deviation across frequency. Simplified versions of the transfer functions were obtained by complex smoothing using one-third octave analysis. The idea was that complex smoothing might lead to simplifications that could be useful for audio applications such as room compensation, auralisation and dereverberation techniques.
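Complex smoothing averages the complex transfer function over a fractional-octave window around each frequency, so that both magnitude and phase are simplified. The helper below is a minimal sketch of the idea, not the exact procedure used in the project.

```python
import numpy as np

def complex_smooth(freqs, H, fraction=3):
    """Fractional-octave complex smoothing of a transfer function.

    freqs    : array of positive frequencies (Hz)
    H        : complex transfer function sampled at freqs
    fraction : 3 gives one-third octave smoothing

    For each bin, the complex values of H within a window of width
    1/fraction octave centred on that bin are averaged.
    """
    half = 2.0 ** (1.0 / (2 * fraction))
    Hs = np.empty_like(H)
    for i, f0 in enumerate(freqs):
        sel = (freqs >= f0 / half) & (freqs <= f0 * half)
        Hs[i] = H[sel].mean()
    return Hs
```

Averaging the complex values, rather than the magnitudes, preserves an (averaged) phase response, which is what makes the result usable for equalisation and dereverberation.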

The sound power of a source in a reverberation room

Finn Jacobsen

Spatial standard deviation (left) and space-frequency standard deviation (right) of the sound power of a source in a small lightly damped room; measured results compared with Eqs. (4) and (11).

It is well known that the sound power output of a source emitting a pure tone or a narrow band of noise varies significantly with its position in a reverberation room, and even larger variations occur between different rooms. The resulting substantial uncertainty in measurements of sound power, as well as in predictions based on knowledge of sound power, is one of the fundamental limitations of ‘energy acoustics’. The existing theory for this phenomenon is fairly complicated and has only been validated rather indirectly. A simpler theory has been developed that gives predictions in excellent agreement with the established theory, and the results have been validated both experimentally in a number of rooms and by finite element calculations. The results confirm the phenomenon known as ‘weak Anderson localisation’: an increase of the reverberant part of the sound field at the source position. The numerical calculations have been carried out by Alfonso Rodríguez Molares, University of Vigo, Spain.

The incident sound intensity in a reverberation room

Finn Jacobsen

The conventional method of measuring the transmission loss of a partition relies on the assumption that the sound field in the source room is diffuse and on the classical relation between the incident sound power per unit area and a spatial average of the sound pressure in the source room; and it has always been considered impossible to measure the sound power incident on a wall directly. Moreover, whereas it has been established for many years that one should use the ‘Waterhouse correction’ for determining the transmitted sound power in the receiving room, opinions vary as to whether one should use any correction for determining the incident sound power in the source room. Theoretical analysis shows that one should indeed use such a correction in the source room. In order to validate this theory a method of measuring the incident sound power based on ‘statistically optimised near field acoustic holography’ and the sound pressure on the wall has been developed.
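One common form of the Waterhouse-corrected estimate of the power incident on a wall is W = (⟨p²⟩/4ρc)·S_wall·(1 + S_room·λ/8V). The function below is an illustrative sketch of this relation only, not of the holography-based method developed in the project.

```python
import numpy as np

def incident_power(p2_mean, S_wall, V, S_room, f, rho=1.21, c=343.0,
                   waterhouse=True):
    """Sound power incident on a wall of area S_wall in a diffuse field.

    p2_mean : spatially averaged mean square pressure in the source room (Pa^2)
    V       : room volume (m^3)
    S_room  : total room surface area (m^2)
    f       : frequency (Hz)

    The classical diffuse-field estimate p2/(4 rho c) * S_wall is multiplied
    by the Waterhouse term (1 + S_room * lambda / (8 V)) when requested.
    """
    W = p2_mean / (4 * rho * c) * S_wall
    if waterhouse:
        lam = c / f
        W *= 1 + S_room * lam / (8 * V)
    return W

# The correction matters at low frequencies (illustrative room values):
w_lo = incident_power(1.0, 10.0, 200.0, 220.0, 100.0)
w_hi = incident_power(1.0, 10.0, 200.0, 220.0, 4000.0)
w_none = incident_power(1.0, 10.0, 200.0, 220.0, 100.0, waterhouse=False)
```

The correction term tends to one at high frequencies, which is why the debate described above mainly concerns the low-frequency bands.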

New weighting factors for estimating Sabine absorption coefficients

Cheol-Ho Jeong

In practice, a perfectly diffuse sound field cannot be achieved in a reverberation room, and therefore absorption coefficients measured in such rooms deviate from the theoretical random incidence absorption coefficients. The actual angular distribution of the incident acoustic energy should be taken into account. This study tried to improve the agreement by weighting the random incidence absorption coefficient according to the actual incident sound energy. The angular distribution of the incident energy density was simulated using the beam tracing method for various room shapes and source positions. The proposed angle-weighted absorption coefficients agree satisfactorily with the measured absorption data determined using the reverberation room method. At high frequencies and for large samples the averaged weighting corresponds well with the measurements, whereas at low frequencies and for small panels a relatively flat distribution agrees better.
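The idea can be written as a weighted Paris average, α_w = ∫ w(θ)·α(θ)·sinθ·cosθ dθ / ∫ w(θ)·sinθ·cosθ dθ, which reduces to the classical random incidence (Paris) formula for w(θ) = 1. The sketch below evaluates it numerically; the weighting function in the example is an arbitrary illustration, not one of the simulated distributions.

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule, kept local for NumPy-version independence
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def weighted_absorption(alpha, weight=None, n=2000):
    """Angle-weighted statistical absorption coefficient.

    alpha  : callable alpha(theta), theta in [0, pi/2]
    weight : callable w(theta) for the angular distribution of incident
             energy; None reproduces the classical Paris formula.
    """
    theta = np.linspace(0.0, np.pi / 2, n)
    w = np.ones_like(theta) if weight is None else weight(theta)
    kern = w * np.sin(theta) * np.cos(theta)
    return _trapz(alpha(theta) * kern, theta) / _trapz(kern, theta)

# Example: an absorber whose absorption grows with incidence angle, under
# a field with less grazing-incidence energy (illustrative only)
alpha = lambda th: 0.2 + 0.6 * np.sin(th) ** 2
a_paris = weighted_absorption(alpha)
a_weighted = weighted_absorption(alpha, weight=np.cos)
```

In this example the weighted coefficient is lower than the Paris value because the weighting removes energy at the grazing angles where the sample absorbs most.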

A comparison of absorption coefficients (e = 2.4 m): measurement, absorption by the averaged weighting function, the possible range of the weighted absorption coefficient, and the random incidence absorption.

Reconstruction of sound pressures in an enclosure using the phased beam tracing method

Cheol-Ho Jeong

Source identification in an enclosure is not an easy task because of wave interference and wall reflections, in particular at mid and high frequencies. A phased beam tracing method has been applied to the reconstruction of source pressures inside an enclosure at medium frequencies. First, the surfaces of an extended source were divided into small segments. From each source segment one beam was projected into the field, and all emitted beams were traced. Radiated beams from the source reach the array sensors after travelling various paths. Collecting all the pressure histories at the field points, source-observer relations can be constructed in matrix-vector form for each frequency. By multiplying the measured field data with the pseudo-inverse of the calculated transfer function, one obtains the distribution of the source pressure. Omnidirectional spherical and cubic sources in a rectangular enclosure were taken as examples in the simulation tests. The reconstruction error was investigated by Monte Carlo simulation. When the source was reconstructed by the new method it was shown that the sound power spectrum of the source in an enclosure could be estimated with precision.
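The inversion step can be sketched as follows. Here the transfer matrix is random for the sake of illustration, whereas in the project it is computed by phased beam tracing from each source segment to each field point.

```python
import numpy as np

rng = np.random.default_rng(2)

# Transfer matrix G from N source segments to M field points at one
# frequency (random, purely illustrative).
M, N = 40, 10
G = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# "True" complex source pressures and the resulting (slightly noisy)
# field pressures measured by the array.
p_src = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 1e-3 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
p_field = G @ p_src + noise

# Least-squares reconstruction with the Moore-Penrose pseudo-inverse
p_rec = np.linalg.pinv(G) @ p_field
```

With more field points than source segments the system is overdetermined and the pseudo-inverse gives a least-squares solution; in practice the conditioning of G, studied in the project by Monte Carlo simulation, governs how strongly measurement noise is amplified.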

A drawing for reconstructing source information using phased beam tracing with an array of field points; the sketch shows the direct ray and the first-, second- and third-order reflections with their amplitudes and reflection coefficients.

Sound intensity over an absorber in a reverberation room

Cheol-Ho Jeong

Distributions of the average incident intensity over the sample, as a function of incidence angle, at 250 Hz, 500 Hz, 1 kHz, 2 kHz and 4 kHz.

When absorption coefficients of absorbers are measured in reverberation rooms, a perfectly diffuse field cannot be achieved because of the large absorbing area. Therefore absorption coefficients measured in reverberation rooms deviate from random incidence absorption coefficients, which assume a uniform intensity distribution over the absorber. If the actual distributions can be identified, it may be possible to compensate for them. This study is concerned with sound intensity distributions over an absorber sample determined by means of numerical simulations. A phased beam tracing method has been used to estimate the sound pressures at two points near the sample, from which the sound intensity incident on the sample follows. The incident intensity distribution is affected by room properties such as the total absorption area, the room geometry and volume, and the locations of the source-receiver pair. Assuming that one surface is fully covered by an absorber, the effects of the observation position and the absorption coefficient of the sample on the incident intensity have been investigated. Near a corner of the sample the sound intensity at grazing incidence is limited, and low absorption coefficients make the incident intensity uniform.

Speakers’ comfort and increase of their voice level in lecture rooms

David Pelegrin García

Supervisors: Jonas Brunskog and Torben Poulsen

Most studies of classroom acoustics have focused on the listeners’ point of view. In logopedics and phoniatrics it has been stated that the teachers’ voice health problems are a major concern, and studies have shown that every teacher has on average two days of absence every year because of voice problems. To improve the teaching environment from an acoustic point of view it is necessary to get a better understanding of the relationship between a speaker and the physical environment. It is assumed that variations in the voice power level are a good indicator of the vocal effort experienced by the teachers. An auditory virtual environment has been developed in order to carry out experiments on the influence of the acoustic environment on the voice power. The setup consists of twenty-nine loudspeakers placed in a spherical arrangement in a damped room. The system, shown in the figure below, makes it possible to recreate the acoustics of any classroom in real time. First, the impulse response with directional information is obtained. For each reflection the level, delay, spectral distribution and incidence angle are specified.

Overview of the laboratory setup.

Building Acoustics

Flanking transmission in lightweight building structures

Jonas Brunskog

Lightweight building structures are of great interest in the building industry, since the tendency is toward a more industrialised building production and since low weight means cheaper foundations. However, lightweight building structures differ in many acoustic respects from traditional building structures such as concrete and brick walls; they contain more layers and materials and are often periodic in nature. This means that the usual building acoustic models based on statistical energy analysis (SEA) do not work properly for such structures. Instead, analytical wave or modal based methods should be used to analyse the underlying physics. The purpose of this work is to improve simplified approaches such as SEA as specified in the standard EN 12354. An ad hoc working group has been established, and this project should give input to the working group.

The work is carried out in cooperation with Lars-Göran Sjökvist, SP Trätek, Sweden, and Hyuck Chung, University of Auckland, New Zealand.


State of the art of acoustics in wooden buildings

Jonas Brunskog

A Swedish consortium was initiated by SP Trätek in 2007 in order to improve the general competence in the acoustics of wooden buildings. The consortium consists of all national R&D performers, leading companies in the building, building materials and wood sectors, and leading consultants. The first result is a report that includes a literature survey, an analysis and identification of the industrial needs for producing wooden buildings with good acoustic comfort, and a list of the research needed to reach that goal. Wooden constructions have several features that differ from those of concrete and other heavy constructions. The weight of a construction is an important parameter for the airborne sound insulation at low frequencies, and therefore wood constructions may have poor sound insulation. Impact sound from people walking is a common sound insulation problem for lightweight floors, and flanking transmission is another problem. Noise from installations is often dominated by low frequencies, and therefore special consideration is needed for wooden constructions. Existing models, e.g. the European prediction standard EN 12354, are best suited for heavy constructions. The costly process of using test buildings is common even though the results are not useful for slightly different building constructions. Hence, there is a need for developing prediction tools.

This work was carried out together with a large group of Swedish scientists, acoustic consultants and industry representatives.

Structural sound transmission and attenuation in lightweight structures

Lars-Göran Sjökvist

Supervisors: Jonas Brunskog and Finn Jacobsen

For some years it has been popular to build large houses with lightweight building techniques, especially with wood. This project has examined features related to sound insulation in lightweight buildings. The vibration pattern has been measured for a junction, and a simplified lightweight structure has been analysed theoretically. Both studies focused on the attenuation rate, i.e. the rate at which the vibrations decrease. The results show that the attenuation rate is high in the direction across the beams, whereas in the direction along the beams there is not much attenuation. The attenuation rate depends on several parameters, in particular the vibration frequency and the structural stiffness. Very high attenuation is observed for wavelengths longer than the distance between the beams. It had been implied that the vibration level decreases rapidly away from the impact source in a wooden lightweight structure; the present work showed that no such decrease could be proved with good significance in the direction along the beams.
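The attenuation rate itself can be estimated as the slope of a straight-line fit of vibration level against distance from the excitation point. The sketch below uses made-up level data for the two directions, not the measured results.

```python
import numpy as np

def attenuation_rate(distance, level_dB):
    """Estimate the vibration attenuation rate (dB/m) from levels measured
    at increasing distance from the excitation point (simple linear fit)."""
    slope, _intercept = np.polyfit(distance, level_dB, 1)
    return -slope  # positive value = decay in dB per metre

# Illustrative data: fast decay across the beams, little decay along them
d = np.array([0.2, 0.4, 0.6, 0.8, 1.0])       # distance (m)
across = 80 - 12.0 * d                         # level across the beams (dB)
along = 80 - 0.8 * d                           # level along the beams (dB)
```

Real measured levels scatter around such a line, which is why the project assesses the statistical significance of the fitted decay rather than taking the slope at face value.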

The plate seen from above. The position of the beams is marked with black lines and the excitation point is marked with a white '+' sign. The vibration level in the 5000 Hz one-third octave band is displayed in grey scale, where darker means more vibration; see the bar at the right side of the figure.


MSc Projects

The influence of material properties and receiver suspension shapes in hearing aids

Xinyi Chen

Supervisors: Finn Jacobsen and Mogens Ohlrich

Rubber is an important material with versatile engineering applications. When rubber is used for vibration isolation the dynamic elastic modulus and the damping of the material are of concern. This project studied the rubber suspension in a hearing aid. The suspension holds the loudspeaker and should prevent the transmission of vibrations to the shell of the hearing aid. The frequency range of concern was broad (up to the kilohertz range). Neither the static nor the dynamic modulus and damping of the material are usually known. The rubber suspension was examined both from a material point of view and from a structural point of view. The dynamic elastic modulus (here Young's modulus) and the damping were determined by different methods. Unfortunately, the results deviated from each other because of an inherent feature of one of the methods and because of experimental uncertainty. However, a general conclusion is that both the Young’s modulus and the damping of the rubber suspension are frequency dependent in the frequency range of concern.

The project was carried out in cooperation with Oticon with Martin Larsen and Søren Halkjær as the local supervisors.

Properties of a bone conduction transducer

Gorm Dannesboe

Supervisor: Finn Jacobsen

This project examined a bone conduction microphone for use in headset applications for Bluetooth and for radio, both by theoretical modelling and through experiments. Simulations were made by modelling the bone conduction microphone with electroacoustic analogous circuits. The response of the head to vibrations generated by speech was determined experimentally. The bone conduction microphone was improved by modifying several aspects of the design, and the frequency response and the attenuation of external noise were tested on prototypes of the new design.

The project was carried out in cooperation with Invisio.

Microphone calibration<br />

Marta Díaz Ben<br />

Supervisor: Finn Jacobsen<br />

Working standard microphones are usually calibrated by comparison with a laboratory standard<br />

microphone with a known sensitivity. A comparison calibration involves exposing the two microphones<br />

to the same sound pressure either simultaneously or sequentially. The purpose <strong>of</strong> this project was to<br />

examine the uncertainty <strong>of</strong> a calibration setup with the two microphones mounted face to face very<br />

close to each other, and determine the influence <strong>of</strong> the distance between the microphone diaphragms,<br />

the position <strong>of</strong> the source, possible reflections in the anechoic room etc. The theoretical part <strong>of</strong> the work<br />

involved development <strong>of</strong> a boundary element model for calculating the sound pressure on the diaphragms<br />

<strong>of</strong> the two microphones.<br />

This project was carried out in cooperation with Danish Fundamental Metrology (DFM), with<br />

Salvador Barrera Figueroa as the supervisor at DFM.<br />
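In its simplest form, the comparison principle reduces to a ratio of output voltages: with both microphones exposed to the same sound pressure, the unknown sensitivity follows from the known one. A minimal sketch (illustrative values only, not the actual DFM calibration procedure):<br />

```python
import math

def comparison_calibration(m_ref_db, u_ref, u_unknown):
    """Estimate the sensitivity of an unknown microphone by comparison.

    Both microphones are assumed to be exposed to the same sound
    pressure, so the ratio of their output voltages equals the ratio
    of their sensitivities.

    m_ref_db  -- sensitivity of the reference microphone, dB re 1 V/Pa
    u_ref     -- output voltage of the reference microphone (V)
    u_unknown -- output voltage of the unknown microphone (V)
    """
    return m_ref_db + 20.0 * math.log10(u_unknown / u_ref)

# Example: reference at -26.0 dB re 1 V/Pa; the unknown microphone
# delivers twice the voltage, i.e. about 6 dB higher sensitivity.
print(comparison_calibration(-26.0, 1.0e-3, 2.0e-3))
```

In practice the measured voltage ratio must be corrected for exactly the effects studied in the project: the diaphragm separation, the source position and reflections in the anechoic room.<br />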

Sound radiation from a loudspeaker cabinet using the boundary element method<br />

Efrén Fernandez Grande<br />

Supervisor: Finn Jacobsen<br />

Ideally, the walls <strong>of</strong> a loudspeaker cabinet are completely rigid. However, in reality, the cabinet<br />

is excited by the vibration <strong>of</strong> the loudspeaker units and by the sound pressure inside the cabinet. The<br />

radiation <strong>of</strong> sound caused by such vibrations can in some cases become clearly audible. The purpose <strong>of</strong><br />

this project was to provide a tool for evaluating the contribution from the cabinet to the overall sound<br />

radiated by a loudspeaker. The specific case <strong>of</strong> a B&O Beolab 9 early prototype was investigated,<br />


because an influence of sound from the cabinet had been reported. The radiation from the cabinet was<br />

calculated using the boundary element method. The analysis examined both the frequency domain and<br />

the time domain. A significant influence of the cabinet was detected, which became especially<br />

apparent during the sound decay process.<br />

This project was carried out in cooperation with Bang & Olufsen.<br />

Room acoustic analysis <strong>of</strong> theatres with actors performing in the audience area<br />

Berti Gil Reyes<br />

Supervisors: Jonas Brunskog and Cheol-Ho Jeong<br />

The purpose <strong>of</strong> this project was to investigate the optimum position and orientation <strong>of</strong> performers<br />

in order to improve the acoustics <strong>of</strong> theatres using simulations based on geometrical acoustics. Five<br />

types of actual stall settings, covering possible configurations of drama theatres, were investigated.<br />

The Gladsaxe theatre was modelled using the Odeon room acoustic model. Investigation <strong>of</strong> the speech<br />

directivity pattern led to the conclusion that an optimum speech aperture angle for a speaker is 50°<br />

with respect to the frontal direction, irrespective <strong>of</strong> frequency and elevation angle. Effects <strong>of</strong> actors’<br />

movement and orientation were also examined through a number <strong>of</strong> computer models.<br />

Jens Holger Rindel, Odeon, served as external supervisor.<br />

Two configurations, a U-shaped and an arena setting for drama performances.<br />

Acoustic conditions in small musical venues<br />

Hallur Johannessen<br />

Supervisors: Jonas Brunskog and Torben Poulsen<br />

The acoustic properties <strong>of</strong> three small venues for performances <strong>of</strong> rhythmic music were investigated<br />

using the Odeon room acoustic software. Sound samples from rock and jazz were used for assessment<br />

<strong>of</strong> the simulated modifications <strong>of</strong> acoustic conditions <strong>of</strong> the venues. It was found that the subjective<br />

parameter ‘clearness’ was the most relevant for prediction <strong>of</strong> preference. The sound level <strong>of</strong> the<br />

rock music sample affected the assessment <strong>of</strong> the acoustic conditions. The overall satisfaction <strong>of</strong> the<br />

acoustic conditions increased with the level of the music. No correlation between the subjective ‘clearness’<br />

and the objective ‘clarity’, C80, was found. A change in the integration time from 80 ms to 30 ms<br />

in the calculation <strong>of</strong> clarity, called C30, revealed good correlation with the subjective ‘clearness’.<br />
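The clarity parameter is the ratio in dB of the early to the late energy in the room impulse response; shifting the integration limit from 80 ms to 30 ms turns C80 into C30. The calculation can be sketched as follows (an illustration of the definition, not the software used in the project):<br />

```python
import numpy as np

def clarity(ir, fs, t_early=0.080):
    """Clarity index: 10*log10(early energy / late energy).

    ir      -- room impulse response (array)
    fs      -- sampling frequency in Hz
    t_early -- integration limit in s (0.080 gives C80, 0.030 gives C30)
    """
    n = int(round(t_early * fs))
    early = np.sum(ir[:n] ** 2)
    late = np.sum(ir[n:] ** 2)
    return 10.0 * np.log10(early / late)

# Synthetic exponentially decaying impulse response, T60 = 1 s
fs = 8000
t = np.arange(int(1.5 * fs)) / fs
ir = np.exp(-6.91 * t / 1.0)
print(clarity(ir, fs, 0.080))   # C80
print(clarity(ir, fs, 0.030))   # C30, lower for any decaying response
```

For a purely exponential decay the two measures differ only by a fixed offset; it is with the irregular early reflections of real impulse responses that the shorter integration time changes the ranking of rooms.<br />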

Investigation <strong>of</strong> parameter drift in microtransducers: a study <strong>of</strong> temperature dependence<br />

Julien Jourdan<br />

Supervisor: Finn Agerkvist<br />

The purpose <strong>of</strong> this project was to investigate the electrical and mechanical parameter drift with<br />

state variables in microtransducers in order to determine those that are important to include in an optimised<br />

model. The investigation focused on the drift <strong>of</strong> linear parameters with temperature, and tried to<br />

identify the parameters that can be regarded as constant and the parameters that vary with the temperature.<br />

The latter were modelled. Several heat transfer mechanisms occur in the transducer: conduction,<br />

convection, radiation and heat storage. These mechanisms contribute to the heating <strong>of</strong> the microtransducer,<br />

and therefore the temperature behaviour <strong>of</strong> the microtransducer with frequency was studied for<br />


different input powers and voltages. Based on the results from temperature measurements performed on<br />

a microtransducer, a thermal model was developed.<br />

Loudspeaker suspension<br />

Georgios Kostopoulos<br />

Supervisors: Finn Agerkvist, Mogens Ohlrich and Finn Jacobsen<br />

Many vibrating systems and actuators, including loudspeakers, are weakly nonlinear. Most studies<br />

<strong>of</strong> nonlinearities in loudspeakers indicate that one <strong>of</strong> the important sources is the loudspeaker suspension.<br />

The inner suspension (known as the ‘spider’) provides the main part <strong>of</strong> the restoring force on<br />

the loudspeaker’s diaphragm. This project used a finite element model for studying the linear as well as<br />

the nonlinear behaviour <strong>of</strong> a spider. The results were compared with experimental data. Satisfactory<br />

agreement was obtained both for the linear and for the nonlinear behaviour, indicating that the model<br />

can be used for predicting the properties of a spider. An investigation of the influence of the spider’s<br />

geometrical parameters on its nonlinear behaviour showed that small changes in the construction <strong>of</strong> the<br />

spider or even in the way that the voice coil is attached can cause significant changes in the stiffness<br />

curve <strong>of</strong> this component.<br />

Spherical near field acoustic holography<br />

Guillermo Moreno<br />

Supervisor: Finn Jacobsen<br />

Spherical near field acoustic holography is a recently developed technique that makes it possible<br />

to reconstruct the sound field inside and just outside a spherical microphone array. The purpose <strong>of</strong> this<br />

project was to extend the existing theory for measurements with an acoustically transparent microphone<br />

array to measurements with an array with the microphones mounted on a rigid sphere. A rigid sphere is<br />

somewhat more practical and gives better defined boundary conditions, but in measurements very near<br />

a source there is the potential problem <strong>of</strong> multiple reflections between the sphere and the surface <strong>of</strong> the<br />

source modifying the incident sound field and therefore the reconstruction. Numerical simulations and<br />

measurements were carried out. The results obtained from both simulations and measurements show a<br />

minimal influence of the reflections on the accuracy.<br />

This project was carried out in cooperation with Brüel & Kjær.<br />

[Two plots: sound pressure level in dB re 20 µPa versus frequency in Hz (left) and versus position along the y-axis in m (right).]<br />

Sound pressure level generated by an ‘experimental monopole’ at the centre <strong>of</strong> the sphere, which is 20 cm (left)<br />

and 40 cm (right) from the monopole. Blue dotted line, ‘true’ pressure (measured without the sphere); red dashed<br />

line, reconstructed pressure.<br />

Determination <strong>of</strong> sound insulation based on sound pressure and sound intensity<br />

Lluís Enrique Navarro Muñoz<br />

Supervisor: Finn Jacobsen<br />

The sound insulation <strong>of</strong> partitions is usually determined from sound pressure measurements but<br />

can instead be based on sound intensity measurements. The purpose <strong>of</strong> this project was to compare<br />

these two methods and examine their advantages and disadvantages. A further aim of the work was to

examine the still unsettled issue <strong>of</strong> whether or not the ‘Waterhouse correction’ should be used not only<br />

in the receiving room (as is well established in conventional pressure-based measurements) but also in<br />

the source room. To examine the matter, an attempt was made to change the volume of the source room in a<br />

scale model of two reverberation rooms. Unfortunately, the possible effect of the volume of the source<br />

room could not be detected because <strong>of</strong> unavoidable flanking transmission in the scale model rooms.<br />
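The two measurement principles compared in the project can be summarised by the standard working formulas (a simplified sketch with symbols in the style of ISO 140-3 and ISO 15186-1; the Waterhouse correction under discussion is omitted here):<br />

```python
import math

def r_pressure(Lp1, Lp2, S, A):
    """Pressure-based sound reduction index (ISO 140-3 style).

    Lp1, Lp2 -- average sound pressure levels in the source and
                receiving rooms (dB)
    S        -- area of the partition (m^2)
    A        -- equivalent absorption area of the receiving room (m^2)
    """
    return Lp1 - Lp2 + 10.0 * math.log10(S / A)

def r_intensity(Lp1, LIn, Sm, S):
    """Intensity-based sound reduction index (ISO 15186-1 style).

    LIn -- average normal sound intensity level over a measurement
           surface of area Sm enclosing the partition of area S (dB)
    """
    return Lp1 - 6.0 - (LIn + 10.0 * math.log10(Sm / S))

print(r_pressure(100.0, 60.0, 10.0, 10.0))   # 40.0 dB
```

The intensity method needs no receiving-room absorption term, which is precisely why the role of the Waterhouse correction in the source room becomes the open question examined in the project.<br />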

Damping <strong>of</strong> ultrasound for food production applications<br />

Adrien Roux<br />

Supervisor: Finn Jacobsen<br />

The Sonosteam process combines the effect <strong>of</strong> vapour and ultrasound and is used for disinfecting<br />

products such as food. However, long exposure to very high levels <strong>of</strong> ultrasound may be unhealthy for<br />

human beings. Thus the purpose <strong>of</strong> the project was to study methods <strong>of</strong> protecting people from the ultrasound.<br />

In particular, sound absorbers effective at very high frequencies and satisfying the standards of the<br />

food industry were examined. Because <strong>of</strong> the high absorption <strong>of</strong> air in this frequency range it turned out<br />

to be rather difficult to measure the absorbers under test. Consequently, the focus <strong>of</strong> the project turned<br />

to the measurement method for determining the absorption in the ultrasound frequency range. A test<br />

chamber consisting <strong>of</strong> a small pyramid made <strong>of</strong> glass and filled with dry nitrogen was used as a reverberation<br />

chamber, and the Schroeder method was used to determine the reverberant decay from measured<br />

impulse responses.<br />

The project was carried out in cooperation with SonoSteam.<br />
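The Schroeder method obtains the energy decay curve by backward integration of the squared impulse response; the reverberation time then follows from a line fit to the decay. A minimal sketch (assuming a noise-free impulse response and a T20-style evaluation):<br />

```python
import numpy as np

def schroeder_t60(ir, fs, db_hi=-5.0, db_lo=-25.0):
    """Reverberation time from the Schroeder backward integration of
    the squared impulse response, using a linear fit to the decay
    curve between db_hi and db_lo, extrapolated to 60 dB of decay."""
    energy = ir ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # backward integration
    edc_db = 10.0 * np.log10(edc / edc[0])       # normalised decay in dB
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= db_hi) & (edc_db >= db_lo)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second
    return -60.0 / slope

# Synthetic exponential decay with a known T60 of 0.5 s
fs = 8000
t = np.arange(int(fs * 0.8)) / fs
ir = np.exp(-6.91 * t / 0.5)
print(schroeder_t60(ir, fs))
```

At ultrasonic frequencies the practical difficulty described above is not the integration itself but obtaining a decay that is governed by the absorber rather than by the air (or, in the test chamber, the nitrogen) filling the room.<br />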

Reduction <strong>of</strong> low frequency noise from ventilation systems<br />

Semir Samardzic<br />

Supervisor: Finn Jacobsen<br />

The performance <strong>of</strong> low-frequency silencers can be predicted using a plane wave model, the<br />

transmission matrix method. Because of their low (static) pressure loss, concentric tube resonators are<br />

regarded as the most suitable reactive element for use in ventilation systems. Improvement <strong>of</strong> the silencers’<br />

performance at low frequencies was obtained by studying various configurations <strong>of</strong> concentric<br />

tube resonators, such as partitioning <strong>of</strong> the cavity, partial perforation, and using various perforation degrees<br />

<strong>of</strong> the inner tube. A number <strong>of</strong> configurations <strong>of</strong> concentric tube resonators were compared<br />

through simulations and measurements. A simple and convenient laboratory technique for measuring<br />

the insertion loss <strong>of</strong> a silencer with a high and with a low source impedance was developed. Fairly good<br />

agreement between the predicted and measured results was obtained.<br />

The project was carried out in cooperation with Lindab.<br />
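The transmission matrix (four-pole) method represents each silencer element by a 2×2 matrix relating pressure and volume velocity at its two ports, and cascaded elements are combined by matrix multiplication. A sketch for the simplest reactive element, a plain expansion chamber (plane-wave assumptions; not the concentric tube resonator geometry of the project):<br />

```python
import numpy as np

def expansion_chamber_tl(f, L, m, c=343.0):
    """Transmission loss of a simple expansion chamber from the
    four-pole (transfer) matrix of the chamber section.

    f -- frequency in Hz (scalar or array)
    L -- chamber length in m
    m -- area ratio S_chamber / S_pipe
    """
    k = 2.0 * np.pi * np.asarray(f, dtype=float) / c
    # Four-pole matrix of the chamber, normalised by the pipe's
    # characteristic impedance rho*c/S_pipe
    T11 = np.cos(k * L)
    T12 = 1j * np.sin(k * L) / m
    T21 = 1j * m * np.sin(k * L)
    T22 = np.cos(k * L)
    # Transmission loss for equal inlet and outlet pipes
    return 20.0 * np.log10(0.5 * np.abs(T11 + T12 + T21 + T22))

# Peak TL occurs when the chamber is a quarter of a wavelength long
print(expansion_chamber_tl(171.5, 0.5, 4.0))   # ~6.5 dB for m = 4
```

The partitioned and perforated resonators studied in the project are handled the same way: each partition or perforate section contributes its own four-pole matrix to the overall product.<br />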

Musicians’ room acoustics conditions<br />

David Santos Domínguez<br />

Supervisors: Jonas Brunskog, Cheol-Ho Jeong and Anders Christian Gade<br />

A 3D rendering system with eight loudspeakers for real time auralization.<br />


Usually musicians have problems hearing themselves or other musicians during performances on<br />

a badly designed stage. Different design recommendations and parameters as objective measures have<br />

been proposed since the early 1970s, and it seems to be generally accepted that musicians have one<br />

main concern: getting the right balance between hearing themselves (support) and hearing others. Because<br />

it is not clear how to obtain this balance, research is needed. This project set out to investigate<br />

musicians’ room acoustics conditions. An eight channel 3D rendering system based on Ambisonics was<br />

developed and adapted for use in subjective room acoustics experiments in which the musicians<br />

could play and hear the rendered room in real time. A set <strong>of</strong> pilot experiments with soloist musicians<br />

based on rooms modelled using Odeon was conducted in order to test the influence <strong>of</strong> the quantity<br />

and quality <strong>of</strong> the early reflections received by the musicians. The results indicated that more experimentation<br />

and test subjects are needed before clear conclusions can be established.<br />

A vacuum motor as a mechanical noise source<br />

Tarmo Saar<br />

Supervisor: Mogens Ohlrich<br />

Sketch <strong>of</strong> vacuum cleaner shell-structure with resiliently installed vacuum motor.<br />

Left: resilient rubber support element for vacuum-motor; right: symmetric test rig for determining the translational<br />

dynamic stiffness <strong>of</strong> a resilient support element (i.e. the black vibration isolators in the figure) under different<br />

static deformation.<br />

The purpose <strong>of</strong> this project was to investigate the dynamic properties and vibratory source<br />

strength <strong>of</strong> a typical vacuum motor; to develop a method for measuring dynamic stiffness <strong>of</strong> different<br />

types <strong>of</strong> resilient rubber mounts; and to examine a technique for estimating the power transmitted from<br />

a resiliently mounted motor to the plastic shell <strong>of</strong> a vacuum cleaner. Vibration isolation <strong>of</strong> the vacuum<br />

motor is necessary since even a minor rotor-imbalance produces audible vibration and noise because <strong>of</strong><br />

the high revolution speed. Data and properties required for predicting the transmitted power are the free<br />

velocities <strong>of</strong> the source at its connecting points, the mobilities at these terminals, and the isolator stiffness,<br />

as well as the mobilities <strong>of</strong> the receiving shell structure. The complex dynamic stiffness <strong>of</strong> the isolators<br />

was calculated from mobility measurements performed on specially designed test rigs that allow<br />


for static compression, as shown in the figure. Calibration and prediction <strong>of</strong> the transmitted power have<br />

revealed that cross-coupling between terminals plays a role for the examined case at frequencies below<br />

100 Hz; this is the subject <strong>of</strong> further investigations.<br />

The project was carried out in cooperation with Peter Nøhr Larsen from Nilfisk-Advance A/S.<br />

Evaluation and development <strong>of</strong> simplified models for airborne and impact sound insulation <strong>of</strong><br />

double leaf structures<br />

Irma Albolario Vedsted<br />

Supervisors: Jonas Brunskog and Mogens Ohlrich<br />

This study examined some of the available simplified prediction models for sound insulation.<br />

The main focus was on double leaf lightweight structures. The purpose was to find the weaknesses <strong>of</strong><br />

the prediction models and try to improve them. The evaluation was based on laboratory measurements,<br />

which were compared with the evaluated and improved prediction models. The lightweight double leaf<br />

structures considered in the study were mainly of gypsum board. The prediction models included were<br />

those with almost the same parameters as used in the laboratory measurements, and the investigation<br />

was limited to sound transmission through the direct path; flanking transmission was not considered.<br />

The acoustics <strong>of</strong> bullrings used for musical concerts<br />

Raúl Zambrano Izcara<br />

Supervisors: Cheol-Ho Jeong and Jonas Brunskog<br />

Many popular concerts are given in large arenas or huge stadiums that are not built for musical<br />

purposes. Thus in Spain concerts <strong>of</strong>ten take place in bullrings. Some <strong>of</strong> these bullrings are covered by a<br />

concave dome in order to provide sound insulation. This project was focused on La Cubierta de Leganés.<br />

Acoustic simulations using the Odeon model were carried out both for the uncovered and covered<br />

bullring. The objective was to improve the acoustic conditions <strong>of</strong> these huge constructions. The<br />

simulations concentrated on the room acoustic parameters reverberation time and clarity. Some focusing<br />

problems were encountered because <strong>of</strong> the circular shapes <strong>of</strong> the venue. Moreover, for the covered<br />

bullring, because <strong>of</strong> low absorption in the ceiling at low frequencies, long reverberation times and low<br />

clarity were observed.<br />

Jens Holger Rindel, Odeon, served as external supervisor.<br />

A model <strong>of</strong> a bullring, and the distribution <strong>of</strong> the sound pressure level determined by Odeon simulations.<br />

Compensation <strong>of</strong> loudspeaker nonlinearities - DSP implementation<br />

Karsten Øyen<br />

Supervisor: Finn Agerkvist<br />

In this project, compensation of loudspeaker nonlinearities was investigated. A compensation system<br />

based on a loudspeaker model (a computer simulation of a real loudspeaker) was first simulated in<br />

MATLAB and then implemented on a DSP for real-time testing. This was a pure feedforward system. However,<br />

loudspeaker parameters drift because of temperature and ageing. This reduces the performance<br />

of the compensation. To improve the system, online tracking of the loudspeaker’s linear parameters<br />


is needed. The compensation system was tested without such parameter identification, using the loudspeaker<br />

diaphragm excursion as the output measure. The loudspeaker output and the output <strong>of</strong> the loudspeaker<br />

model were monitored, and the loudspeaker model was manually adjusted to fit the real loudspeaker.<br />

This was done by real-time tuning on DSP. The system seemed to work for some input frequencies<br />

and not for others.<br />


Journal Papers<br />

S. Barrera-Figueroa, K. Rasmussen and F. Jacobsen: A note on determination <strong>of</strong> the diffuse-field sensitivity <strong>of</strong><br />

microphones using the reciprocity technique. Journal <strong>of</strong> the Acoustical Society <strong>of</strong> America 124, <strong>2008</strong>, 1505-1512.<br />

L. Friis and M. Ohlrich: Vibration modeling <strong>of</strong> structural fuzzy with continuous boundary. Journal <strong>of</strong> the Acoustical<br />

Society <strong>of</strong> America 123, <strong>2008</strong>, 718-728.<br />

L. Friis and M. Ohlrich: Simple vibration modeling <strong>of</strong> structural fuzzy with continuous boundary by including<br />

two-dimensional spatial memory. Journal <strong>of</strong> the Acoustical Society <strong>of</strong> America 124, <strong>2008</strong>, 192-202.<br />

F. Jacobsen, X. Chen and V. Jaud: A comparison <strong>of</strong> statistically optimized near field acoustic holography using<br />

single layer pressure velocity measurements and using double layer pressure measurements. Journal <strong>of</strong> the Acoustical<br />

Society <strong>of</strong> America 123, <strong>2008</strong>, 1842-1845.<br />

A.I. Tarrero, M.A. Martín, J. González, M. Machimbarrena and F. Jacobsen: Sound propagation in forests: A<br />

comparison <strong>of</strong> experimental results and values predicted by the Nord 2000 model. Applied <strong>Acoustics</strong> 69, <strong>2008</strong>,<br />

662-671.<br />

G.B. Jónsson and F. Jacobsen: A comparison <strong>of</strong> two engineering models for outdoor sound propagation: Harmonoise<br />

and Nord2000. Acta Acustica united with Acustica 94, <strong>2008</strong>, 282-289.<br />

J. Escolano, F. Jacobsen and J.J. López: An efficient realization <strong>of</strong> frequency dependent boundary conditions in<br />

an acoustic finite difference time-domain model. Journal <strong>of</strong> Sound and Vibration 316, <strong>2008</strong>, 234-247.<br />

C.-H. Jeong, J.-G. Ih and J.H. Rindel: An approximate treatment of reflection coefficient in the phased beam tracing<br />

method for the simulation <strong>of</strong> enclosed sound fields at medium frequencies. Applied <strong>Acoustics</strong> 69, <strong>2008</strong>, 601-<br />

613.<br />

C.-H. Jeong and J.-G. Ih: On the errors <strong>of</strong> the phased beam tracing method for the room acoustic analysis. Journal<br />

<strong>of</strong> the Acoustical Society <strong>of</strong> Korea 27, <strong>2008</strong>, 1-11 (in Korean).<br />

Y. Luan and F. Jacobsen: A method <strong>of</strong> measuring the Green’s function in an enclosure. Journal <strong>of</strong> the Acoustical<br />

Society <strong>of</strong> America 123, <strong>2008</strong>, 4044-4046.<br />

M.C. Vigeant, L.M. Wang and J.H. Rindel: Investigation <strong>of</strong> orchestra auralizations using the multi-channel multisource<br />

auralization technique. Acta Acustica united with Acustica 94, 2008, 866-882.<br />

N. Stefanakis, J. Sarris, G. Cambourakis and F. Jacobsen: Power-output regularization in global sound equalization.<br />

Journal <strong>of</strong> the Acoustical Society <strong>of</strong> America 123, <strong>2008</strong>, 33-36.<br />

Theses<br />

L. Friis: An investigation <strong>of</strong> internal feedback in hearing aids. Department <strong>of</strong> Electrical Engineering, Technical<br />

University <strong>of</strong> Denmark. PhD thesis, <strong>2008</strong>.<br />

T. H. Leth Elmkjær: Foundations <strong>of</strong> active control. Active noise reduction helmets. Department <strong>of</strong> Electrical Engineering,<br />

Technical University <strong>of</strong> Denmark. PhD thesis, <strong>2008</strong>.<br />


Chapters in Books<br />

F. Jacobsen: Intensity techniques. Chapter 58 (pp. 1109-1127) in Handbook <strong>of</strong> Signal Processing in <strong>Acoustics</strong>,<br />

eds. D. Havelock, S. Kuwano and M. Vorländer. Springer Verlag, New York, <strong>2008</strong>.<br />

F. Jacobsen and H.-E. de Bree: The Microflown particle velocity sensor. Chapter 68 (pp. 1283-1291) in<br />

Handbook <strong>of</strong> Signal Processing in <strong>Acoustics</strong>, eds. D. Havelock, S. Kuwano and M. Vorländer. Springer Verlag,<br />

New York, <strong>2008</strong>.<br />

Edited Books<br />

Acoustic Signals and Systems (ed. F. Jacobsen), Part I (pp. 3-144) <strong>of</strong> Handbook <strong>of</strong> Signal Processing in <strong>Acoustics</strong>,<br />

eds. D. Havelock, S. Kuwano and M. Vorländer. Springer Verlag, New York, <strong>2008</strong>.<br />

Conference Papers<br />

F. Agerkvist and B. Rohde Pedersen: Time variance <strong>of</strong> the suspension nonlinearity. Proceedings <strong>of</strong> the 125th Audio<br />

Engineering Society Convention, 2008.<br />

F. Agerkvist, K. Thorborg and C. Tinggaard: A study <strong>of</strong> the creep effect in loudspeaker suspension. Proceedings<br />

of the 125th Audio Engineering Society Convention, 2008.<br />

B. Rohde Pedersen and F. Agerkvist: Non-linear loudspeaker unit modeling. Proceedings <strong>of</strong> the 125th Audio Engineering<br />

Society Convention, 2008.<br />

S. Barrera-Figueroa, F. Jacobsen and K. Rasmussen: On determination <strong>of</strong> microphone response and other parameters<br />

by a hybrid experimental and numerical method. Proceedings <strong>of</strong> <strong>Acoustics</strong> ’08, Paris, France, <strong>2008</strong>, pp.<br />

1519-1524.<br />

S. Barrera-Figueroa, F. Jacobsen and K. Rasmussen: On the relation between the radiation impedance and the<br />

diffuse-field response <strong>of</strong> measurement microphones. Proceedings <strong>of</strong> Inter-Noise <strong>2008</strong>, Shanghai, China, <strong>2008</strong>.<br />

L. Blanchard: High sound quality and concha headphones: where are the limitations? Proceedings <strong>of</strong> <strong>Acoustics</strong><br />

’08, Paris, France, <strong>2008</strong>, pp. 717-722.<br />

J. Brunskog, A.C. Gade, G. Payà Bellester and L. Reig Calbo: Speaker comfort and increase <strong>of</strong> voice level in lecture<br />

rooms. Proceedings <strong>of</strong> <strong>Acoustics</strong> ‘08, Paris, France, <strong>2008</strong>, pp. 3863-3868.<br />

A.C. Gade: Trends in preference, programming and design <strong>of</strong> concert halls for symphonic music. Proceedings <strong>of</strong><br />

<strong>Acoustics</strong> ’08, Paris, France, <strong>2008</strong>, pp. 345-350.<br />

E. Georganti, J. Mourjopoulos and F. Jacobsen: Analysis <strong>of</strong> room transfer function and reverberant signal statistics.<br />

Proceedings <strong>of</strong> <strong>Acoustics</strong> ’08, Paris, France, <strong>2008</strong>, pp. 5637-5642.<br />

F. Jacobsen: Measurement <strong>of</strong> total sound energy in an enclosure at low frequencies. Proceedings <strong>of</strong> <strong>Acoustics</strong> ’08,<br />

Paris, France, <strong>2008</strong>, pp. 3249-3254.<br />

F. Jacobsen, J. Hald, E. Fernandez and G. Moreno: Spherical near field acoustic holography with microphones on<br />

a rigid sphere. Proceedings <strong>of</strong> <strong>Acoustics</strong> ’08, Paris, France, <strong>2008</strong>, pp. 2869-2873.<br />

F. Jacobsen and X. Chen: The incident sound power in a diffuse sound field. Proceedings <strong>of</strong> Fifteenth International<br />

Congress on Sound and Vibration, Daejeon, Korea, <strong>2008</strong>, pp. 690-695.<br />

F. Jacobsen, G. Moreno, E. Fernandez Grande and J. Hald: Near field acoustic holography with microphones on a<br />

rigid sphere. Proceedings <strong>of</strong> Inter-Noise <strong>2008</strong>, Shanghai, China, <strong>2008</strong>.<br />


C.-H. Jeong and J.-G. Ih: Directional distribution <strong>of</strong> acoustic energy density incident to a surface under reverberant<br />

condition. Proceedings of Acoustics ’08, Paris, France, 2008, pp. 3077-3082.<br />

G.B. Jónsson and F. Jacobsen: A comparison <strong>of</strong> two engineering models for sound propagation: Harmonoise and<br />

Nord2000. Proceedings <strong>of</strong> Joint Baltic-Nordic <strong>Acoustics</strong> Meeting BNAM <strong>2008</strong>, Reykjavik, Iceland, <strong>2008</strong>.<br />

J.D. Alvarez B. and F. Jacobsen: An iterative method for determining the surface impedance <strong>of</strong> acoustic materials<br />

in situ. Proceedings <strong>of</strong> Inter-Noise <strong>2008</strong>, Shanghai, China, <strong>2008</strong>.<br />

Y. Luan: The structural acoustic properties <strong>of</strong> stiffened shells. Proceedings <strong>of</strong> <strong>Acoustics</strong> ’08, Paris, France, <strong>2008</strong>,<br />

pp. 393-398.<br />

L.-G. Sjökvist, J. Brunskog and F. Jacobsen: Parameter survey <strong>of</strong> a rib stiffened wooden floor using sinus modes<br />

model. Proceedings <strong>of</strong> <strong>Acoustics</strong> ’08, Paris, France, <strong>2008</strong>, pp. 3011-3015.<br />

Other Papers<br />

J. Brunskog: Att tala i en undervisningslokal. Bygg & Teknik 99 (3), <strong>2008</strong>.<br />

V. Tarnow and F. Jacobsen: Instruments de mesure en acoustique. Techniques de l’Ingenieur, Dossier R6010,<br />

<strong>2008</strong>.<br />

<strong>Report</strong>s<br />

J. Forssén, W. Kropp, J. Brunskog, S. Ljunggren, D. Bard, G. Sandberg, F. Ljunggren, A. Ågren, O. Hallström, H.<br />

Dybro, K. Larsson, K. Tillberg, K. Jarnerö, L.-G. Sjökvist, B. Östman, K. Hagberg, Å. Bolmsvik, A. Olsson, C.-G.<br />

Ekstrand and M. Johansson: <strong>Acoustics</strong> in wooden buildings. State <strong>of</strong> the art <strong>2008</strong>. SP <strong>Report</strong> <strong>2008</strong>: 16, SP Trätek,<br />

<strong>2008</strong>.<br />

Abstracts<br />

J.-G. Ih and C.-H. Jeong: Acoustic source identification in an enclosed space using the inverse phased beam tracing<br />

at medium frequencies. Journal <strong>of</strong> the Acoustical Society <strong>of</strong> America 123, <strong>2008</strong>, p. 3309.<br />



The group Hearing Systems, Speech and Communication is concerned with auditory signal processing<br />

and perception, psychoacoustics, speech perception, audio-visual speech, audiology and objective<br />

measures of auditory function. The objectives of the research are to increase our understanding of<br />

the functioning <strong>of</strong> the human auditory system and to provide insights that can be useful for technical<br />

applications such as hearing aids, speech recognition systems, hearing diagnostics tools and cochlear<br />

implants.<br />

Auditory Signal Processing and Perception<br />

The research in auditory signal processing and perception is mainly concerned with understanding<br />

the relation between basic auditory functions and measures <strong>of</strong> speech perception. Computational<br />

models <strong>of</strong> auditory perception are developed based on the results from listening experiments both in<br />

normal-hearing and hearing-impaired listeners. These models are integrated into speech and hearing-aid<br />

signal processing applications.<br />

Hearing aid amplification at low input levels<br />

Helen Connor<br />

Supervisor: Torben Poulsen<br />

Persons with a hearing loss often have a loss of audibility of soft sounds, for example, distant<br />

voices and birds. This lack <strong>of</strong> audibility can be alleviated by means <strong>of</strong> a compressor hearing aid. This<br />

project considers some audiological factors and hearing aid parameters that influence the preference for<br />

compression threshold. One experiment investigated the subjectively preferred compression threshold<br />

with stimuli from real-life everyday environments. The stimuli were compressed offline using six different<br />

compression settings, and the compressed signals were presented to twelve subjects via the direct<br />

audio input <strong>of</strong> binaurally fitted linear hearing aids. The results showed that the preference for low compression<br />

thresholds increased with increased compression release time. A second experiment was a field<br />

trial designed to validate the results from the first experiment when hearing aids are worn by the subjects<br />

in their own daily listening environments. Twenty hearing impaired subjects compared two compression<br />

threshold settings in two two-week trial periods. At the end <strong>of</strong> each trial, the subjects were interviewed<br />

about their programme preference.<br />

This industrial PhD project is carried out in cooperation with the hearing aid manufacturer Widex<br />

A/S with Carl Ludvigsen as the industrial supervisor.<br />
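The static behaviour of such a compressor can be sketched as an input/output rule: below the compression threshold (CT) the gain is level-independent, and above it each extra dB of input yields only 1/ratio dB of output. The CT and ratio values below are purely illustrative, not the settings used in the experiments.

```python
import numpy as np

def compressor_gain_db(level_db, ct_db=40.0, ratio=3.0):
    """Static input/output rule of a compressor: unity gain below the
    compression threshold (CT); above it, each extra dB of input gives
    only 1/ratio dB of output. CT and ratio values are illustrative."""
    level_db = np.asarray(level_db, dtype=float)
    excess = np.maximum(level_db - ct_db, 0.0)
    return -excess * (1.0 - 1.0 / ratio)

levels = np.array([20.0, 40.0, 70.0])     # input levels (dB SPL)
gains = compressor_gain_db(levels)        # 0 dB below CT, -20 dB at 70 dB SPL
outputs = levels + gains                  # output levels (dB SPL)
```

Lowering the CT extends compression to softer inputs, which is exactly the trade-off the listening experiments probed.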

Estimating the basilar-membrane input/output function in normal-hearing and hearing-impaired listeners

Morten L. Jepsen

Supervisor: Torsten Dau

To characterise cochlear processing it is desirable to estimate the basilar-membrane input/output function behaviourally in humans. Such estimates of compression are useful for adjusting the parameters of basilar-membrane models to simulate individual hearing loss. In recent studies, forward masking has been used to estimate the input/output function, showing linear behaviour at low input levels and compressive behaviour above a certain input level (the knee-point). In this study, an existing method is extended to estimate both the knee-point and the amount of compression. Data have been collected from seven normal-hearing listeners and five hearing-impaired listeners with a mild to moderate sensorineural hearing loss. Both groups showed large inter-subject but low intra-subject variability. The knee-point estimated for the hearing-impaired listeners was similar to that of the normal-hearing listeners or shifted by up to about 30 dB towards higher input levels, and the amount of compression was similar or increased.


The basic structure of a dual resonance nonlinear filter. The input is split into two paths: a linear path and a nonlinear path. The nonlinear path contains a single nonlinear element, a ‘broken-stick’ function that realises instantaneous compression above a threshold amplitude. The output of the system is the sum of the two paths.
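The broken-stick nonlinearity described in the caption can be sketched as follows: the output follows a linear branch at low amplitudes and a compressive branch above the knee where the two branches cross. The constants a, b and c are illustrative; in the DRNL model they are fitted per frequency channel.

```python
import numpy as np

def broken_stick(x, a=100.0, b=3.0, c=0.25):
    """Instantaneous 'broken-stick' compression: linear (gain a) at low
    amplitudes, compressive (b * |x|**c) above the knee where the two
    branches intersect. Constants are illustrative, not fitted values."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.minimum(a * np.abs(x), b * np.abs(x) ** c)

# The full filter output is the sum of a linear path and this nonlinear
# path; only the nonlinearity itself is shown here.
low = broken_stick(1e-4)    # linear branch: 100 * 1e-4 = 0.01
high = broken_stick(1.0)    # compressive branch: 3 * 1**0.25 = 3.0
```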

Modelling auditory perception of individual hearing-impaired listeners

Morten L. Jepsen

Supervisor: Torsten Dau

In this work the perceptual consequences of hearing impairment in individual listeners have been investigated within the framework of the computational auditory signal processing and perception model of Jepsen et al. (2008). Several parameters of the model were modified according to data from psychoacoustic measurements. Three groups of listeners were considered: normal-hearing listeners, listeners with a mild-to-moderate sensorineural hearing loss, and listeners with a severe sensorineural hearing loss. The simulations showed that reduced cochlear compression due to outer hair-cell loss quantitatively accounts for broadened auditory filters. A combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from forward masking. The model may be useful for the evaluation of hearing-aid algorithms, where a reliable simulation of hearing impairment may reduce the need for time-consuming listening tests during development.

Spectro-temporal analysis of complex sounds in the human auditory system

Tobias Piechowiak

Supervisor: Torsten Dau

This PhD project investigates the auditory processing of modulated sounds. This is relevant because many natural sounds, including speech, contain amplitude and frequency modulations. One particular feature of such sounds is that they exhibit temporally coherent amplitude modulations across a wide range of audio frequencies. It has been proposed that the auditory system evaluates this across-frequency coherence to facilitate signal detection (or speech recognition) in a noisy acoustic background. In the modelling part of the project, an auditory mechanism for signal enhancement was suggested that processes sound components of different spectral content (within each ear), in a similar way as was earlier established for across-ear processing for improved localisation. To test the proposed ‘equalisation-cancellation’ mechanism, various listening experiments were performed and the results compared with predictions from a computational model of auditory processing.
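The equalisation-cancellation idea can be illustrated with a toy two-channel example (the project applies the same principle across frequency channels within one ear): two observations share an interfering masker up to a gain difference, so equalising (estimating and undoing that gain) and then cancelling (subtracting) removes most of the masker while the target survives. All signal parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000
n = fs                                   # 1 s of signal
t = np.arange(n) / fs

masker = rng.standard_normal(n)          # shared interferer
target = 0.05 * np.sin(2 * np.pi * 500 * t)   # weak tonal target

# Two observations share the masker up to a gain; one also holds the target.
g = 0.7
ch1 = masker + target
ch2 = g * masker

# Equalisation: least-squares estimate of the gain difference.
g_hat = np.dot(ch1, ch2) / np.dot(ch2, ch2)
# Cancellation: subtract the equalised channel; the masker largely
# cancels, leaving (mostly) the target.
residual = ch1 - g_hat * ch2
```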

Neural coding and perception of pitch in the normal and impaired human auditory system

Sébastien Santurette

Supervisors: Torsten Dau and Jörg Buchholz

The purpose of this PhD project is to investigate how the human auditory system extracts the pitch of sounds. Pitch is an essential attribute of sound that contributes to speech intelligibility, music perception, and sound source segregation. It is thus important to understand the mechanisms that underlie pitch perception. In particular, it is crucial to determine whether the auditory system uses mechanisms based on temporal cues, spectral cues, or both, for pitch extraction. To distinguish better between competing theories, the project focuses on relating pitch perception outcomes to measures of basic auditory functions in normal-hearing and hearing-impaired listeners. Such an approach could reveal how, and at which level of the auditory system, pitch is represented, and indicate which modelling approach to favour, with possible implications for hearing-aid and cochlear-implant signal processing.

Perception of pitch.

Relating binaural pitch perception to measures of basic auditory functions

Sébastien Santurette

Supervisor: Torsten Dau

Binaural pitch is a tonal sensation produced by introducing an interaural phase shift in binaurally presented white noise. As no spectral cues are present in the stimulus, binaural pitch perception is assumed to rely on accurate temporal fine structure (TFS) coding and intact binaural integration mechanisms. This study investigated to what extent basic auditory measures of binaural processing and TFS processing, as well as cognitive abilities, are correlated with the ability of hearing-impaired listeners to perceive binaural pitch. Hearing-impaired listeners who could not perceive binaural pitch showed neither a general difficulty in extracting tonal objects from noise nor reduced cognitive abilities. Instead, they showed impaired binaural processing of TFS, reflected by reduced binaural masking and binaural intelligibility level differences, as well as a loss of phase locking at much lower frequencies than for other hearing-impaired subjects. These results suggest that the absence of binaural pitch perception is a good indicator of a deficit in low-level binaural processing.

Measurement of binaural pitch.
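A Huggins-type binaural pitch stimulus of the kind described above can be generated by sending a single white-noise token to both ears and letting the interaural phase ramp from 0 to 2π across a narrow band around the boundary frequency; everywhere else the two ears receive identical noise. The frequency, bandwidth and sampling values below are illustrative.

```python
import numpy as np

def huggins_pitch(f0=600.0, fs=44100, dur=1.0, width=0.16, seed=0):
    """Huggins binaural pitch: diotic white noise except for a
    progressive 0..2*pi interaural phase transition in a narrow band
    around f0 (relative bandwidth `width`). Returns (left, right)."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    left = rng.standard_normal(n)
    spec = np.fft.rfft(left)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    lo, hi = f0 * (1 - width / 2), f0 * (1 + width / 2)
    # Interaural phase ramps from 0 to 2*pi across the transition band.
    phase = np.zeros_like(freqs)
    band = (freqs >= lo) & (freqs <= hi)
    phase[band] = 2 * np.pi * (freqs[band] - lo) / (hi - lo)
    phase[freqs > hi] = 2 * np.pi     # equivalent to 0: diotic again
    right = np.fft.irfft(spec * np.exp(1j * phase), n)
    return left, right

left, right = huggins_pitch()
```

Because only phases are altered, each ear alone contains spectrally flat noise with no monaural cue to the pitch; the tonal sensation arises purely binaurally.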


Detection and identification of binaural and monaural pitches in dyslexic patients

Sébastien Santurette

Supervisor: Torsten Dau

Binaural pitch stimuli have been used in several recent studies to test for binaural auditory impairment in reading-disabled subjects. However, the outcomes of these studies are contradictory: whereas some studies found that a majority of dyslexic subjects were unable to hear binaural pitch, another study obtained a clear response to Huggins’ pitch (HP) from subjects in the dyslexic group. This work aimed at clarifying whether impaired binaural pitch perception is found in dyslexia. Results from a pitch contour identification test, performed with 31 dyslexic listeners and 31 matched controls, clearly showed that the dyslexic listeners perceived HP as well as the controls did. However, nine of the dyslexic subjects had difficulties identifying pitch contours regardless of the stimulus used. The ability of subjects to identify pitch contours correctly was found to be significantly correlated with measures of frequency discrimination.

This work was carried out at the Division of Experimental Otolaryngology, Department of Neurosciences, K.U. Leuven, Belgium, with contributions from Hanne Poelmans, Heleen Luts, and Jan Wouters.

Frequency selectivity, temporal fine-structure processing and speech reception in impaired hearing

Olaf Strelcyk

Supervisor: Torsten Dau

Hearing-impaired people encounter great difficulties with speech communication, particularly in the presence of background noise. The benefit of hearing aids varies strongly among individual listeners: whereas some show good performance in speech communication, others continue to experience difficulties. The hypothesis of this project is that the variation in performance among listeners can, at least partly, be traced back to changes in the perception of sounds well above hearing threshold. Specifically, it is hypothesised that reduced frequency selectivity as well as deficits in the processing of temporal fine structure are related to degraded speech reception. Perceptual listening experiments have been performed with normal-hearing listeners, hearing-impaired listeners and listeners with an obscure auditory dysfunction. The results gave insights into impairments in the peripheral auditory system that are related to speech reception deficits, and they provide constraints for future models of impaired auditory signal processing.

This PhD project is co-supervised by Graham Naylor, Oticon.

Auditory processing based on the spatio-temporal cochlear response may be degraded in hearing-impaired listeners, due to reduced frequency selectivity, degraded phase locking or reductions in converging input to coincidence detectors.

Objective and behavioural estimates of cochlear response times in normal-hearing and hearing-impaired human listeners

Olaf Strelcyk

Supervisor: Torsten Dau

Auditory brainstem responses have been obtained from normal-hearing and hearing-impaired listeners. The latencies extracted from these responses serve as objective estimates of cochlear response times. In addition, two behavioural measurements have been carried out. In the first experiment, cochlear response times were estimated using the lateralisation of pulsed tones interaurally mismatched in frequency. In the second experiment, auditory-filter bandwidths were estimated. The correspondence between the objective and behavioural estimates of cochlear response times was examined, and an inverse relationship between the auditory brainstem response latencies and the filter bandwidths could be demonstrated. The results might be useful for a better understanding of how hearing impairment affects the cochlear response pattern in human listeners.

LTFAT – The linear time-frequency analysis toolbox

Peter L. Søndergaard

This is ongoing work. The linear time-frequency analysis toolbox was started in 2004 and aims to be a next-generation MATLAB signal processing toolbox focused on time-frequency analysis using Gabor analysis. A major strength of the toolbox is that it enables easy linear or nonlinear modification of the time-frequency representation of a signal. The toolbox is Free Software and can be obtained from http://ltfat.sourceforge.net. It is joint work with the Acoustics Research Institute, Austrian Academy of Sciences, Vienna, and the Laboratoire d’Analyse, Topologie et Probabilites, Université de Provence, Marseille, but most of the work is carried out at CAHR.

Investigation of the independent manipulation of envelope and temporal fine structure in psychoacoustics

Peter L. Søndergaard

An important topic in current psychoacoustic research is the role of the envelope and the temporal fine structure (TFS) of complex signals such as speech. The envelope refers to the slow changes of a signal at a given frequency, and the TFS refers to the fast changes (the information in the carrier wave). In the cochlea the envelope and the TFS are naturally extracted from the input by the action of the basilar membrane and the inner hair cells. In this project it was investigated mathematically which information is carried by the envelope and which by the TFS. It was shown that certain methods used in the recent literature to investigate the contribution of TFS versus envelope to speech and music perception are problematic, since the information carried by the TFS and the envelope is not independent.

Spectrogram (left) and instantaneous frequency (right) of the Danish word ‘Stok’. Both representations of the signal contain the same information.
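The envelope/TFS decomposition, and the dependence the project points out, can be sketched with the analytic signal: the Hilbert envelope and the cosine of the instantaneous phase multiply back to the original waveform, so the two parts cannot be manipulated fully independently. The AM test tone below is invented for illustration.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency domain (equivalent to
    scipy.signal.hilbert): negative frequencies are zeroed."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
# AM tone: 1-kHz carrier (fine structure) modulated at 4 Hz (envelope).
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.cos(2 * np.pi * 1000 * t)

z = analytic_signal(x)
envelope = np.abs(z)            # slow amplitude changes
tfs = np.cos(np.angle(z))       # unit-amplitude fine structure
recombined = envelope * tfs     # equals x: the parts are not independent
```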

Interactions of interaural time differences and amplitude modulation detection

Eric R. Thompson

Supervisor: Torsten Dau

Psychoacoustic measurements have been carried out to determine the effect of a perceived spatial separation on amplitude modulation (AM) detection thresholds. Two temporally interleaved transposed stimuli were used as carriers for a narrowband-noise modulation masker and a 16-Hz sinusoidal modulation probe. With these stimuli, the interaural time difference (ITD) of the masker and probe carriers could be adjusted independently. In the first experiment, the listeners adjusted the interaural level difference of a pointer stimulus to be aligned with the perceived lateral position of either the masker or the probe stimulus, as a function of the masker and probe ITDs. The results showed that the listeners could lateralise the two stimuli separately and robustly. The second experiment measured masked AM detection thresholds as a function of the masker modulation frequency and masker ITD, using a diotic 16-Hz AM probe. These results showed modulation frequency tuning without a spatial release from modulation masking, even though the masker and probe were perceived as spatially separated. This suggests that amplitude modulation cues and lateralisation cues are processed independently and in parallel in the auditory system.

Speech Perception

The research in speech perception aims at describing the mechanisms underlying speech intelligibility, i.e., how human listeners decode and integrate the information carried by the speech signal. Models of speech intelligibility and models quantifying speech perception under challenging listening conditions are under development. Like other areas in the Centre for Applied Hearing Research, the research is interdisciplinary and relies on psychoacoustics, auditory signal processing and phonetics.

Processing of spatial sounds in the impaired auditory system

Iris Arweiler

Supervisors: Jörg Buchholz and Torsten Dau

A common complaint from people with hearing impairment is difficulty with speech communication, particularly when background noise is present. The problem often persists even if hearing aids are used. The overall goal of this project is to analyse how hearing-aid signal processing affects speech intelligibility in complex listening conditions for listeners with different degrees and types of hearing impairment. The first part focuses on the influence of early reflections on speech intelligibility; early reflections can improve speech intelligibility in noisy conditions. Normal-hearing and hearing-impaired listeners will perform a monaural and binaural speech intelligibility task in a virtual auditory environment in which the direct sound and the early reflections can be varied independently. This will lead to a better understanding of the mechanisms underlying early-reflection processing. In the second part of the study, hearing aids will be fitted to the hearing-impaired listeners and their influence on early-reflection processing will be investigated under the same conditions.

Generating spatial sounds: direct sound, early reflections and ambient noise.

Building blocks of spontaneously spoken Danish—acoustically and perceptually

Thomas U. Christiansen, Steven Greenberg and Torsten Dau

The purpose of this project is to identify the basic articulatory, acoustic and auditory elements of spontaneously spoken Danish. This is achieved by developing a quantitative description of the transformation from linguistic representation via the acoustic signal to the auditory representation. The analysis is based on new results from empirical investigations in phonetics and on auditory models. A great challenge lies in explaining how large acoustic variation, caused by different talkers in different listening environments, can lead to a uniform linguistic percept. The results will impact a variety of research areas such as linguistics, hearing science, cognitive psychology, computational neuroscience, speech recognition and speech synthesis. Based on two observations, new basic elements in the production and perception of spoken language are proposed: 1) Slow energy variations in the speech signal below 30 Hz are represented in the auditory cortex; it is conjectured that this representation carries speech information under everyday listening conditions, and realistic models of this modulation representation have been proposed in the literature. 2) Certain articulatory and acoustic properties vary systematically with stress and position in the syllable; these properties are therefore better suited as building blocks of a quantitative description of spoken language than traditional units such as phonemes and diphones.

Phonetic flow of linguistic processing

Thomas U. Christiansen and Steven Greenberg

In spite of a substantial amount of research, the way in which speech is decoded is poorly understood. In particular, little is known about the process of decoding basic speech sounds such as consonants, both with respect to the underlying acoustic cues used for decoding and the stages the decoding process consists of. The aim of this project is to identify which acoustic cues are most important for consonant identification, and to outline the ‘phonetic flow of linguistic processing’, i.e., to describe the hierarchical process by which the speech signal is decoded. One way to achieve these goals is to collect data from a consonant identification task, the results of which are represented in so-called confusion matrices. By analysing the asymmetries of these confusion matrices it is possible to rule out certain decoding schemes while providing evidence for others. The results of the analyses are potentially important for signal processing in digital hearing aids and in automatic speech recognition systems.
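The asymmetry analysis can be sketched on a small, invented confusion matrix (rows: spoken consonant, columns: reported consonant); the antisymmetric part C - Cᵀ exposes directional confusions, which is the evidence used to rule decoding schemes in or out. The counts below are hypothetical, not data from the project.

```python
import numpy as np

# Hypothetical confusion counts for three consonants (rows: spoken,
# columns: reported); real matrices come from listening tests.
labels = ["b", "d", "g"]
C = np.array([[80, 15,  5],
              [ 4, 90,  6],
              [20, 10, 70]])

# Antisymmetric part: a nonzero entry A[i, j] means consonant i is
# reported as j more (or less) often than j is reported as i.
A = C - C.T

# Per-consonant identification rate (diagonal over row totals).
row_accuracy = np.diag(C) / C.sum(axis=1)
```

In this toy matrix /g/ is heard as /b/ four times more often than the reverse, an asymmetry a symmetric-distance decoding scheme could not produce.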

Objective evaluation of a loudspeaker-based room auralisation system

Sylvain Favrot

Supervisors: Jörg Buchholz and Torsten Dau

A loudspeaker-based room auralisation system has been developed for studying basic human perception in realistic environments. The system reproduces a controlled acoustic scenario, designed with the Odeon room acoustics software (state-of-the-art software developed at Acoustic Technology, DTU), by means of an array of twenty-nine loudspeakers placed on the surface of a sphere. An objective evaluation has been carried out, demonstrating the applicability of the system. Room acoustic parameters (reverberation time, clarity, speech transmission index and interaural cross-correlation coefficients) of room impulse responses were compared at the input and the output of the system. The results show that the signal processing involved preserves the temporal, spectral and spatial properties of the room impulse response.

The setup with loudspeakers in a damped room.

Predicting degraded consonant recognition in hearing-impaired listeners

Morten L. Jepsen

Supervisor: Torsten Dau

A reduced ability to recognise consonants plays a major role in speech perception in noise. Little is currently understood about how the auditory system processes and decodes speech information: which cues are the most salient, and why is the normally functioning system so robust in extracting the right cues? These aspects are explored by using a model of the normal and impaired auditory system to predict error patterns in consonant recognition. The idea is to use a simple binary consonant recognition task with synthesised stimuli. This makes it possible to use a simple speech recogniser, such that the predicted machine error can be associated with the degraded auditory processing. By analysing the auditory representation of the consonants, it may be possible to explore and define the salient cues in consonant discrimination and thereby obtain a better understanding of the auditory processing of speech in general.

This work was carried out at Boston University with Oded Ghitza as the local supervisor.

Perceptual compensation for effects of reverberation on speech identification

Jens Bo Nielsen

Supervisor: Torsten Dau

Reverberation can be viewed as a lowpass modulation filter that generally reduces speech intelligibility. Recent research has suggested that the auditory system compensates for this effect. This extrinsic compensation mechanism was demonstrated by asking listeners to identify a test word embedded in a carrier sentence. Reverberation was added to the test word but not to the carrier, and the ability to identify the test word decreased because the amplitude modulations were smeared. When a similar amount of reverberation was added to the carrier sentence, the listeners’ ability to identify the test word was restored. The present study, however, could not confirm that such a compensation mechanism exists. Here, the reverberant test word was embedded in several additional carriers without reverberation, and all of these carriers enhanced the ability to identify the test word. This suggests that a listener’s perception of the test word is affected by carrier characteristics other than reverberation. It is proposed that an interfering effect of the non-reverberant carrier, combined with the reverberant test word, can account for the data that have previously been taken as evidence of extrinsic compensation for reverberation.

CLUE - a Danish hearing-in-noise test

Jens Bo Nielsen

Supervisor: Torsten Dau

A Danish speech intelligibility test called Conversational Language Understanding Evaluation (CLUE) has been developed for assessing the speech reception threshold (SRT). The test consists of 180 sentences distributed over eighteen phonetically balanced lists. The sentences are based on an open word set and represent everyday language. They were equalised with respect to intelligibility to ensure uniform SRT assessments with all lists. In contrast to several previously developed tests, such as the hearing-in-noise test, where the equalisation is based on objective measures of word intelligibility, the present test uses an equalisation method based on subjective assessments of the sentences. The new equalisation method is shown to yield lists with less variance between the SRTs than the traditional method. The number of sentence levels included in the SRT calculation has also been evaluated and differs from previous tests. The test was verified with fourteen normal-hearing listeners; the overall SRT lies at a signal-to-noise ratio of -3.15 dB with a standard deviation of 1.0 dB, and the list SRTs deviate less than 0.5 dB from the overall mean.

The verification of the CLUE test with fourteen normal-hearing listeners showed that the eighteen sentence lists lead to very similar SRT determinations. The mean SRT for each list lies within ±0.5 dB of the overall average; the bars indicate one standard deviation.

A new speech manipulation framework

Peter L. Søndergaard

In the psychoacoustic community there is a desire to gradually use more and more complex test signals, eventually leading to manipulated speech. Speech manipulation systems date back to Flanagan’s phase vocoder from 1966. Because of the redundancy of information in speech, vocoders have never been able to produce speech signals that sound perfectly natural. The goal of this project is to build a limited speech manipulation system that can change an input speech signal in a desired way, in order to provide a new suite of test signals for the work being done at CAHR and in other research groups. The work is carried out in close cooperation with Thomas U. Christiansen to get input from the linguistics community.

Monaural and binaural consonant identification in reverberation

Eric R. Thompson

Supervisor: Torsten Dau

Consonant identifications have been obtained monaurally and binaurally using vowel-consonant-vowel stimuli convolved with binaural impulse responses recorded in a concert hall. The impulse responses had reverberation times of about 2.5 s and showed large interaural differences in their modulation transfer functions. The percentage of correct identifications was significantly higher when listening binaurally than in either monaural condition, and it was significantly lower for one impulse response than for the other two, despite similar speech transmission indices. Not every stimulus that was correctly identified monaurally was also correctly identified binaurally, indicating binaural interference. About 12% of the stimuli that were correctly identified binaurally were not correctly identified in either monaural condition, which shows a binaural advantage beyond simple ‘better-ear’ listening. The most frequent errors were voicing confusions, contrary to findings from previous studies, which have found voicing to be relatively robust against reverberation. The data will help in the development of models of binaural speech intelligibility in reverberant environments.

Binaural intensity modulation detection in reverberant environments

Eric R. Thompson

Supervisor: Torsten Dau

The speech transmission index (STI) uses the magnitude of the modulation transfer function (MTF) in a single channel to predict speech intelligibility in rooms. This method often underestimates speech intelligibility when binaural listening is possible. The two ears often have large differences in the MTF, and interaural modulation phase differences can create perceivable interaural intensity fluctuations that can be used to improve the detection of intensity modulations. Modulation detection measurements have been made monaurally and binaurally with three dichotic impulse responses, and anechoically, at modulation frequencies between 6 and 24 Hz. The first impulse response consisted of the direct sound and a single ideal reflection arriving at a different time in each ear; the other two were a simulation of a classroom and a recording from a concert hall. Monaurally, the thresholds measured with the impulse responses could be predicted within 1.5 dB from the threshold of the anechoic condition and the magnitude of the MTF. However, with each of the impulse responses there was a significant improvement in the modulation detection thresholds for binaural compared with monaural listening at a modulation frequency where there was a large interaural modulation phase difference. The results suggest a way for the STI to incorporate binaural cues and improve predictions of binaural speech intelligibility in reverberant environments.
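The single-channel MTF used by the STI can be sketched with Schroeder's formula, the normalised Fourier transform of the squared impulse response. The example below uses an ideal exponential decay with T60 = 2.5 s (matching the concert-hall responses mentioned above) to show the lowpass modulation behaviour that reverberation imposes; the sampling values are illustrative.

```python
import numpy as np

def mtf(h, fs, fm):
    """Schroeder modulation transfer function of an impulse response h:
    m(f) = |sum h(t)^2 exp(-j 2 pi f t)| / sum h(t)^2."""
    e = np.asarray(h, dtype=float) ** 2
    t = np.arange(len(e)) / fs
    num = np.abs(np.exp(-2j * np.pi * np.outer(fm, t)) @ e)
    return num / e.sum()

# Ideal exponential decay with T60 = 2.5 s; the energy h^2 falls by
# 60 dB at t = T60. Sampling rate and duration are illustrative.
fs, T60 = 48000, 2.5
t = np.arange(0, 5.0, 1.0 / fs)
h = np.exp(-3.0 * np.log(10) * t / T60)

fm = np.array([6.0, 12.0, 24.0])   # modulation frequencies (Hz)
m = mtf(h, fs, fm)                 # decreases with fm: reverberation
                                   # lowpass-filters the modulations
```

For this ideal decay the MTF has the closed form a/sqrt(a² + (2πf)²) with a = 6·ln(10)/T60, which the discrete computation reproduces closely.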

Audiology

Audiology is concerned with the study of impaired hearing and the possibilities of compensating for it, e.g., by means of a hearing aid. A major topic in this area is to develop new or better diagnostic tools in order to distinguish between different kinds of hearing loss. Traditional descriptions in terms of the audiogram, i.e., pure-tone thresholds, have been shown to be far from satisfactory. Better diagnostic tools are essential for the fitting of advanced signal processing hearing aids, allowing the hearing-impaired user to obtain the full benefit of the device. Identifying and quantifying the factors that describe a hearing loss is therefore one of the main goals of the research in audiology.

Open-plan office environment: a laboratory experiment on human perception, comfort and office work performance
Torben Poulsen

At the International Centre for Indoor Environment and Energy, DTU, a laboratory investigation has been conducted on the effect of office noise and temperature on human perception, comfort and office work performance. The project is organised by Geo Clausen, Department of Mechanical Engineering, DTU, and has been carried out in cooperation with Ivana Balazova, Indoor Air, and Jens Holger Rindel, Odeon.

Hearing in the communication society (HEARCOM)
Torben Poulsen and Torsten Dau

The general focus of the HearCom project has been on the identification and characterisation of limitations for auditory communication and on modelling and evaluation of ambient conditions that limit auditory communication in everyday situations. In 2008 DTU produced binaural room impulse responses to be used by one of the other partners. Furthermore, a review of various reports (deliverables) from other partners has taken place.

Musicians' health, sound and hearing
Torben Poulsen

A project on musicians' sound exposure and their risk of hearing impairment takes place at the Department of Occupational and Environmental Medicine, Odense University Hospital. Torben Poulsen serves as an external supervisor on a PhD study by Jesper Schmidt. The work is concerned with both classical and non-classical musicians. The project is related to the Centre of Musicians' Health in Odense, is managed by Jesper Bælum, and is based on a grant from the Working Environment Research Fund.

Transparent hearing protector for musicians
Torben Poulsen

The idea of this work, supported by a grant from the Working Environment Research Fund, is to develop a device that can prevent musicians from being exposed to excessively loud sounds but at the same time allow them to hear sounds at lower levels without attenuation. The device is based on the signal processing capabilities of a modern hearing aid. The work is carried out in cooperation with Ture Andersen, University of Southern Denmark.

Music and hearing loss
Torben Poulsen and Anders Christian Gade

Three short guidelines have been produced for the music and entertainment sector. The guidelines aim at hearing conservation and limitation of sound exposure for symphony orchestras, pop-jazz groups and music clubs/discotheques. The guidelines may be downloaded from http://www.barservice.dk/. The work is organised by Per Møberg Nielsen, Akustik Aps.

Guidelines (in Danish) for symphony orchestras, pop-jazz groups and music clubs/discotheques.

Audio-Visual Speech and Auditory Neuroscience

The research in the field of audio-visual speech is concerned with the process by which visual cues and auditory cues are integrated in the brain. It includes synchronised audio-visual speech generation and investigations of neural synchronisation in the brain. The central goal is to design a computer-based 'language laboratory' for lip-reading that can be used by people with a sudden profound hearing loss to quickly regain communication skills. Visual speech information is also useful for human-machine communication in noisy environments. One important aim of computational auditory neuroscience is to model human auditory perception with the help of biological neural networks.

Audio-visual speech analysis
Hans-Heinrich Bothe

The work is part of a framework to design and implement a multilingual language laboratory for speech- or lip-reading. The goal of the project was to model the dynamics of mouth movements during the speech articulation process for an open vocabulary. Video films with one speaker and emotion-free articulation were analysed, and the lip movements were modelled in the temporal context of English speech production. The dynamic model is based on characteristic single images of the video clip that represent the sounds and on transitions between neighbouring images. A general dynamic model employing parameters that state the importance of specific changes of the lip contours for the correct articulation of the sounds was used; it is, for example, necessary to close the lips completely in order to articulate a /p/, /b/ or /m/ correctly, whereas the correct articulation of a vowel such as /a/ or /e/ is ambiguous and depends on the context. The project is ongoing. The current results can predict the courses of the feature sets, with specific artifacts, when only the text input is given. In the future, an improved model will be developed, implemented and evaluated in audio-visual speech intelligibility tests.


Objective Measures of the Auditory Function

The field of objective measures of the auditory function comprises studies of auditory evoked potentials and otoacoustic emissions. Auditory evoked potentials are electrical fields recorded from the surface of the head in response to sound. Otoacoustic emissions are low-level sounds generated by the inner ear, in response to sound or spontaneously, that can be recorded in the ear canal. Otoacoustic emissions and evoked potentials can provide insights into the physiological state of the ear and the brain. Research projects range from basic scientific interpretations of the objective measures themselves to applied technical work on improving signal quality and robustness under noisy conditions.

Cochlear delay obtained with auditory brainstem and steady-state responses
Gilles Pigasse
Supervisors: James Harte and Torsten Dau

A great deal of the processing of incoming sounds in the auditory system occurs within the inner ear, the cochlea. The cochlea has different mechanical properties along its length. Its stiffness is at a maximum at the base and decreases towards the apex, resulting in locally resonant behaviour. High frequencies have their maximal response at the base and low frequencies at the apex. The wave travelling along the basilar membrane therefore has a longer travel time for low-frequency stimuli than for high-frequency stimuli. In order to obtain an objective estimate of the cochlear delay in humans, the present project investigated several non-invasive methods, including otoacoustic emissions, auditory brainstem responses and auditory steady-state responses, and the three methods were compared. Below 2 kHz the otoacoustic emission delay was twice the cochlear delay, as if the travelling wave went back and forth in the cochlea. However, this relation did not hold for higher frequencies, calling into question the physical relation between the delay estimates obtained from otoacoustic emissions and those obtained from the evoked responses.

This PhD project was defended successfully on 3 December 2008.

Optimised stimuli for auditory evoked potentials and otoacoustic emissions
James Harte and Torsten Dau

Spectrogram for (a) click- and (b) chirp-evoked otoacoustic emissions. The stimulus can be seen in the left part of each spectrogram, recognisable by its high level. The otoacoustic emissions occur later and have a level about 40 dB below the stimulus level. The chirp was designed to compensate for cochlear travel times across frequency. As a result, when analysed in time and frequency, the pattern of the chirp-evoked emission was close to a straight line between 1.6 and 4.5 kHz, thus indicating stronger synchronisation.

Auditory evoked potentials and otoacoustic emissions (OAEs) are routinely used as tools for audiometric investigation. This project attempted to improve recording techniques by optimising the stimuli. For evoked potentials, rising broadband frequency chirps have traditionally been investigated to compensate for the dispersion along the cochlea. Here, narrowband chirp stimuli were considered in order to obtain frequency-specific responses. A classic transient-evoked otoacoustic emission is relatively widely dispersed over time: it has a chirp-like structure that starts at high frequencies and rings down to low frequencies. Chirp stimuli with the opposite-direction frequency glide were investigated in order to produce emissions recorded in the ear canal that should be more 'click-like' than the classical recordings. The results were interpreted in terms of a model for OAE generation.
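The idea of compensating for cochlear dispersion can be sketched by constructing a stimulus whose group delay cancels an assumed cochlear delay law. The power-law delay and all parameter values below are illustrative assumptions, not those used in the project:

```python
import numpy as np

fs = 44100
n = 4096
f = np.fft.rfftfreq(n, 1 / fs)

# Assumed power-law cochlear group delay (illustrative values only):
# tau(f) = c * f**(-d), i.e. low frequencies arrive later at their place.
c, d = 0.01, 0.5
tau = np.empty_like(f)
tau[1:] = c * f[1:] ** (-d)
tau[0] = tau[1]

# To make all components arrive at their cochlear place simultaneously,
# delay each component by tau.max() - tau(f): low frequencies are played
# first, which gives a rising chirp.
tau_stim = tau.max() - tau

# Build the chirp in the frequency domain: flat magnitude in the passband
# and a phase whose group delay equals tau_stim.
df = f[1] - f[0]
phase = -2 * np.pi * np.cumsum(tau_stim) * df
mag = ((f >= 100) & (f <= 10000)).astype(float)
chirp = np.fft.irfft(mag * np.exp(1j * phase), n)
chirp /= np.abs(chirp).max()                 # normalise to full scale
```

Reversing the sign of `tau_stim` yields the opposite-direction (falling) glide mentioned above for evoking more click-like emissions.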

Characterising temporal nonlinear processes in the human cochlea using otoacoustic emissions
Sarah Verhulst
Supervisors: James M. Harte and Torsten Dau

Otoacoustic emissions are measured to investigate cochlear nonlinearities in the human inner ear. Temporal suppression, which is linked to nonlinear active processes in the inner ear, was examined by use of click-evoked otoacoustic emissions. The temporal suppression effect is created when a suppressor click is presented close in time to a test click. However, the methods used to obtain temporal suppression are highly critical if one wishes to link this phenomenon to changes in cochlear compression. This project has demonstrated the existence of a phase and a magnitude component in the suppression measure, a finding that has consequences for the interpretation of this suppression. The data also show that the mechanisms underlying this suppression can work either in a compressive or an expansive state. These findings point to a temporal adaptation mechanism in the human outer hair cells. Results for four subjects show that compression as well as augmentation are subject dependent. The next stage of the project involves nonlinear time-domain modelling of the cochlea.

MSc Projects

Modelling the complex transfer function of human auditory filters
Leise Borg
Supervisors: Torsten Dau and Morten Jepsen

Many auditory masking phenomena in human listeners can be described on the basis of the processing of sound through the inner ear. This project investigated the properties of cochlear filtering, such as tuning characteristics, phase response and compressive behaviour, using perceptual masking experiments. Tonal signals and specific complex maskers, called Schroeder-phase tone complexes, were used as the stimuli. The auditory processing of these stimuli was investigated using several cochlear models and a modelling framework that qualitatively simulates masked signal thresholds. It was concluded that the curvature of the filter phase responses in a state-of-the-art cochlear model should stay negative above the centre frequency, and that the amount of compression in the model is too small compared to the compression in the 'real' system to account for the dynamic range in the thresholds of the human masking data.
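A Schroeder-phase tone complex of the kind used as masker here is a harmonic complex whose component phases follow, in one common definition, theta_n = ±(pi)n(n+1)/N. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def schroeder_complex(f0, n_harm, fs, dur, sign=+1):
    """Harmonic tone complex with Schroeder phases
    theta_n = sign * pi * n * (n + 1) / N, giving a flat temporal
    envelope and a within-period frequency sweep whose direction
    depends on the sign."""
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_harm + 1):
        theta = sign * np.pi * n * (n + 1) / n_harm
        x += np.cos(2 * np.pi * n * f0 * t + theta)
    return x / np.abs(x).max()                 # normalise to full scale

# Positive and negative Schroeder-phase maskers (illustrative values)
m_pos = schroeder_complex(100.0, 40, 44100, 0.2, sign=+1)
m_neg = schroeder_complex(100.0, 40, 44100, 0.2, sign=-1)
```

The two maskers have identical magnitude spectra but opposite phase curvature, so differences in the masking they produce reveal the curvature of the cochlear filters' phase response.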

Speech intelligibility prediction of linearly and nonlinearly processed speech in noise
Claus Christiansen
Supervisor: Torsten Dau

In this study, a speech intelligibility measure based on a model of auditory perception was developed to predict the intelligibility of linearly and nonlinearly processed speech in noise. The predicted speech intelligibility is based on the perceptual similarity between a reference signal and the processed signal. This perceptual similarity is determined by applying the auditory pre-processing to both signals and calculating the linear cross-correlation. The performance of the proposed model was compared to the classical speech intelligibility index (SII) method and the speech transmission index (STI) method. The three models were evaluated for speech in additive noise and for speech-in-noise mixtures processed by a nonlinear technique termed ideal time-frequency segregation (ITFS). In ITFS processing, a binary mask is applied to the time-frequency representation of a mixture signal, eliminating the parts of the signal that are below a local signal-to-noise-ratio threshold. All three models were adequate for predictions of speech in noise, whereas the results for the ITFS-processed mixtures were mixed. The SII method was a poor predictor and completely failed to predict the noise-vocoding ability of the binary mask. The STI method worked well for most of the binary masks, except for sparsely distributed masks, for which speech intelligibility was strongly overestimated. The proposed model was able to account for both the noise vocoding effect and the rapid drop in performance for the sparsest masks.

The project was carried out at Oticon with Michael Syskind as the local supervisor.

Schematic of the model. First, the clean and the processed speech are transformed to an internal psychological representation by an auditory model. The cross-correlation between the two representations is calculated in short frames, which are classified by their RMS level. A final score is calculated for each level by averaging the cross-correlation values of the frames of the corresponding level.
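The ITFS processing evaluated in this project (a binary mask on a time-frequency representation, keeping cells above a local SNR criterion) can be sketched as follows. The STFT settings and the criterion value are arbitrary examples, not the ones used in the study:

```python
import numpy as np

def ideal_binary_mask(speech, noise, fs, lc_db=0.0, nfft=512):
    """Ideal time-frequency segregation (sketch): keep time-frequency
    cells of the mixture whose local SNR exceeds lc_db, zero the rest.
    Uses a Hann-windowed STFT at 50% overlap (analysis window only;
    adjacent Hann windows then sum to approximately one)."""
    hop = nfft // 2
    win = np.hanning(nfft)
    n_frames = (len(speech) - nfft) // hop + 1

    def stft(x):
        return np.array([np.fft.rfft(win * x[i*hop:i*hop+nfft])
                         for i in range(n_frames)])

    S, N = stft(speech), stft(noise)
    local_snr = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
    mask = local_snr > lc_db                  # the ideal binary mask
    M = stft(speech + noise) * mask           # apply the mask to the mixture

    # Overlap-add resynthesis
    out = np.zeros(len(speech))
    for i, frame in enumerate(np.fft.irfft(M, nfft)):
        out[i*hop:i*hop+nfft] += frame
    return out, mask

# Demo: with a very low criterion every cell is kept, so the mixture passes through
fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)
noise = np.zeros_like(speech)
out, mask = ideal_binary_mask(speech, noise, fs, lc_db=-200.0)
```

Sparse masks (high criterion) leave only scattered glimpses of the target, which is the regime where the STI predictions broke down above.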

Audio-visual speech analysis
Torir Hrafn Hardarson
Supervisor: Hans-Heinrich Bothe

This work is part of a framework to design and implement a multilingual language laboratory for speech- or lip-reading. The goal of this project was to model the dynamics of mouth movements during the speech articulation process for an open vocabulary. Video films with one speaker and (relatively) emotion-free articulation were analysed, and the lip movements were modelled. The dynamic model is based on characteristic single images of the video clip that represent the phonemes and on transitions between neighbouring images. A general dynamic model employing parameters that state the importance of specific changes of the lip contours for the correct articulation of the sounds was used. The current results can predict the courses of the feature sets, with specific artifacts, when only the text input is given.

Modelling the sharpening of tuning in nonsimultaneous masking
Marton Marschall
Supervisors: Jörg Buchholz and Torsten Dau

Frequency selectivity refers to the ability of the auditory system to resolve sounds in the spectral domain. In humans, frequency selectivity can be measured behaviourally with masking experiments. Results from such experiments show greater frequency selectivity when nonsimultaneous masking techniques are used as opposed to simultaneous ones. The differences in tuning have been suggested to occur because of suppression, a nonlinear interaction between the masker and the signal whose effects are only observed when the two stimuli overlap in time. Suppression is thought to be a consequence of the compressive behaviour of the cochlea. As a result of suppression, the increased frequency selectivity in nonsimultaneous masking may enhance the perception of dynamic signals, such as speech. This project investigated the relationships between frequency selectivity, compression and suppression using a modelling approach. A computational auditory signal processing model was used to simulate two behavioural measures of frequency selectivity: psychophysical tuning curves and the notched-noise method. As an important part of the signal processing, the model included a dual-resonance nonlinear filterbank to mimic the processing in the cochlea. An 'optimised' version of the model was able to account for instantaneous changes in frequency selectivity due to suppression effects.

Illustration of a model of suppression. Suppression is seen as the difference between the case when only a tone (S) is introduced (1) and the case when a masker (M) is added (2). The bars correspond to the powers of the stimuli indicated.

Comparison of speech intelligibility measurements with Odeon model simulations
Sebastian Alex Dalgas Oakley
Supervisor: Torben Poulsen

This project compared speech intelligibility measurements made in real rooms with speech intelligibility measurements made in equivalent Odeon-simulated rooms. No such direct comparison between speech intelligibility in real rooms and in the corresponding simulated rooms had been made before. Fifteen normal-hearing test subjects participated, using the DANTALE 1 and DANTALE 2 speech materials. A small and a larger lecture room were used. Comparisons of speech transmission index values in the real rooms and the Odeon-simulated rooms were also made. Good agreement was found for most listening positions, but for positions close to the sound source the intelligibility in the real room was better than in the simulated room.

Left: individual intelligibility results in a real room (blue lines; black: mean) and in the same room simulated in Odeon (red lines; green: mean). Right: psychometric functions fitted to the data.

Effects of compression in hearing aids on the envelope of the speech signal
Justyna Walaszek
Supervisor: Torsten Dau

The influence of compression in hearing aids on speech intelligibility was investigated. Speech signals were presented together with either a speech-shaped noise background or another speech interferer. Compression in the hearing-aid signal processing affects the temporal envelope of the signals in different ways depending on parameters such as the attack and release time constants of the compressor. The effects of compression on speech intelligibility were measured in normal-hearing listeners. The measured data were compared with simulations based on (i) the amount of correlation between signal and interferer and (ii) the change of the signal-to-interferer ratio at the output of the compression system. The main results of the perception experiments were well correlated with the results of the above 'signal-based' predictors. Specifically, it was found that the effect of compression on speech intelligibility was largest in the case of fast-acting compression, which led to the worst speech intelligibility both in the measurements and in the predictions. The results could be useful for further investigation of the effects of compression in state-of-the-art multi-channel hearing aids on speech intelligibility in challenging acoustic environments.

The project was carried out at Oticon Research Centre Eriksholm with René Burmand Johannesson as the local supervisor.
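The role of the attack and release time constants can be illustrated with a minimal single-channel compressor. Threshold, ratio and time constants below are arbitrary example values, not parameters of any hearing aid studied in the project:

```python
import numpy as np

def compress(x, fs, thresh_db=-30.0, ratio=3.0, t_att=0.005, t_rel=0.050):
    """Minimal single-channel dynamic compressor (sketch).
    A level detector follows the signal magnitude with separate attack
    and release time constants; gain is reduced above the threshold
    according to a static compression ratio."""
    a_att = np.exp(-1.0 / (t_att * fs))
    a_rel = np.exp(-1.0 / (t_rel * fs))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        target = abs(s)
        coef = a_att if target > env else a_rel   # fast attack, slow release
        env = coef * env + (1.0 - coef) * target
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(level_db - thresh_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)     # static curve above threshold
        out[i] = s * 10 ** (gain_db / 20)
    return out

# Demo: a loud tone is attenuated, a quiet tone passes unchanged
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
loud = np.sin(2 * np.pi * 1000 * t)               # 0 dB FS: above threshold
quiet = 0.01 * loud                               # -40 dB FS: below threshold
out_loud = compress(loud, fs)
out_quiet = compress(quiet, fs)
```

Shortening `t_att` and `t_rel` makes the gain track the speech envelope itself, flattening its modulations; this is the fast-acting regime that produced the worst intelligibility above.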


Journal Papers

A. Rupp, N. Sieroka, A. Gutschalk and T. Dau: Representation of auditory filter phase characteristics in the cortex of human listeners. Journal of Neurophysiology 99, 2008, 1152-1162.

M.L. Jepsen, S.D. Ewert and T. Dau: A computational model of auditory signal processing and perception. Journal of the Acoustical Society of America 124, 2008, 422-438.

T. Poulsen and S.V. Legarth: Reference hearing threshold levels for short duration signals. International Journal of Audiology 47, 2008, 665-674.

T. Poulsen and H. Laitinen: Questionnaire investigation of musicians' use of hearing protectors, self-reported hearing disorders, and their experience of their working environment. International Journal of Audiology 47, 2008, 160-168.

E.R. Thompson and T. Dau: Binaural processing of modulated interaural level differences. Journal of the Acoustical Society of America 123, 2008, 1017-1029.

S. Verhulst, J.M. Harte and T. Dau: Temporal suppression and augmentation of click-evoked otoacoustic emissions. Hearing Research 246, 2008, 23-35.

Theses

G. Pigasse: Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials. Department of Electrical Engineering, Technical University of Denmark, ISBN 978-87-911-8489-5. PhD thesis, 2008.

Chapters in Books

T. Dau: Auditory processing models. Chapter 12 (pp. 175-196) in Handbook of Signal Processing in Acoustics, eds. D. Havelock, S. Kuwano and M. Vorländer. Springer Verlag, New York, 2008.

Conference Papers

H.-H. Bothe: Human computer interaction and communication aids for hearing-impaired, deaf and deaf-blind people. Proceedings of 11th International Conference ICCHP 2008 (Computers Helping People with Special Needs), Linz, Austria, 2008, Springer-Verlag, Berlin-Heidelberg, 2008, pp. 605-608.

T. Hardarson and H.-H. Bothe: A model for the dynamics of articulatory lip movements. Proceedings of International Conference on Auditory-Visual Speech Processing 2008, Tangalooma, Australia, 2008, pp. 209-214.

J.M. Buchholz and P. Kerketsos: Spectral integration effects in auditory detection of coloration. Proceedings of DAGA'08, in Fortschr. Akust., Deutsche Ges. Akust., Dresden, Germany, 2008, pp. 177-178.

I. Balazova, G. Clausen, J.H. Rindel, T. Poulsen and D.P. Wyon: Open-plan office environments: A laboratory experiment to examine the effect of office noise and temperature on human perception, comfort and office work performance. Indoor Air 2008, DTU, Lyngby, Denmark, 2008.

O. Strelcyk and T. Dau: Impaired auditory functions and degraded speech perception in noise. Proceedings of DAGA'08, in Fortschr. Akust., Deutsche Ges. Akust., Dresden, Germany, 2008, pp. 179-180.

Abstracts

J.M. Buchholz and P. Kerketsos: Across frequency processes involved in auditory detection of coloration. Journal of the Acoustical Society of America 123, 2008, p. 3867.

S. Ewert, J. Volmer, T. Dau and J. Verhey: Amplitude modulation depth discrimination in hearing-impaired and normal-hearing listeners. Journal of the Acoustical Society of America 123, 2008, p. 3859.

S. Favrot and J. Buchholz: A virtual auditory environment for investigating the auditory signal processing of realistic sounds. Journal of the Acoustical Society of America 123, 2008, p. 3835.

M. Jepsen and T. Dau: Estimating the basilar-membrane input/output-function in normal-hearing and hearing-impaired listeners using forward masking. Journal of the Acoustical Society of America 123, 2008, p. 3859.

E.R. Thompson and T. Dau: A binaural advantage in the subjective modulation transfer function with simple impulse responses. Journal of the Acoustical Society of America 123, 2008, p. 3296.

O. Strelcyk and T. Dau: Fine structure processing, frequency selectivity and speech perception in hearing impaired listeners. Journal of the Acoustical Society of America 123, 2008, p. 3712.

N.W. Adelman-Larsen and E.R. Thompson: The importance of bass clarity in pop and rock venues. Journal of the Acoustical Society of America 123, 2008, p. 3090.

S. Verhulst, J. Harte and T. Dau: Temporal suppression and augmentation of click-evoked otoacoustic emissions. Journal of the Acoustical Society of America 123, 2008, p. 3854.



The Danish system for tuition at university level prescribes close connections between research and teaching. Consequently, university scientists must devote time to teaching students at undergraduate and graduate level. As a rule, their teaching is related to their personal research activities; thus, nearly all the research projects described in chapter 1 are reflected in the basic and advanced tuition described below.

At the Technical University of Denmark, BSc, MSc and PhD degrees are awarded to successful students after nominal periods of study of three, two and three years, respectively. Undergraduate courses are scheduled to fit into the University's existing tuition modules, thus permitting the students to compose a sensible curriculum encompassing their professional interests. DTU also offers a BEng degree, and an increasing number of BEng students follow the courses on acoustics.

In 2000 DTU launched a series of two-year international MSc programmes. Together, Acoustic Technology and Hearing Systems, Speech and Communication, Department of Electrical Engineering, offer an international MSc programme in Engineering Acoustics, coordinated by Mogens Ohlrich. See www.msc.dtu.dk for information about the application procedure.

Tuition related to preparation for academic degrees (MSc and PhD) is individual, based on the expertise of an academic member of staff acting as adviser to each student, often supplemented with assistance from industry or other research institutions. Twenty-three students carried out their MSc thesis work in 2008. Supervision was given to a total of seventeen PhD students enrolled at DTU and four visiting PhD students from other institutions. These projects have been presented in chapter 1.


Acoustic Technology and Hearing Systems, Speech and Communication offer a series of courses every year. Most courses are supported by written material prepared for the purpose by staff members. Each course comprises a series of lectures and laboratory exercises, and sometimes excursions. The courses are listed and briefly described below. Detailed descriptions of all the courses on acoustics are available on the internet (http://www.elektro.dtu.dk/English/research/at/at_courses.aspx). 1

In addition, 'special courses' may be given to individuals or small groups of students.


1 Enquiries from students interested in the courses are encouraged. It can be recommended to contact the relevant teacher directly (email addresses are available at http://www.elektro.dtu.dk/English/research/at/staff.aspx). Rules for admission are available at http://www.elektro.dtu.dk/English/education/master_programmes/engineering_acoustics/admission_requirements.aspx. A completed Guest Student Form should be sent to DTU's International Affairs:
International Affairs, Technical University of Denmark, Building 101A, DK-2800 Kgs. Lyngby, Denmark
http://www.dtu.dk/English/education/International_Affairs.aspx
Tel.: +45 4525 1023
Fax: +45 4587 0216
E-mail: intcouns@adm.dtu.dk

Fundamentals of Acoustics and Noise Control (Finn Jacobsen)
The purpose of this course is to introduce the students to fundamental acoustic concepts (e.g., sound pressure and sound power), simple sound fields, wave phenomena such as reflection and interference, acoustic measurements, fundamental properties of our hearing, room acoustics, building acoustics, and structureborne sound, and thus to give the necessary background for the more advanced and specialised courses in acoustics. There is a single laboratory exercise.
ECTS credit points: 5.

Building Acoustics (Jonas Brunskog and Cheol-Ho Jeong)
The goal of this introductory course, specially designed for students with a background in civil engineering, is to provide the participants with knowledge about the effects of noise on human beings, the principles of noise control in buildings and in the environment, and the acoustic requirements that must be fulfilled in a typical building design project.
ECTS credit points: 5.

Environmental Acoustics (Torben Poulsen)
This 3-week course is intended for students with an interest in sound as an environmental factor who wish to be able to solve noise problems as they appear in traffic, in industry, and under domestic conditions. Course work includes noise predictions and field measurements. From 2009 the course will be given by Cheol-Ho Jeong and Jonas Brunskog.
ECTS credit points: 5.

Signals and Linear Systems in Discrete Time (Hans-Heinrich Bothe)
This course provides fundamental knowledge of digital signal processing, including classical and adaptive system analysis and filter design techniques. It is elective for students of electrical engineering or acoustics in the fifth semester of the bachelor study.
ECTS credit points: 5.


Acoustic Communication (Torben Poulsen, Jörg Buchholz and Thomas Ulrich Christiansen)
The purpose of this course is to provide the students with fundamental knowledge about the elements of acoustic communication. The course comprises topics such as hearing (anatomy, physiology), hearing loss, perception of sound (psychoacoustics), speech, and speech intelligibility. The aim is to provide knowledge and comprehension of these topics and their mutual relations for both normal-hearing and hearing-impaired persons. Special psychoacoustic measurement methods are applied in five laboratory exercises. Group work and report writing are part of the course objectives.
ECTS credit points: 10.

Technical Audiology (James Harte, Torsten Dau and Torben Poulsen)
The purpose of this 3-week course is to train the students in experimental methods used in hearing research and audiology. A further goal is to train the participants in the oral presentation of scientific material and in producing, presenting and discussing a poster.
ECTS credit points: 5.

Auditory Signal Processing and Perception (Torsten Dau)
The purpose of this course is to obtain an understanding of the processing mechanisms in the auditory system and their perceptual consequences; to learn about functional relationships between the physical attributes of sound and their associated percepts using a systems approach; to study sensory and brain processing and their locations using objective methods such as auditory evoked potentials; and to learn about clinical and technical applications by applying auditory-model-based processing techniques.
ECTS credit points: 10.


Advanced Topics in Hearing Research (Torsten Dau, Jörg Buchholz, James Harte, Thomas Christiansen, Hans-Heinrich Bothe)
The purpose of this PhD course is to develop the students’ self-learning skills and to increase awareness of the recent literature within their own research areas and related topics. The course also attempts to improve the students’ scientific communication skills and to promote timely discussions of current topics in the field of auditory and audio-visual signal processing and speech processing and perception.

From Biology to Technical Neural Systems (Hans-Heinrich Bothe)
This course gives an introduction to the signal processing of biological systems and to biologically inspired models of neurons, muscle fibres, and the functionality of neural networks. The topics include neuroprostheses, brain-computer interfaces, and electronic technologies for artificial hearing and vision.
ECTS credit points: 10.

Architectural Acoustics (Jonas Brunskog and Cheol-Ho Jeong)
The purpose of this course is to give students a profound knowledge of the theories and methods of room acoustics and sound insulation. The topics include reflection and absorption of sound, design of absorbers and of rooms for speech and music, structureborne sound and sound insulation of building constructions, building codes, and test methods.
ECTS credit points: 10.

Electroacoustic Transducers and Systems (Finn Agerkvist)
This course provides the student with knowledge of the fundamentals of acoustic transducers. Analogies between mechanical, acoustic and electrical systems are considered and applied to loudspeakers, microphones and communication systems. The student will also gain insight into the fundamental principles of sound recording and reproduction, as well as audio coding techniques. Laboratory exercises are a substantial part of the course.
ECTS credit points: 10.

Advanced Loudspeaker Models (Finn Agerkvist)
This 3-week course introduces advanced elements of loudspeaker models in order to improve the validity of the models at high frequencies and/or high levels. The main part of the course is concerned with measurement and modelling of the dominant sources of distortion in the loudspeaker, and with methods for compensating for the distortion. The course also deals with micro-loudspeakers as used in mobile phones and headsets. The final element of the course is the modelling of ‘break-up’ vibrations in the loudspeaker diaphragm by means of finite element calculations.
ECTS credit points: 5.

Advanced Acoustics (Finn Jacobsen)
This course is intended to give the student an insight into the fundamental methods of theoretical acoustics. The topics include sound fields in ducts, modelling of silencers, the Green’s function, radiation of sound, classical room acoustic modal analysis, statistical room acoustics, numerical methods of sound field calculation (the finite element method and the boundary element method), fundamentals of active noise control, outdoor sound propagation, and sound intensity and other advanced acoustic measurement techniques such as beamforming and near field acoustic holography. Laboratory exercises and simulation studies are a substantial part of the course.
ECTS credit points: 10.

Sound and Vibration (Mogens Ohlrich, Jonas Brunskog and Cheol-Ho Jeong)
By introducing the principles and laws governing the generation, transmission and radiation of structureborne sound, the course will enable the student to analyse technical noise and vibration problems, design practical measures for the reduction of structureborne noise in machines, vehicles and buildings, and apply advanced measurement techniques in the field. Mini-projects, laboratory exercises and reports are a substantial part of the course.
ECTS credit points: 10.

LECTURE NOTES ISSUED IN 2008
M. Ohlrich: Structure-borne sound and vibration: Introduction to vibration and waves in solid structures at audible frequencies, Part 1a. Acoustic Technology, Department of Electrical Engineering, Technical University of Denmark, Note no. 7016, 5th print, 2008. (80 pp.)


Appendix A: Extramural Appointments

Finn Agerkvist
Member of the Board of the Danish Acoustical Society, as well as of its Technical Committee on Electroacoustics

Salvador Barrera-Figueroa
Contact person of EURAMET’s Technical Committee of Acoustics, Ultrasound and Vibration
Member (via DFM) of CCAUV, Consultative Committee for Acoustics, Ultrasound and Vibration (of the BIPM)

Hans-Heinrich Bothe
Member of the International Academic Advisory Council, Natural and Artificial Intelligence Systems Organization (NAISO), Canada
Member of the Editorial Board of the International Journal of Computer Research, NOVA Science Publishers, Commack, NY, USA
Senior member of the Institute of Electrical and Electronics Engineers (IEEE)

Torsten Dau
Member of the Technical Committee for ‘Hörakustik’ in the German Acoustical Society (DEGA)
Member of the Scientific Advisory Board of the ‘Hanse Institute for Advanced Study’ in Germany
Member of the board of the Danavox Jubilee Foundation (organiser of the ISAAR symposia, International Symposium on Auditory and Audiological Research)
Editor of the EURASIP Journal on Advances in Signal Processing (Special Issue on Digital Signal Processing for Hearing Instruments)

Anders Christian Gade
Member of the Board of the Rockwool Prize

Finn Jacobsen
Member of the Editorial Board of the International Journal of Acoustics and Vibration
Section leader of the Handbook of Signal Processing in Acoustics, Springer Verlag, ed. by D. Havelock, S. Kuwano and M. Vorländer
Member of the Scientific Committee for the Fifteenth International Congress on Sound and Vibration, Daejeon, Korea, July 2008
Member of the Scientific Committee for the Sixteenth International Congress on Sound and Vibration, Krakow, Poland, July 2009
Member of the Board of Directors of the International Institute of Acoustics and Vibration
Reviewer of a proposal for the Czech Science Foundation
Midterm reviewer of an EU project

Mogens Ohlrich
Member of ISO/TC 43/SC 1/WG 22, ‘Characterisation of machines as sources of structure-borne sound’

Torben Poulsen
Member of ISO/TC 43/WG 1, ‘Threshold of Hearing’
Convener of ISO/TC 43/SC 1/WG 17, ‘Methods of measurement of sound attenuation of hearing protectors’
Chairman of the board of the Danavox Jubilee Foundation (organiser of the ISAAR symposia, International Symposium on Auditory and Audiological Research)


Knud Rasmussen
Chairman of IEC Technical Committee 29, ‘Electroacoustics’
Convener of IEC TC29/WG5, ‘Measurement microphones’
Member (via DFM) of CCAUV, Consultative Committee for Acoustics, Ultrasound and Vibration (of the BIPM)
Contact person of EURAMET’s Sub-Technical Committee ‘Acoustics’
Technical Manager of DPLA, the Danish Primary Laboratory of Acoustics
Chairman of S529, ‘Electroacoustics’, Danish Standards Association
Technical assessor for the United Kingdom Accreditation Service (UKAS)
Technical assessor for the Swedish Board for Accreditation (SWEDAC)
Technical assessor for the National Association of Testing Authorities (NATA), Australia
Life member of the Institute of Electrical and Electronics Engineers (IEEE)


Appendix B: Principal Intramural Appointments

Torsten Dau
Head of Hearing Systems, Speech and Communication, one of the eight groups at the Department of Electrical Engineering (from 1 August 2008), and head of CAHR
Member of DTU’s PhD Programme Committee on Electronics, Communication and Space Research

Finn Jacobsen
Head of Acoustic Technology, one of the eight groups at the Department of Electrical Engineering (from 1 August 2008)

Mogens Ohlrich
Head of Acoustic Technology, one of the sections of the Department of Electrical Engineering (until 1 August 2008)
Coordinator of the International MSc Programme in Engineering Acoustics

Torben Poulsen
Coordinator of pedagogical education at the Department of Electrical Engineering
