
Research and development – 2016

SIGNAL MODELS

SDIF FORMAT

Sound Description Interchange Format

—

Teams Involved: Sound Analysis & Synthesis, Sound Music Movement Interaction, Musical Representations

This file format standard, platform-independent, extensible, and freely accessible, specifies very precisely the types of audio-signal description data and their representation. Because inputs and outputs conform to the same standard, different pieces of software can exchange data directly. The standard also makes data files easier to maintain, thanks to ancillary information carried in the file, and allows heterogeneous pieces of data to coexist in a single file. A library of C read/write functions and related applications has been developed and released as open source (http://sdif.sourceforge.net).
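As an illustration only, a frame header of this kind can be sketched in Python with the `struct` module. The field order and sizes below (4-character type signature, frame size, float64 time tag, stream ID, matrix count) follow the commonly documented SDIF frame layout, but the normative definition is the one in the specification at sdif.sourceforge.net; the `1TRC` signature and all values are placeholders.

```python
import struct

# Big-endian frame header: 4-char signature, int32 frame size,
# float64 time tag, int32 stream ID, int32 matrix count.
# Illustrative sketch only, not the official SDIF library.
FRAME_HEADER = ">4sidii"  # 24 bytes

def pack_frame_header(signature, size, time, stream_id, n_matrices):
    """Serialize one frame header."""
    return struct.pack(FRAME_HEADER, signature.encode("ascii"),
                       size, time, stream_id, n_matrices)

def unpack_frame_header(buf):
    """Parse a frame header back into Python values."""
    sig, size, time, stream_id, n = struct.unpack(FRAME_HEADER, buf)
    return sig.decode("ascii"), size, time, stream_id, n

# Round trip: what one program writes, another can read back.
hdr = pack_frame_header("1TRC", 64, 0.25, 0, 1)
fields = unpack_frame_header(hdr)
```

The round trip is the point of the format: any two programs agreeing on this layout can exchange description data without further coordination.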

PROCESSING BY PHASE VOCODER

—

Team Involved: Sound Analysis & Synthesis

The phase vocoder, one of the most effective techniques for the analysis and transformation of sounds, is the foundation of the SuperVP software program. With the phase vocoder it is possible to transpose, stretch, or shorten sounds, and to apply a practically unlimited number of filters to them. The sound quality of the transformed signals is extremely high, even when applied to speech. Numerous improvements and extensions have been introduced, for example:

• Reassigned spectrum

• Estimation of the spectral envelope via the ‘true envelope’ method

• Transposition with preservation of the spectral envelope

• Transposition with the ‘shape invariant’ model

• Generalized cross-synthesis, enabling the synthesis of hybrid sounds

• Several methods for estimating the fundamental frequency (pitch) of a signal

• Classification of spectral peaks as sinusoidal (voiced sounds) or non-sinusoidal (unvoiced sounds or noises)

• Segmentation of the time/frequency plane into transitory and non-transitory regions, and the increase or decrease of transitory sections

• Separate processing of the sinusoidal, non-sinusoidal, and transitory time/frequency zones

• The LF model of the glottal source, making it possible to transform a voice, etc.
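The foundation of all these transformations is the phase vocoder's analysis step: the phase advance of a spectral bin between two overlapping frames reveals the signal's instantaneous frequency far more precisely than the bin spacing alone. A minimal sketch, using a plain-Python DFT, a rectangular window, and illustrative parameters throughout:

```python
import cmath
import math

def dft(frame):
    """Naive DFT, adequate for a small illustrative frame."""
    N = len(frame)
    return [sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

N, hop, sr = 64, 16, 8000   # frame length, hop size, sample rate (Hz)
true_f = 505.0              # deliberately between two bins (bin width 125 Hz)
sig = [math.sin(2 * math.pi * true_f * n / sr) for n in range(N + hop)]

X0 = dft(sig[:N])           # two successive analysis frames
X1 = dft(sig[hop:hop + N])

k = max(range(1, N // 2), key=lambda i: abs(X0[i]))   # loudest bin

# Phase advance between the frames, minus the advance a signal exactly
# on bin k would produce, unwrapped to (-pi, pi].
dphi = cmath.phase(X1[k]) - cmath.phase(X0[k]) - 2 * math.pi * k * hop / N
dphi -= 2 * math.pi * round(dphi / (2 * math.pi))

# Instantaneous frequency: bin frequency plus the measured deviation.
inst_f = (k + dphi * N / (2 * math.pi * hop)) * sr / N
```

This per-bin frequency estimate is what makes transposition and time stretching possible: synthesis frames are written with a different hop size while each bin's phase is advanced according to its measured frequency.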

These different modules of analysis, synthesis, and processing are used in several software programs on the market today.
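Among the extensions listed above, fundamental frequency estimation is the easiest to illustrate. The sketch below uses plain autocorrelation, which is only one of many possible methods (and not necessarily the one SuperVP uses); the signal and search range are illustrative:

```python
import math

sr = 8000                    # sample rate (Hz)
f0_true = 200.0              # a 40-sample period at this rate
sig = [math.sin(2 * math.pi * f0_true * n / sr) for n in range(800)]

def autocorr_f0(x, sr, fmin=80.0, fmax=400.0):
    """Estimate f0 as the lag maximizing the autocorrelation
    within the plausible period range [sr/fmax, sr/fmin]."""
    lags = range(int(sr / fmax), int(sr / fmin) + 1)
    def ac(lag):
        return sum(x[n] * x[n + lag] for n in range(len(x) - lag))
    return sr / max(lags, key=ac)

f0_est = autocorr_f0(sig, sr)
```

A periodic signal correlates strongly with itself shifted by one period, so the best lag directly gives the period, and its inverse the pitch.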

CORPUS-BASED CONCATENATIVE SYNTHESIS

—

Team Involved: Sound Music Movement Interaction

Corpus-based concatenative synthesis uses a database of recorded sounds and a unit-selection algorithm that chooses the segments from the database that best suit the musical sequence to be synthesized by concatenation. The selection is based on characteristics of the recordings obtained through signal analysis, such as pitch, energy, or spectrum. Conventional methods for musical synthesis are based on a model of the sound signal, but it is very difficult to build a model that preserves all the detail and subtlety of a sound. Concatenative synthesis, because it uses real recordings, preserves these details.
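The selection step can be sketched as a weighted nearest-neighbour search over per-unit descriptors. The file names, descriptor names, and weights below are all hypothetical; real systems use much richer descriptor sets and also penalize poor joins between successive units.

```python
import math

# Hypothetical corpus: segments of recordings with pre-analysed descriptors.
corpus = [
    {"file": "flute.wav", "start": 0.00, "pitch": 440.0, "energy": 0.5},
    {"file": "flute.wav", "start": 0.25, "pitch": 494.0, "energy": 0.4},
    {"file": "cello.wav", "start": 1.10, "pitch": 220.0, "energy": 0.8},
    {"file": "voice.wav", "start": 2.30, "pitch": 330.0, "energy": 0.6},
]

def select(target, weights):
    """Return the unit whose descriptors are closest to the target
    under a weighted Euclidean distance."""
    def dist(unit):
        return math.sqrt(sum(w * (unit[d] - target[d]) ** 2
                             for d, w in weights.items()))
    return min(corpus, key=dist)

# A short target pitch sequence, realized by concatenating the
# best-matching units from the corpus.
targets = [{"pitch": p, "energy": 0.5} for p in (445.0, 330.0, 225.0)]
chosen = [select(t, {"pitch": 1.0, "energy": 0.1}) for t in targets]
```

The weights let the musician decide which descriptors matter most for a given sequence; here pitch dominates and energy only breaks ties.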

Implementing this new approach to corpus-based concatenative synthesis in real time enables interactive exploration of a sound database and granular composition targeting specific sound characteristics. It also lets composers and musicians reach new sounds.

This principle is implemented in the CataRT system, which displays a 2D projection of the descriptor space that can be browsed with a mouse or external controllers. Grains are then selected from the original recordings and played back by geometric proximity, by metronome, in loops, or continuously. It is also possible to define a perimeter around the current position that selects a sub-group of grains, which are then played randomly. CataRT is used for musical composition, performance, and various sound installations.
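The perimeter-based selection described above can be sketched as a radius query in the 2D projection. The grain positions, descriptor axes, and cursor below are illustrative, not CataRT's actual data structures:

```python
import math
import random

# Illustrative grains laid out on a grid in a normalized 2D descriptor
# projection (e.g. x = spectral centroid, y = loudness).
grains = [(i / 20, j / 20) for i in range(21) for j in range(21)]

def grains_in_perimeter(cursor, radius):
    """Sub-group of grains inside the circle around the cursor."""
    cx, cy = cursor
    return [(x, y) for x, y in grains
            if math.hypot(x - cx, y - cy) <= radius]

nearby = grains_in_perimeter((0.5, 0.5), 0.12)
trigger = random.choice(nearby)  # one grain of the sub-group, played at random
```

Moving the cursor re-runs the query, so the audible texture follows the browsed region of the descriptor space.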

As this field of research is fairly young, it raises several interesting research questions, now and for the future, concerning the analysis and exploitation of the information in a corpus, its visualization, and real-time interaction.

Visualization system used for corpus-based sound synthesis

