
It looks very much like a series of impulse responses (as we saw in the earlier tutorial on filtering). The FrequencyScale works best on sounds that can be modeled as a source of impulses hitting a resonant filter, for example the trombone (flapping lips generate an impulse that is fed into the filter of the body of the instrument) or the human voice (glottal pulses passing through the resonant cavity of the mouth and lips). The FrequencyScaler looks for what it thinks are individual impulse responses in the waveform and creates little grains by putting an amplitude envelope (shaped like the Window function) on each one. Then, to lower the frequency, it stretches these grains further apart so they occur less often. And to raise the frequency, it pushes the grains closer together so that they occur at a faster rate. This is the same as decreasing or increasing the rate of the impulses without changing the impulse response of the filter. If the impulse-into-filter model is a good match for the way the sound was originally produced, then the FrequencyScaler will do a good job of changing the frequency without changing the duration or moving the formants around. Why? Because the formant structure is inherent in the filter or the resonator and can be inferred from the way it responds to a single impulse. Since this method does not change the shape of the impulse response, it does not change the formants.
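
The grain-respacing idea can be sketched outside Kyma. The following Python fragment is only a conceptual illustration, not the FrequencyScale's actual algorithm; it assumes the impulse positions and a current period estimate (in samples) are already known, which is the information the frequency tracking described below provides.

```python
import numpy as np

def respace_grains(signal, impulse_positions, period_samples, freq_ratio):
    """Window a grain around each detected impulse and overlap-add the
    grains at a new spacing: freq_ratio > 1 packs them closer together
    (raising the pitch), freq_ratio < 1 spreads them further apart
    (lowering it). The grains themselves, and hence the formants, are
    left unchanged."""
    grain_len = 2 * period_samples
    window = np.hanning(grain_len)               # the amplitude envelope on each grain
    new_period = int(round(period_samples / freq_ratio))
    out = np.zeros(len(signal) + grain_len)

    for k, pos in enumerate(impulse_positions):
        grain = np.asarray(signal[pos:pos + grain_len], dtype=float)
        start = k * new_period                   # grains placed at the new, respaced rate
        if len(grain) < grain_len or start + grain_len > len(out):
            break
        out[start:start + grain_len] += grain * window
    return out[:len(signal)]
```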

<strong>The</strong> FreqTracker input is used to get an idea of where to find the impulses in the Input. <strong>The</strong> Delay is<br />

to compensate <strong>for</strong> the delay through the FrequencyTracker; it delays the Input by the same delay introduced<br />

by the FrequencyTracker, so that the frequency estimate lines up with the current Input. <strong>The</strong><br />

minimum delay you should use is 256 samp and the maximum is 20 ms. Within those boundaries you<br />

should use something close to the period of the lowest frequency you expect to see in the input. Remember<br />

that to convert a frequency to a period, type the frequency in hertz followed by the word “inverse”,<br />

<strong>for</strong> example<br />

4 d sharp hz inverse<br />
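
To make the conversion concrete, here is the same arithmetic worked out in plain Python (the 44.1 kHz sample rate is an assumption, not something specified by the Sound):

```python
SAMPLE_RATE = 44100.0

freq_hz = 311.13                          # 4 d sharp (D#4) is roughly 311.13 Hz
period_s = 1.0 / freq_hz                  # "inverse": about 0.00321 s
period_ms = period_s * 1000.0             # about 3.21 ms
period_samples = period_s * SAMPLE_RATE   # about 142 samples

# The suggested bounds for the Delay parameter:
min_delay_samples = 256                   # about 5.8 ms at 44.1 kHz
max_delay_samples = 0.020 * SAMPLE_RATE   # 20 ms, about 882 samples

# If the computed period falls outside those bounds, stay within them.
delay_samples = min(max(period_samples, min_delay_samples), max_delay_samples)
print(period_ms, delay_samples)           # about 3.21 ms; delay held at the 256-sample minimum
```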

<strong>The</strong> quality of the frequency scaled sound depends on the quality of the frequency estimate, so it is important<br />

to start with the best possible frequency estimate. Double-click the FrequencyTracker to see its<br />

parameters. <strong>The</strong> most important parameters to adjust are MinFrequency (the lowest frequency you expect<br />

to see in the Input) and MaxFrequency (the highest frequency). <strong>The</strong> narrower you can make this<br />

range, the better your frequency tracking will be. In this case, we know that the original frequency of the<br />

single trombone tone was 4 d sharp, so the range of 4 d to 4 f is pretty safe!<br />
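
As a quick sanity check on that range, the equal-temperament values (standard tuning, not taken from the Sound itself) bracket the known pitch tightly:

```python
min_frequency = 293.66   # 4 d (D4)
max_frequency = 349.23   # 4 f (F4)
tracked_pitch = 311.13   # 4 d sharp (D#4), the original trombone tone

assert min_frequency < tracked_pitch < max_frequency
# The whole range is only a minor third wide, which is why the
# tracking stays reliable.
```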

All of the other parameters are quite nonlinear (tiny changes to them can destroy the frequency tracking), so it is recommended that you do not change them. You can experiment with more or fewer Detectors, with the caveat that more is not necessarily better in terms of the effect on the tracking.

This technique has one more advantage (beyond the fact that it does not affect the duration or the formant structure): it can work in real time on live input. For example, play the Sound called scale w/Frequency and try various settings of the !Frequency fader. If you put !Frequency low enough, you will hear the individual grains. Play it again, this time choosing live input in the GenericSource dialog, and try frequency scaling your own voice in real time. You may have to edit the Sound to adjust the MinFrequency and MaxFrequency parameters of the FrequencyTracker to more closely match your own range.

Wavetable Synthesis

If you model a sound as oscillators reading from wavetables, then you can select durations and frequencies independently of one another; frequency is controlled by the size of the increment you use in stepping through the wavetable, and duration is simply how long the oscillator is left on.
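
This independence is easy to see in a bare-bones wavetable oscillator. The Python sketch below is a generic illustration of the technique, not Kyma's implementation; the 44.1 kHz rate and nearest-sample lookup are simplifying assumptions.

```python
import numpy as np

def wavetable_oscillator(wavetable, freq_hz, duration_s, sample_rate=44100.0):
    """Read through a single-cycle wavetable: the phase increment sets the
    frequency, and the number of samples generated sets the duration."""
    table_size = len(wavetable)
    n_samples = int(duration_s * sample_rate)        # duration: how long the oscillator runs
    increment = freq_hz * table_size / sample_rate   # frequency: step size through the table
    phase = (np.arange(n_samples) * increment) % table_size
    return wavetable[phase.astype(int)]              # nearest-sample lookup (no interpolation)

# Example: one second of a 311.13 Hz (D#4) tone from a single-cycle sine table.
table = np.sin(2 * np.pi * np.arange(4096) / 4096)
tone = wavetable_oscillator(table, 311.13, 1.0)
```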

For example, wavetable frequency scale uses a GA resynthesis of three different trombone tones mapped to two different ranges of keys. The lower range has 2 c at its low end, 3 c sharp at its high end, and fills in the intermediate timbres by continuously morphing between those two endpoints. The higher range has 3 c sharp as its low endpoint, 4 d sharp as its high end and, similarly, morphs between the two to get the intermediate waveforms and envelopes.
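
One simple way to picture the morphing (a rough sketch only, with placeholder wavetables rather than the actual GA analysis data) is a cross-fade between the range's two endpoint tables, weighted by where the played key falls within the range:

```python
import numpy as np

def morph_wavetable(low_table, high_table, key, low_key, high_key):
    """Linearly interpolate between a range's endpoint wavetables:
    key at low_key gives low_table, key at high_key gives high_table."""
    t = (key - low_key) / float(high_key - low_key)   # 0 at the low end, 1 at the high end
    t = min(max(t, 0.0), 1.0)
    return (1.0 - t) * low_table + t * high_table

# Example for the lower range: 2 c and 3 c sharp are MIDI 36 and 49 if the
# usual "4 c = middle C = 60" numbering is assumed; a key halfway between
# them gets an evenly mixed table.
low_table = np.sin(2 * np.pi * np.arange(4096) / 4096)   # stand-in endpoint tables,
high_table = np.sign(low_table)                           # not the real GA waveforms
mid = morph_wavetable(low_table, high_table, 42, 36, 49)
```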

See Wavetable Synthesis on page 185 for how to create a GA analysis/resynthesis from the Tools menu.

