
Timbre

Like an amplitude envelope, a pitch envelope is a feature that takes the human auditory system some amount of time to integrate and identify. The FrequencyTracker uses an algorithm called autocorrelation, which, as its name implies, is a measure of how correlated a signal is with itself at different delay times. Imagine that textbook sine wave again. If you were to delay that sine wave by one cycle and then compare it against itself with no delay, you would have to say that the two signals were extremely well correlated, if not identical. But if you compared them after a delay time of, say, one-tenth of one cycle, they would not be as correlated. The FrequencyTracker compares the signal against itself at several different delay times and picks the one delay time at which the original and delayed waveforms are the most similar. It reasons that this delay is probably the period of one cycle of the waveform, so it can then make a guess as to the frequency of the signal.

As you can tell from this description, frequency tracking is not a foolproof or easy task, so the more clues you can give the FrequencyTracker, the better. If you have some idea of the frequency range of the input, you should enter that in the MinFrequency and MaxFrequency fields of the FrequencyTracker. The more you can narrow in on the range, the better the FrequencyTracker will do.
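
If you want to experiment with the idea outside Kyma, here is a minimal Python sketch of autocorrelation-based pitch estimation. It is only an illustration of the principle, not the FrequencyTracker's actual implementation; the min_frequency and max_frequency arguments play the same narrowing role as the MinFrequency and MaxFrequency fields, by limiting the range of candidate delays.

    import numpy as np

    def estimate_frequency(block, sample_rate, min_frequency=50.0, max_frequency=1000.0):
        # Candidate delays (in samples) implied by the frequency bounds:
        # a higher max_frequency means a shorter minimum delay, and vice versa.
        min_lag = int(sample_rate / max_frequency)
        max_lag = int(sample_rate / min_frequency)
        best_lag, best_corr = min_lag, -np.inf
        for lag in range(min_lag, max_lag + 1):
            # Correlate the block against a copy of itself delayed by 'lag' samples.
            corr = np.dot(block[:-lag], block[lag:])
            if corr > best_corr:
                best_lag, best_corr = lag, corr
        # Assume the best delay is one period of the waveform.
        return sample_rate / best_lag

    # Example: a 220 Hz sine should be tracked to within a fraction of a Hz.
    sr = 44100
    t = np.arange(2048) / sr
    print(estimate_frequency(np.sin(2 * np.pi * 220 * t), sr))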

Try playing the Sound called oscil tracks voice, and then open it up to see how the AmplitudeFollower and the FrequencyTracker are used to control an oscillator. Notice that there is a Gain on the AmplitudeFollower (because it tends to have low-amplitude output), and notice that the Frequency field of the Oscillator is set to:

FreqTrk L * SignalProcessor sampleRate * 0.5 hz

This expression scales the (0,1) output of the FrequencyTracker to the range of DC to one half of the current sample rate.
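
To make the arithmetic concrete, here is the same scaling written out in Python; the 44.1 kHz sample rate and the tracker value of 0.01 are assumed example numbers, not values taken from the Sound.

    sample_rate = 44100.0   # assumed sample rate (SignalProcessor sampleRate in the expression)
    freq_trk = 0.01         # example FrequencyTracker output, somewhere in its (0,1) range
    frequency_hz = freq_trk * sample_rate * 0.5
    print(frequency_hz)     # 220.5, i.e. 0.01 of the way from DC up to half the sample rate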

Try using your own voice as a source. If the FrequencyTracker has trouble tracking your voice, adjust the MinFrequency and MaxFrequency to more closely match the range of your speaking or singing voice.

In mouth-controlled filter, the amplitude envelope of the input controls the center frequency of a low pass filter, and the frequency of the input controls the frequency of a sawtooth oscillator that is fed into the filter. Listen to the default source, and then switch it over to Live, so you can try controlling it with your voice. Notice that this Sound uses a PeakDetector to follow the amplitude envelope of the source. Double-click the PeakDetector to look at its parameters. You can think of the PeakDetector as having two time constants: one for reacting to increases in the amplitude of the input, and the other for reacting to decreases in the amplitude. You can create, for example, an amplitude envelope that jumps immediately upward in response to attacks or onsets in the input, but that decays slowly when the input amplitude goes to zero.
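
In other environments this kind of detector is often built as a one-pole follower with separate attack and release coefficients. The Python sketch below illustrates that idea; the 5 ms attack and 200 ms release times are arbitrary example values, and this is not the PeakDetector's actual code.

    import numpy as np

    def peak_detect(signal, sample_rate, attack_time=0.005, release_time=0.200):
        # One-pole smoothing coefficients derived from the two time constants.
        attack_coeff = np.exp(-1.0 / (attack_time * sample_rate))
        release_coeff = np.exp(-1.0 / (release_time * sample_rate))
        env = np.zeros(len(signal))
        level = 0.0
        for i, x in enumerate(np.abs(signal)):
            # Rise quickly toward the rectified input, fall back slowly.
            coeff = attack_coeff if x > level else release_coeff
            level = coeff * level + (1.0 - coeff) * x
            env[i] = level
        return env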

A PeakDetector is also used in amplitude filter, LFO on attack. Try playing this Sound. It is set up as a drum loop processed by a low pass filter. The amplitude envelope of the drum loop controls the cutoff frequency of the low pass filter. Notice that the attack time of the PeakDetector is modulated, in this case, by a low frequency oscillator. You could, alternatively, choose two different sources for this Sound: one as input to the filter and another to control the cutoff frequency of the filter. Look at the parameters of the two GenericSources in this Sound. One is set to read the right channel and the other is set to read the left channel of the source, so you could use the two input channels of the Capybara for two different audio signal inputs to this Sound, listening to one through the filter and using the other signal to control the cutoff of that filter.
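
The same routing can be sketched outside Kyma. The Python fragment below (an illustration only, not how the Sound itself is implemented) treats one channel as the audio path and follows the other channel's amplitude to set the cutoff of a simple one-pole low pass filter; the cutoff range and the crude fast-attack, slow-release follower are assumptions made for the example.

    import numpy as np

    def control_filter(audio, control, sample_rate, min_cutoff=200.0, max_cutoff=4000.0):
        # Follow the amplitude of the control channel (instant attack, slow release),
        # then use that envelope to move the cutoff of a one-pole low pass on the audio channel.
        env, state = 0.0, 0.0
        out = np.zeros(len(audio))
        for i in range(len(audio)):
            env = max(abs(control[i]), 0.999 * env)
            cutoff = min_cutoff + env * (max_cutoff - min_cutoff)
            g = 1.0 - np.exp(-2.0 * np.pi * cutoff / sample_rate)
            state += g * (audio[i] - state)
            out[i] = state
        return out

    # A stereo input could be split so that stereo[:, 0] feeds 'audio' and stereo[:, 1]
    # feeds 'control', mirroring the left/right GenericSource split described above.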

So far, we have looked at modules for tracking the amplitude and the frequency of the signal. What other parameters does an audio signal have? Some books actually define timbre as those characteristics of a signal that are not pitch or loudness — sort of a “none of the above” definition. A slightly more useful (though still insufficient) definition might be: the spectrum, or the time-varying strengths of upper partials relative to the fundamental. Admittedly this is still vague, but it gives us something to work with.

We know we want to somehow differentiate between the “highs”, the “mid-range” and the “lows”, or at least to monitor them independently of one another. So the first thing that springs to mind is filters. We can use filters to separate a single signal into several frequency bands.
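
As a rough illustration of that last point, the Python sketch below splits a signal into three bands with standard Butterworth filters and reports each band's RMS level; the 300 Hz and 3 kHz crossover points are arbitrary example values, not anything prescribed by Kyma.

    import numpy as np
    from scipy.signal import butter, lfilter

    def band_levels(signal, sample_rate, low_edge=300.0, high_edge=3000.0):
        nyquist = sample_rate / 2.0
        # Design three filters: lows below low_edge, mids in between, highs above high_edge.
        b_lo, a_lo = butter(4, low_edge / nyquist, btype='low')
        b_mid, a_mid = butter(4, [low_edge / nyquist, high_edge / nyquist], btype='band')
        b_hi, a_hi = butter(4, high_edge / nyquist, btype='high')
        bands = [lfilter(b, a, signal)
                 for b, a in ((b_lo, a_lo), (b_mid, a_mid), (b_hi, a_hi))]
        # Report the RMS level of each band as a crude spectral "fingerprint".
        return [float(np.sqrt(np.mean(band ** 2))) for band in bands]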

