The Kyma Language for Sound Design, Version 4.5

Classic Vocoder

As a final step, imagine adding the outputs from all three filters together using a mixer:

[Figure: block diagram of the three-band vocoder described above. The Source and a Noise+Buzz input each pass through a bank of three filters (LPF, BPF, HPF); an amplitude follower on each Source band controls the level of the corresponding Noise+Buzz band, and the three bands are mixed to the output.]

Now play Traditional vocoder-22 bands, which is pretty much what we have just described, except that there are 22 pairs of bandpass filters rather than just the three shown above.

Use !Frequency to change the frequency of the buzz oscillator. !Noise crossfades between the noise and buzz oscillator inputs: the signal is pure noise when the fader is at the top and pure buzz oscillator when it is at zero.

!TimeConst is the same as the TimeConstant in the AmplitudeFollower of the previous example. It controls how quickly the 22 amplitude followers respond to changes in the filters' outputs. Higher values give a reverberated quality, while very small values make the speech more intelligible.
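To see why the time constant trades smoothness against responsiveness, here is a sketch of one conventional mapping from a time constant in seconds to a one-pole follower coefficient. This is the standard "63% of a step change" convention; whether Kyma's AmplitudeFollower uses exactly this mapping is an assumption on my part.

```python
import math

def follower_coeff(time_const, sr):
    # Standard one-pole mapping: after `time_const` seconds the follower
    # has covered about 63% (1 - 1/e) of a step change in amplitude.
    return 1.0 - math.exp(-1.0 / (time_const * sr))

def amp_follow(x, time_const, sr):
    c = follower_coeff(time_const, sr)
    env, e = [], 0.0
    for s in x:
        e += c * (abs(s) - e)   # rectify, then smooth
        env.append(e)
    return env
```

A larger time constant gives a smaller coefficient, so the envelopes blur together (the reverberated quality); a tiny time constant tracks each syllable closely.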

!Bandwidth controls the bandwidths of all 22 bandpass filters. Set the !Noise fader all the way to the top, and then gradually decrease the value of !Bandwidth, making the filters ring more and more until they are almost pure sine waves (because that is all that can fit in such a narrow band). You may have to boost !InLevel, the level of the noise input, for very narrow bandwidths, because less of the signal can get through when you make the pass band so narrow.
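The need to boost !InLevel at narrow bandwidths is easy to check numerically: even with the filter's peak gain held at one, a narrower band simply passes less of a broadband input's energy. The sketch below uses an assumed unit-peak bandpass biquad and arbitrary Q values to compare the output level of a wide band against a narrow one on the same noise input.

```python
import math
import random

def bandpass_rms(x, fc, q, sr):
    # Unit-peak-gain two-pole bandpass; returns the RMS of its output.
    # Higher q means a narrower band, which passes less broadband energy.
    w = 2 * math.pi * fc / sr
    alpha = math.sin(w) / (2 * q)
    a0 = 1 + alpha
    b0, b2 = alpha / a0, -alpha / a0
    a1, a2 = -2 * math.cos(w) / a0, (1 - alpha) / a0
    x1 = x2 = y1 = y2 = 0.0
    acc = 0.0
    for s in x:
        out = b0 * s + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        acc += out * out
    return math.sqrt(acc / len(x))

random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(20000)]
wide = bandpass_rms(noise, 1000.0, 2.0, 8000)    # broad pass band
narrow = bandpass_rms(noise, 1000.0, 50.0, 8000) # almost a sine ringer
```

The narrow band's RMS comes out much lower than the wide band's, which is exactly why the input level needs compensating.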

Finally, set the !Live fader all the way to one, and try speaking into the microphone. At this setting, you control the filters with your voice.

This is an example of imposing the amplitude and spectral content of human speech onto synthetic sources such as noise or an oscillator. You can use the same concept to impose the characteristics of human speech onto the sound made by an animal or machine. For example, try out AD-mr dolphin, selecting Live when the GenericSource asks for the source. Now is your chance to tell off the smart dolphin from the first example.

Now Add Frequency Deviation

You, the ever-astute reader, are no doubt asking, “Aha, but what about the frequency tracking?”, remembering that the original example had a frequency tracker along with the amplitude follower. It turns out that if you add frequency tracking within the band of each of the bandpass filters in the previous section, you end up with the algorithm used by the LiveSpectralAnalysis Sound and the Spectral Analysis Tool (which we will cover in depth in subsequent tutorials). The result is a set of amplitude envelopes just as before, along with a set of corresponding frequency envelopes. Each amplitude and frequency envelope pair is used to control the amplitude and frequency of a single oscillator. Then all of the oscillators are added together for the final output.
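The resynthesis stage just described, an oscillator per analysis track, each driven by its own amplitude and frequency envelopes, can be sketched as follows. This is an illustrative model only, assuming per-sample envelopes; it is not how LiveSpectralAnalysis is implemented internally.

```python
import math

def oscillator_bank(amp_envs, freq_envs, sr):
    # One sine oscillator per analysis track: each track's amplitude and
    # frequency envelopes drive its oscillator, and all tracks are summed.
    n = len(amp_envs[0])
    out = [0.0] * n
    for amps, freqs in zip(amp_envs, freq_envs):
        phase = 0.0
        for i in range(n):
            phase += 2 * math.pi * freqs[i] / sr  # integrate frequency
            out[i] += amps[i] * math.sin(phase)
    return out
```

Because the amplitude and frequency envelopes are separate inputs, nothing forces them to come from the same analysis, which is exactly the trick behind the next example.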

Try playing piano man. This is an example of resynthesis using the frequency envelopes from one analysis and the amplitude envelopes from another. More precisely, the amplitude envelopes are gradually cross-faded from one analysis to the other.
