TWENTIETH- - Synapse Music
Timbre and Texture: Electronic 253<br />
sound at a regular rate. Phase vocoding changes the playback rate of these "frames" of<br />
sound. Roger Reynolds's Transfigured Wind IV (1985), for flute and digital audio, uses<br />
phase vocoding to alter recordings of flute gestures, which are then played back as accompaniment<br />
to a live flutist.<br />
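The frame-by-frame idea can be sketched in code. The following is a simplified Python/NumPy illustration (an assumption of this sketch, not the software Reynolds used): analysis frames are read from the source at one rate and resynthesized at another, with each frequency bin's phase accumulated so that pitch is preserved while duration changes.<br />

```python
import numpy as np

def phase_vocoder_stretch(x, stretch, n_fft=1024, hop=256):
    """Time-stretch x by `stretch` without changing pitch (minimal sketch)."""
    window = np.hanning(n_fft)
    # Analysis frames are read at hop/stretch; synthesis frames advance by hop.
    positions = np.arange(0, len(x) - n_fft - hop, hop / stretch)
    bins = np.arange(n_fft // 2 + 1)
    expected = 2 * np.pi * hop * bins / n_fft   # phase advance of each bin center
    phase = np.zeros(n_fft // 2 + 1)
    out = np.zeros(len(positions) * hop + n_fft)
    for i, pos in enumerate(positions):
        p = int(pos)
        f1 = np.fft.rfft(window * x[p:p + n_fft])
        f2 = np.fft.rfft(window * x[p + hop:p + hop + n_fft])
        # Measured phase advance over one hop, wrapped to [-pi, pi).
        delta = np.angle(f2) - np.angle(f1) - expected
        delta -= 2 * np.pi * np.round(delta / (2 * np.pi))
        # Accumulate phase so each partial stays continuous across frames.
        phase += expected + delta
        frame = np.fft.irfft(np.abs(f2) * np.exp(1j * phase))
        out[i * hop:i * hop + n_fft] += window * frame
    return out

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)            # one second of A440
stretched = phase_vocoder_stretch(tone, 2.0)  # roughly twice as long, same pitch
```

Reading the analysis frames at half speed while resynthesizing at the normal hop doubles the duration; the accumulated phases keep each partial at its original frequency, which is what distinguishes phase vocoding from simply slowing the playback rate.<br />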
Convolution is a type of cross synthesis, which takes the frequency characteristics<br />
of one sound and applies them to the frequency characteristics of another sound. The mathematical<br />
process involves multiplication of frequencies, which means that frequencies present<br />
in both sounds will be enhanced, while frequencies present in only one sound will be<br />
eliminated. In one respect, it can be thought of as using one sound to filter another sound.<br />
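A small Python/NumPy sketch (the signals and frequencies here are invented for illustration) makes the point concrete: multiplying the spectra of two sounds preserves the frequencies they share and effectively eliminates the rest.<br />

```python
import numpy as np

sr = 8000                # sample rate, assumed for this sketch
t = np.arange(sr) / sr   # one second of time

# Sound A contains 440 Hz and 880 Hz; sound B contains 440 Hz and 1320 Hz.
# Only 440 Hz is present in both.
a = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 880 * t)
b = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1320 * t)

# Convolution in the time domain equals multiplication in the frequency
# domain, so the convolved result can be formed by multiplying the spectra.
spectrum = np.fft.rfft(a) * np.fft.rfft(b)
result = np.fft.irfft(spectrum)

# Inspect the result: the shared 440 Hz survives; 880 and 1320 Hz vanish.
mags = np.abs(np.fft.rfft(result))
freqs = np.fft.rfftfreq(len(result), 1 / sr)
```

Because the spectra are multiplied bin by bin, any frequency missing from either input is multiplied by (nearly) zero, which is why convolution can act as one sound filtering another.<br />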
Other analysis/resynthesis techniques exist and have been used to good musical results.<br />
Jonathan Harvey's Mortuos Plango, Vivos Voco (1981) uses an analysis of a large<br />
church bell applied to the recording of a boy's voice. The effect is one of a merged boy and<br />
bell that produces unique and haunting textures. Paul Lansky's Idle Chatter (1985) takes<br />
analyzed vocal sounds and separates the more static portions from the fast-changing transients<br />
(the vowels from the consonants, plosives, and sibilance) to create a rhythmic chorus<br />
of nonsense vocal sounds.<br />
The affordability, power, and versatility of this technology have led to a resurgence<br />
of interest and compositional activity in the area of concrete music. Composers are able to<br />
alter concrete sound sources digitally to create rich textures more easily and quickly than<br />
with tape, and there is no loss of signal quality (or added noise) like that associated with<br />
analog techniques.<br />
THE DEVELOPMENT OF MIDI<br />
While early programming languages required massive mainframe computers to synthesize<br />
sound (making access to them very limited), many composers today work with a variety of<br />
open-ended systems and premade systems on personal computers that provide far greater<br />
processing power than those earlier mainframes.<br />
Most premade applications trace their history to the development of the MIDI (Musical<br />
Instrument Digital Interface) specification in the early 1980s. MIDI is a digital communication<br />
standard (or language) designed originally to allow the synthesizers of one<br />
manufacturer to transmit performance instructions (such as, "now play a C4, now stop playing<br />
that C4") to synthesizers made by another manufacturer. MIDI made it easily possible<br />
for computers to store and communicate performance instructions, and led to the development<br />
of sequencing programs that allowed composers to organize and edit computer music<br />
scores in more musically intuitive ways than afforded by early programming languages.<br />
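The kind of instruction quoted above ("now play a C4") travels between devices as a short sequence of bytes. A minimal Python sketch (the helper names here are hypothetical; the status/data byte layout follows the standard MIDI convention, with middle C as note number 60):<br />

```python
# Status byte: upper nibble = message type, lower nibble = channel (0-15).
NOTE_ON, NOTE_OFF = 0x90, 0x80

def note_on(channel, note, velocity):
    """Build the three bytes of a MIDI note-on message."""
    # Data bytes are masked to 7 bits, as MIDI data values run 0-127.
    return bytes([NOTE_ON | channel, note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Build the three bytes of a MIDI note-off message."""
    return bytes([NOTE_OFF | channel, note & 0x7F, 0])

# "Now play a C4" (note number 60) at velocity 100 on channel 1:
start = note_on(0, 60, 100)   # bytes 0x90 0x3C 0x64
# "Now stop playing that C4":
stop = note_off(0, 60)        # bytes 0x80 0x3C 0x00
```

Note that the message carries only the performance gesture, not any audio: the receiving synthesizer decides what a C4 actually sounds like, which is what made the standard workable across manufacturers.<br />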
Despite MIDI's weaknesses (slow communication speed between devices, limited<br />
resolution of control values, and control parameters defined by keyboard performance<br />
only), the specification has remained largely unchanged since its inception. Even today, almost<br />
all new computer music synthesis programs (premade or open-ended) use MIDI as<br />
the basis for controlling parameters and communicating between applications. MIDI<br />
breaks down most of the common keyboard-based performance actions into a stream of<br />
bits (the smallest unit of binary data, 1 or 0, on or off) arranged in groups of eight to form<br />
a byte. Usually two to three bytes are arranged to form a single MIDI message, with seven