
in a specific way to a hit and to the position information stream of one or more axes of control. Pads can be grouped into Scenes, and the screen of the computer displays the virtual surface and gives visual feedback to the performer. Performance Pads can control MIDI sequences, playback of soundfiles, algorithms, and real-time DSP synthesis. The velocity of the hits and the position information can be mapped to different parameters through transfer functions. Control Pads are used to trigger actions that globally affect the performance.
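The velocity-to-parameter mapping described above can be sketched in a few lines. This is an illustrative example only (the actual system is written in Objective-C); the function names and the power-law curve are assumptions, not the system's API.

```python
# Hypothetical sketch: mapping a hit's MIDI velocity (0-127) to a
# synthesis parameter through a user-defined transfer function,
# as the text describes for Performance Pads.

def make_transfer(curve=2.0, out_min=0.0, out_max=1.0):
    """Return a transfer function mapping MIDI velocity (0-127)
    into [out_min, out_max] through a power-law curve."""
    def transfer(velocity):
        x = max(0, min(127, velocity)) / 127.0   # clamp and normalize
        return out_min + (out_max - out_min) * (x ** curve)
    return transfer

amp_map = make_transfer(curve=2.0)   # soft hits become much quieter
print(round(amp_map(127), 3))        # full-velocity hit -> 1.0
print(round(amp_map(64), 3))         # mid-velocity hit  -> 0.254
```

A `curve` above 1 compresses soft hits and expands loud ones; a curve below 1 does the opposite, which is the kind of shaping a composer might assign per pad.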

The architecture of the system has been opened, and it is now possible to create interfaces to other MIDI controllers such as keyboards, pedals, percussion controllers, the Lightning controller, and so on. More than one interface controller can be active at the same time, listening to one or more MIDI streams, and each one can map gestures to the triggering and control of virtual pads. The problem of how to map different simultaneous controllers to the same visible surface has not been completely resolved at the time of this writing (having just one controller makes it easy to get simple visual feedback of the result of the gestures, something that is essential in controlling an improvisation environment). Another interface currently under development does not depend on MIDI and controls the system through a standard computer graphics tablet. The surface of the tablet behaves in virtually the same way as the surface of the Radio Drum, and tablets that have pressure sensitivity open the way to three-dimensional continuous control similar to that of the Radio Drum (though of course not as flexible). The advantage of this interface is that it does not use MIDI bandwidth and relies on hardware that is standard and easy to obtain.

Performance Pads will have a new category: Algorithmic Pads. These pads can store algorithms that can be triggered and controlled by the performer's gestures. While a graphical programming interface has not yet been developed at the time of this writing, the composer can create algorithms easily by programming them in Objective-C within the constraints of a built-in set of classes and objects that should be enough for most musical purposes. Any parameter of an algorithm can be linked through a transfer function to the movement of one of the axes of control. Multiple algorithms can be active at the same time and can respond in different ways to the same control information, making it easy to transform simple gestures into complicated musical responses. An algorithm can also be the source of control information that other algorithms can use to affect their behavior.
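One way to picture this architecture is below. This is a minimal sketch, not the system's built-in Objective-C class set; all class and parameter names are invented for illustration.

```python
# Illustrative model of Algorithmic Pads: each algorithm maps an axis
# of control through a transfer function, several algorithms can react
# to the same gesture, and one algorithm's output can drive another.

class Algorithm:
    def __init__(self, name, transfer=lambda x: x):
        self.name = name
        self.transfer = transfer    # maps raw control value -> parameter
        self.value = 0.0
        self.listeners = []         # other algorithms driven by this one

    def control(self, axis_value):
        """Respond to control information from an axis (or another algorithm)."""
        self.value = self.transfer(axis_value)
        for listener in self.listeners:
            listener.control(self.value)

# Two algorithms responding differently to the same control axis:
density = Algorithm("density", transfer=lambda x: 1 + 9 * x)     # 1..10 events/s
tempo   = Algorithm("tempo",   transfer=lambda x: 60 + 120 * x)  # 60..180 bpm
# An algorithm acting as a control source for another:
density.listeners.append(Algorithm("accent", transfer=lambda d: d / 10))

density.control(0.5)   # one gesture at mid-axis...
tempo.control(0.5)     # ...drives both, each through its own transfer
print(density.value, tempo.value)   # 5.5 120.0
```

The fan-out through `listeners` is what lets a single simple gesture produce a complicated musical response, as the text describes.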

6.1.6 A Dynamic Spatial Sound Movement Toolkit<br />

Fernando Lopez Lezcano<br />

This brief overview describes a dynamic sound movement toolkit implemented within the context of the CLM software synthesis and signal processing package. Complete details can be found at http://www-ccrma.stanford.edu/~nando/clm/dlocsig/.

dlocsig.lisp is a unit generator that dynamically moves a sound source in 2d or 3d space and can be used as a replacement for the standard locsig in new or existing CLM instruments (this is a completely rewritten and much improved version of the old dlocsig that I started writing in 1992 while I was working at Keio University in Japan).

The new dlocsig can generate spatial positioning cues for any number of speakers, which can be arbitrarily arranged in 2d or 3d space. The number of output channels of the current output stream (usually defined by the :channels keyword in the enclosing with-sound) determines which speaker arrangement is used. In pieces which can be recompiled from scratch, this feature allows the composer to easily create several renditions of the same piece, each one optimized for a particular number and spatial configuration of speakers and for a particular rendering technique.
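The channel-count-to-layout selection can be sketched as a simple lookup. This is an analogy in Python, not dlocsig's actual Lisp code, and the layouts (speaker azimuths in degrees) are examples, not dlocsig's defaults.

```python
# Hedged sketch: choosing a speaker layout from the output channel
# count, analogous to how dlocsig picks an arrangement from the
# :channels of the enclosing with-sound. Azimuths are illustrative.

SPEAKER_LAYOUTS = {
    2: [-45, 45],                              # stereo pair
    4: [-45, 45, 135, -135],                   # quad
    8: [0, 45, 90, 135, 180, -135, -90, -45],  # octophonic ring
}

def layout_for(channels):
    """Return the speaker azimuths for a given output channel count."""
    try:
        return SPEAKER_LAYOUTS[channels]
    except KeyError:
        raise ValueError(f"no speaker layout defined for {channels} channels")

print(layout_for(4))   # [-45, 45, 135, -135]
```

Keying the rendition off the channel count is what lets the same piece be recompiled unchanged for stereo, quad, or larger arrays.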

dlocsig can render the output soundfile with different techniques. The default is to use amplitude panning between adjacent speakers (between pairs of speakers in 2d space or three-speaker groups in 3d space). dlocsig can also create an Ambisonics-encoded four-channel output soundfile suitable for feeding into an appropriate decoder for multiple-speaker reproduction. Or it can decode the Ambisonics-encoded information to an arbitrary number of output channels if the speaker configuration is known in advance.
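The 2d pairwise amplitude panning can be sketched as follows. This is a minimal constant-power implementation under assumed conventions (azimuths in degrees, speakers covering the full circle); dlocsig's actual interpolation may differ in detail.

```python
# Minimal sketch of 2d pairwise amplitude panning: find the two
# adjacent speakers bracketing the source azimuth and give only that
# pair nonzero, constant-power gains.
import math

def pan_2d(azimuth, speakers):
    """speakers: speaker azimuths in degrees, ordered around the circle.
    Returns one gain per speaker; only the bracketing pair is nonzero."""
    gains = [0.0] * len(speakers)
    az = azimuth % 360.0
    spk = [s % 360.0 for s in speakers]
    for i in range(len(spk)):
        a, b = spk[i], spk[(i + 1) % len(spk)]
        width = (b - a) % 360.0            # angular span of this pair
        offset = (az - a) % 360.0          # source position within the span
        if offset <= width:
            frac = offset / width          # 0 at speaker a, 1 at speaker b
            gains[i] = math.cos(frac * math.pi / 2)            # constant power:
            gains[(i + 1) % len(spk)] = math.sin(frac * math.pi / 2)  # g_a^2+g_b^2=1
            break
    return gains

# Source straight ahead, halfway between front speakers at -45 and +45:
print([round(g, 3) for g in pan_2d(0, [-45, 45, 135, 225])])
# [0.707, 0.707, 0.0, 0.0]
```

The cosine/sine pair keeps total power constant as the source sweeps between speakers, avoiding the loudness dip of simple linear crossfades.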

In the near future dlocsig will also be able to render to stereo soundfiles with HRTF-generated cues for

