Brainwave Music
History, Overview, Artistic and Clinical Aspects
Ivana Anđelković
University of California, Santa Barbara
MAT 200B, Winter 2010

1. Introduction
The term brainwave describes electrical activity measured along the scalp, produced by the firing of neurons in the brain. Sonification of such activity has been attempted since the beginning of the 20th century in both scientific and artistic pursuits. This paper reviews the history of brainwave music, offers a rough categorization and description of methods for generating such music, and discusses its application in clinical environments.
2. History of brainwave music
Following initial experiments on animals in the 19th century, human brainwaves were first recorded and measured by the German psychiatrist Hans Berger in 1924, who named these electrical measurements the "electroencephalogram" (EEG). Berger's instrument for detecting neural activity was a rather crude Edelmann string galvanometer, of the kind used to record electrocardiograms. His research on patients with skull imperfections or trepanations, as well as patients with intact skulls, continued over the next five years, and the results were first published in 1929 in the article entitled "On the Electroencephalogram of Man". However, this outstanding scientific contribution did not gain publicity until E. D. Adrian and B. H. C. Matthews verified Berger's results in 1934. Furthermore, Adrian and Matthews immediately recognized the benefits of sonifying electrical brain signals and were the first
ones to successfully conduct such an experiment [2]: "It is sometimes an advantage to be able to hear as well as see the rhythm¹; although its frequency is only 10 a second it can be made audible by using a horn loud speaker with the diaphragm set very close to the pole pieces and a condenser in parallel to cut out high frequencies."
It was not until 1965 that brain waves were used in artistic pursuits, when Alvin Lucier, an American composer, presented "Music for Solo Performer" at the Rose Art Museum (Waltham, Massachusetts) with encouragement from John Cage and technical assistance from physicist Edmond Dewan. The piece was realized by amplifying alpha waves and using the resulting signals to control and play percussion instruments. Lucier, however, did not continue to use EEG in his compositions.
The advent of voltage-controlled devices, and in particular the popularization of the Moog synthesizer in the 1960s, provided an easier way to control and modify sound through biophysical interfaces. Richard Teitelbaum, one of the most notable composers of that period to use EEG-controlled synthesizers, performed "Spacecraft" for the first time in 1967 with his electronic music group Musica Elettronica Viva (MEV). In this piece, various biological signals were used as sound control sources.
In 1970, David Rosenboom's brainwave-controlled interactive performance "Ecology of the Skin" took place at Automation House, New York. In pursuit of research into cybernetic biofeedback artistic systems, Rosenboom founded the Laboratory of Experimental Aesthetics at York University in Toronto, which served as a workplace for prominent artists such as John Cage, David Behrman, La Monte Young, and Marian Zazeela. Whereas Teitelbaum's and Lucier's works relied on idiosyncratic EEG-controlled music instruments, Rosenboom extended the idea into musical syntax by addressing meaningful feature extraction from EEG data [3]. To date, he has conducted extensive research on the psychological and neurological foundations of aesthetic experience, computer music software and brain-computer interfaces.
During the 1970s, Manford Eaton was building electronic circuits to experiment with biological signals at Orcus Research in Kansas City. In France, the scientist Roger Lafosse and one of the pioneers of musique concrète, Pierre Henry, built the device Corticalart and used it in a series of performances. The
¹ The term Berger rhythm is used for alpha brain waves, which fall within a frequency range from 8 Hz to 13 Hz.
device, similar to an EEG, transmitted brain waves to seven electronic sound generators, which Pierre Henry manually manipulated to create musical improvisations.
A significant milestone occurred during the 1970s when the UCLA computer scientist Jacques Vidal coined the expression brain-computer interface (BCI) for his research in biocybernetics and human-computer interaction. Nevertheless, after this initial burst of creative activity, the field of brainwave art lay dormant until the 1990s, when sufficiently powerful computers allowed further experiments.
In the 1990s, the scientists Benjamin Knapp and Hugh Lusted built BioMuse, a system that receives and processes signals from the main sources of bodily electrical activity: muscles, eye movement, the heart and brainwaves. Although the technology was primarily intended to solve real-world problems, the new media artist Atau Tanaka was commissioned in 1992 to use it in music composition and performance.
3. Approaches to translating neural activity into music<br />
According to their frequency, brain waves are classified into five main categories:

Type     Frequency       Normally occurs in
Delta    up to 4 Hz      deep sleep, babies
Theta    4 Hz – 8 Hz     young children, drowsiness, hypnosis
Alpha    8 Hz – 12 Hz    relaxed, alert state of consciousness, eyes closed
Beta     12 Hz – 30 Hz   active, busy or anxious thinking
Gamma    30 Hz – 80 Hz   higher cognitive activity, motor functions
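These band boundaries can be expressed as a small lookup table, sketched below (boundary conventions vary slightly between sources; the edges here simply follow the table above):

```python
# Classify an EEG frequency (in Hz) into one of the five bands above.
# Band edges follow the table; other sources draw them slightly differently.
BANDS = [
    ("delta", 0.0, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 12.0),
    ("beta", 12.0, 30.0),
    ("gamma", 30.0, 80.0),
]

def classify_band(freq_hz):
    """Return the band name for a frequency, or None if outside 0-80 Hz."""
    for name, low, high in BANDS:
        if low <= freq_hz < high:
            return name
    return None
```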
3.1. Music-generating methods
In clinical usage, it can be sufficient to examine raw EEG data in order to reveal brain malfunctions. When these raw waves are made audible by the simplest method, used in the first experiments of the early 20th century (allowing the waves to vibrate the membrane of a loudspeaker), nothing but filtered noise can be perceived. Therefore, to be utilized in brainwave music and BCI research, the signals have to undergo sophisticated quantitative analysis. Commonly used techniques include power spectrum analysis, the spectral centroid, Hjorth parameters, event-related potentials (ERP) and correlation, to name but a few.
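As an illustration of the first of these techniques, power spectrum analysis, the sketch below estimates the power of an EEG signal within a chosen band using an FFT (a minimal example with NumPy; the synthetic signal and sampling rate are illustrative, not taken from any cited study):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate the power of `signal` (a 1-D array sampled at `fs` Hz)
    within the [low, high) frequency band, using the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].sum() / len(signal)

# Synthetic two-second "alpha" oscillation: a pure 10 Hz sine at 256 Hz.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
alpha = band_power(eeg, fs, 8, 12)   # nearly all of the signal power
delta = band_power(eeg, fs, 0, 4)    # close to zero
```

A real pipeline would window the signal and average over segments (e.g. Welch's method), but the band-masking step shown here is the core idea.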
Once EEG signals are processed, various methods for generating EEG-dependent music can be applied. These can be classified into two categories:

1 – Parameter mapping approach

This approach entails the use of mathematical and statistical techniques to translate physiological electrical signals into sounds characterized by pitch, duration, timbre and intensity. To do so, it is important to understand the nature and meaning of the signals, i.e., the underlying neural processes, as well as the nature of the music that is to be produced. An extremely difficult and still unsolved problem, synthesizing a melody by simply thinking of one, would require an implementation of parameter mapping. Much research in the field of cognitive science is needed before biological signals can be interpreted unambiguously. Consequently, there is currently no set of formal rules for using this approach, and the decisions made in the process depend on the desired outcome.
A simple example of the parameter mapping approach is Alvin Lucier's "Music for Solo Performer", where strong alpha waves were translated into increased sound intensity and temporal density. More recently, a group of Chinese scientists proposed sonification rules based on the scale-free² phenomenon embodied in both neural networks and sound [5]. According to these rules, the period of the EEG waveform is mapped to the duration of a note, the average power to the music intensity, and the amplitude to the pitch.
² The scale-free phenomenon occurs in networks whose degree distribution follows a power law, which describes a relationship between two quantities, the frequency and the size of an event. The relationship has a power-law distribution when the frequency of the event decreases at a greater rate than the size increases.
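The three mapping rules described in [5] might be sketched as follows (a hypothetical illustration of the scheme; the scaling constants, value ranges and MIDI encoding are illustrative choices, not those of the study):

```python
def map_to_note(period_s, avg_power, amplitude_uv):
    """Map EEG waveform features to note parameters following the
    scheme described in [5]: period -> note duration, average power
    -> intensity (MIDI velocity), amplitude -> pitch (MIDI note).
    All scaling constants here are illustrative, not from the study."""
    duration = period_s                         # seconds, used directly
    velocity = int(min(127, avg_power * 127))   # power assumed normalized to [0, 1]
    # Amplitude (assumed 0-100 microvolts) mapped onto two octaves above middle C.
    pitch = 60 + int(min(1.0, amplitude_uv / 100.0) * 24)
    return pitch, velocity, duration
```

For instance, a 10 Hz alpha cycle (period 0.1 s) at half power and 50 µV would yield a short, moderately loud note an octave above middle C.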
2 – Event triggering approach
In this approach, EEG data is continuously scanned for changes in characteristic features of the signal which, when detected, act as event triggers in the music generation process. Commonly employed characteristics include local voltage maxima and minima and the time and voltage differences between them. This approach allows detection of singular events such as spikes on the one hand, and repetitive activity on the other; the latter is appealing for translation into musical form since it can be perceived as a rhythmic musical pattern. In practice, the event triggering method is generally used to control brain-computer music instruments, sometimes combined with parameter mapping to enrich the acoustic content.
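A minimal sketch of event triggering, scanning a sampled signal for local voltage maxima above a threshold (the threshold value and what each event triggers downstream are illustrative assumptions):

```python
def find_triggers(samples, threshold):
    """Return the indices of local maxima in `samples` that exceed
    `threshold`; each index would fire one music-generation event."""
    triggers = []
    for i in range(1, len(samples) - 1):
        if samples[i] > threshold and samples[i - 1] < samples[i] >= samples[i + 1]:
            triggers.append(i)
    return triggers
```

Repetitive activity then shows up as regular spacing between the returned indices, which is what makes it usable as a rhythmic pattern.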
An example of the method is a study by Mick Grierson (Goldsmiths College, UK) [6], in which a series of notes is displayed on a computer screen while the subject focuses on a single note at a time. When that particular note appears on the screen, the subject's brain reacts and a change in the EEG signal is detected, which triggers the note to be played.
In the Brain Controlled Piano System by Eduardo Miranda [4], the most prominent frequency and the complexity of the signal extracted from processed EEG data served as event-triggering information. The former activated generative rules for algorithmic music composition, while the latter controlled the tempo of the music.
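This control flow might be sketched as follows (the rule names, frequency thresholds and tempo range are hypothetical; Miranda's actual generative rules are not reproduced here):

```python
def select_generative_rule(dominant_freq_hz):
    """Choose a (hypothetical) generative rule from the most
    prominent EEG frequency, in the spirit of [4]."""
    if dominant_freq_hz < 8:        # below the alpha band
        return "rule_slow_arpeggio"
    elif dominant_freq_hz < 12:     # alpha band
        return "rule_alpha_melody"
    return "rule_dense_texture"     # beta and above

def complexity_to_tempo(complexity, lo_bpm=60.0, hi_bpm=140.0):
    """Map a normalized signal-complexity measure in [0, 1]
    linearly onto a tempo range in beats per minute."""
    c = max(0.0, min(1.0, complexity))
    return lo_bpm + c * (hi_bpm - lo_bpm)
```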
3.2. Biofeedback paradigm
One variation in the methodology used to musically express one's neural activity is the reliance on biofeedback. Biofeedback is a process in which bodily functions are measured and the information is conveyed to the subject, raising the subject's awareness and offering the possibility of consciously controlling those same bodily functions.
A biofeedback signal generated in response to an event can be positive, if it amplifies the input signal, or negative, if it dampens it. In clinical applications, the goal of biofeedback is often to calm the subject's activity; negative feedback is therefore desirable, while positive feedback can lead to highly dangerous situations. In contrast, from the musical perspective, unstable activity with
an unpredictable outcome is often preferred over a calming one, since it introduces greater dynamics into the musical narrative.
Although in musical practice performers often do respond to event-triggered feedback, the goal-oriented connotations of biofeedback are lost; such practice simply serves as a basis for controlling a sound source. However, Rosenboom's work on the subject suggests the potential for a wider and more effective use of biofeedback in music, by viewing it as a dynamic system that allows the organism to evolve rather than a static system that leads to equilibrium [8].
4. Biofeedback music therapy
The power of music to heal has been recognized for centuries across civilizations, and music therapy is known to improve the psychological and physiological health of the individual. However, scientific research on the neurological basis of music therapy is a nascent field with great growth potential, considering the advancements in instruments such as fMRI used for evaluating brain activity. Research to date shows that five factors contribute to the effect of music therapy: modulation of attention, emotion, cognition, behavior and communication [7].
Furthermore, biofeedback therapy, in which the patient learns to control his or her brain activity in response to real-time feedback, reportedly achieves positive results in treating anxiety, attention deficit disorder, epilepsy, autism and other conditions. Commonly, the feedback is conveyed to the patient as a combination of visual and auditory displays. Influencing brain rhythms via auditory feedback alone has been explored in only a few cases, but a study by Hinterberger and Baier [9] suggests it is possible.
A relatively new approach that combines traditional music therapy and auditory biofeedback is Brain Music Treatment (BMT) [11], developed in the 1990s at the Moscow Medical Academy. A group of neurophysiologists, clinicians and mathematicians led by Dr. Ya. I. Levin developed an algorithm for translating brain waves into music, which experimentally provided optimal therapeutic results. The specific mechanisms underlying such therapy are yet to be discovered, but positive initial results have been
reported in patients with insomnia, whose sleep patterns improved through reduced anxiety. The method converts aspects of a person's EEG activity into music files recorded on a CD, which the patient then plays regularly for the duration of a treatment lasting several months.
5. Conclusion
Natural phenomena are commonly observed scientifically through visual perception. However, the auditory system is more sensitive to temporal changes, and multidimensional datasets can be perceived simultaneously when presented both visually and audibly. As an illustration, a notable contribution to the art of scientific listening was the invention of the stethoscope in 1816, an instrument still extensively used as a diagnostic tool in medicine.
Moreover, a large number of clinical studies have shown striking evidence that auditory rhythm and music can be effectively harnessed for specific therapeutic purposes. Considering the demonstrated effectiveness of both traditional music therapy and the relatively new biofeedback therapy, a combination of the two approaches could also yield positive results. Yet there is still little evidence regarding the effectiveness of brain music therapy, and much of the research needed in the field of music cognition depends on the development of sophisticated instruments for examining neural activity.
References
1. Andrew Brouse (2004), A Young Person's Guide to Brainwave Music: Forty Years of Audio from the Human EEG
2. E. D. Adrian, B. H. C. Matthews (1934), The Berger Rhythm: Potential Changes from the Occipital Lobes in Man
3. Simon Emmerson (2007), Living Electronic Music
4. Eduardo Miranda, Andrew Brouse (2005), Toward Direct Brain-Computer Musical Interfaces
5. Dan Wu, Chao-Yi Li, De-Zhong Yao (2009), Scale-Free Music of the Brain
6. Mick Grierson (2008), Composing with Brainwaves: Minimal Trial P300 Recognition as an Indication of Subjective Preference for the Control of a Musical Instrument
7. Stefan Koelsch (2009), A Neuroscientific Perspective on Music Therapy
8. David Rosenboom (1997), Extended Musical Interface with the Human Nervous System
9. Thilo Hinterberger, Gerold Baier (2005), Parametric Orchestral Sonification of EEG in Real Time
10. Gerold Baier, Thomas Hermann, Sonification: Listen to Brain Activity
11. Galina Mindlin, James R. Evans (2008), "Brain Music Treatment", Chapter 9 in Introduction to Quantitative EEG and Neurofeedback