CREATION OF SYMPATHETIC MEDIA CONTENT

Stéphane Perrin, perrin.japan@gmail.com
Giuseppe Riva, Catholic Univ. of Milan, giuseppe.riva@unicatt.it
Alvaro Cassinelli, University of Tokyo, cassinelli.alvaro@gmail.com

ABSTRACT
Taking ground in the enactive view, a recent trend in cognitive science, we propose a framework for the creation of sympathetic media content. The notion of sympathetic media content is based on two concepts: synesthetic media and empathic media transmission.

Synesthetic media are media that make use of multiple and alternative senses. The approach is to reconsider traditional media content from a different perceptual point of view, with the goal of creating more immersive and affective media content. Empathic media transmission will consist of encoding the emotional content of media into multi-sensory signals. The encoded emotions are then mediated to the audience through actuators that provide the physical manifestation of the multi-sensory information.

The two points, synesthetic media and empathic transmission, are addressed through the study of the relation between senses and emotions and the development of suitable methods for encoding emotions into multiple senses, within the frame of an efficacious transmission of emotions to the audience. The extraction of emotional information from media and the conception of a wearable, unobtrusive device are also considered. It is claimed that such a framework will help the creation of a new type of media content, ease access to more immersive and affective media, and find applications in numerous fields.

Author Keywords
media, enaction, emotion, sensors, perception, senses

ACM Classification Keywords
H.5.1 Multimedia Information Systems — Artificial, augmented, and virtual realities; H.5.2 User Interfaces — User/Machine Systems

INTRODUCTION
An emerging trend in cognitive science is the enactive view [6, 7] of sensorimotor knowledge. In this approach, to perceive is to understand how sensory stimulation varies as we act.
In particular, it implies the common coding theory [8]: actions are coded in terms of the perceivable effects they should generate. More specifically, when an effect is intended, the movement that produces this effect as perceptual input is automatically activated, because actions and their effects are stored in a common representational domain.

Copyright is held by the author/owner(s). UbiComp ’08 Workshop W1 – Devices that Alter Perception (DAP 2008), September 21st, 2008. This position paper is not an official publication of UbiComp ’08.

The underlying process is the following [9, 10]: first, common event representations become activated by the perceptual input; then, there is an automatic activation of the motor codes attached to these event representations; finally, the activation of the motor codes results in a prediction of the action results in terms of expected perceptual events. The enactive view and its corollaries support the concepts of:
• Synesthetic media,
• Empathic media transmission.

After a detailed definition of the new notion of sympathetic media as synesthetic media combined with an empathic transmission, its practical implementation is discussed. Namely, the way to encode emotions and to transmit them through multi-sensory channels is presented, as well as the design of a device to achieve this aim.

DEFINITION OF SYMPATHETIC MEDIA
Sympathetic media is the combination of synesthetic media and empathic transmission.

Synesthetic Media:
In cognitive science, synesthesia (Greek, syn = together + aisthesis = perception) is the involuntary physical experience of a cross-modal association. That is, the stimulation of one sensory modality reliably causes a perception in one or more different senses. Specifically, it denotes the rare capacity to hear colors, taste shapes, or experience other equally startling sensory blendings whose quality seems difficult for most of us to imagine.
A synesthete might describe the color, shape, and flavor of someone’s voice; or, seeing the color red, a synesthete might detect the “scent” of red as well.

Transmission of emotions (for an enactive view on emotions, see [11]; on vision, see [12]), tones, moods or feelings intrinsically contained in media, or that a creator intends to transmit via a medium to an audience, relies heavily on only two senses: audition (music or speech) and vision (images or text). In contrast, human communication relies on a wide range of senses. Moreover, this reliance on only two senses fails in some cases to convey sufficient information to break cultural barriers or to reach audiences with sensory disabilities. The efficiency of information transmission, including emotions [13], can be limited by an overloading of the visual and aural channels, for example by textual information such as subtitles, which is perceived through vision and implies a cognitive effort. The idea of using alternative sensory channels to create a more immersive, affective or realistic context and content for the audience is not new,
especially in the fields of Ambient Intelligence [14], Immersive, Perceptual or Affective Computing [15], and Human-Computer Interaction [16]. To take an example, most Virtual Reality (VR) rooms include several kinds of sensory outputs (wind [17], scent, force, haptics or temperature [18]) other than vision or audition. Nonetheless, most of these works remain not easily accessible to audiences, either because of their bulky nature (dedicated spaces for VR) or because there is no seamless integration of the extra sensory information in the media that contain it. Moreover, these works are mostly dedicated, and somewhat limited, to re-creating perceptual sensations identical to the ones that are virtually embedded in a medium (for example, the vibration of some game controllers for simulating shocks). Few works try to reconsider a given medium [19] from a totally different perceptual point of view.

Empathic Media Transmission:
In cognitive science, empathy is the recognition and understanding of the states of mind, beliefs, desires, and, particularly, emotions of others. It is often characterized as the ability to “put oneself into another’s shoes”, or experiencing the outlook or emotions of another being within oneself; a sort of emotional resonance. For instance, if an observer sees another person who is sad and in response feels sad, that individual is experiencing empathy. Empathy can occur in response to cues of positive emotion as well as negative emotion. To qualify as empathy, the empathizer must recognize, at least on some level, that the emotion she or he is experiencing is a reflection of the other’s emotional, psychological, or physical state. In addition to widening the sensory bandwidth, it is necessary to develop empathic media transmission, able to embed emotional cues in sensory-based coding of perceived events. The encoding is done directly into the physical expression of these additional senses by using suitable actuators integrated in a wearable, non-intrusive device.
The audience, who receives this multi-sensory information through the device and is thus in a state of partial sensorimotor immersion, will decode it to induce emotions that ought to be identical to the emotions the creator of the media content intended to transmit.

Thanks to the synesthetic property of the newly defined media, and the empathic transmission of the emotions contained in media, a stronger emotional link between the media and the audience is created. This is something already achieved, for example, in cinema through background music or visual cues. The goal here is to improve the empathic relation to the media. How much this empathic link can be reinforced without breaking the audience/media duality is an interesting subject going beyond the scope of this presentation.

SENSES TO EMOTIONS
Following our definition of sympathetic media as a combination of synesthetic media and empathic transmission, the translation/encoding of virtual emotional information into real/physical sensory information, transmitted to the audience through actuators, must be addressed. Two ways are proposed.

The first way to address the problem of inducing emotions in the audience is to extend the classical approach. This is done by adding sensory channels to the ones already present, usually sound and image. The relation of senses to emotions is studied to determine the most efficient ways to induce emotions from multi-sensory content. This includes the study of the attainable richness that a given sense or combination of senses can provide to encode the virtual emotional content embedded in media content.

The second way to achieve the transmission of emotions is based on the enactive view and is thus favored. The method is to induce the emotional cause from its physiological consequences as perceived by the experiencing person.
There is some evidence that this afferent feedback can modulate emotions (this is at the basis of the somatic-marker hypothesis [4] as well as the facial feedback hypothesis [1]). For example, a person experiencing stress or shame might have the feeling of a rise in temperature. In the right context, effectively raising the temperature might help to induce the intended emotions, here stress or shame. Another technique is to use actuators to divert attention or generate subtle changes of emotional disposition [3]. Techniques such as surveys might help this study by determining the best sensory channels and types of signals to use for inducing given emotions, keeping in mind work in cognitive science (the enactive view), psychology and physiology.

To better see the difference between these two approaches, which are not exclusive, a second example is proposed. An emotion like sadness could be induced through visual (in a movie: dark atmosphere, rain, the actors’ faces, etc.) and auditory (use of a certain type of music) cues. This is the classical approach. Sadness might also be induced by, for example, lowering the temperature and exerting slight pressure at appropriate locations on the body of the audience. While it can be argued that the first approach already does a good job of transmitting emotions, even without widening the sensory bandwidth, the second approach might be used in the absence of given sensory channels (for example, a radio program), in the absence of a right context (watching a movie on a portable device), or for audiences with sensory disabilities.

SOFTWARE AND HARDWARE FOR SYMPATHETIC MEDIATION
Existing media can be manually annotated, or the emotions can be automatically extracted.
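The second, enactive approach above can be sketched as a rule table mapping an emotion label to actuator settings that reproduce its physiological consequences on the body (cooler skin and slight pressure for sadness, a temperature rise for stress). This is a minimal illustrative sketch only: the class names, channels, and numeric values are assumptions, not part of the proposed system.

```python
# Hypothetical rule table for the enactive approach: an emotion label is
# encoded as actuator settings mirroring its physiological consequences.
# All names, channels, and values below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ActuatorCommand:
    temperature_delta_c: float  # change relative to ambient, in degrees C
    pressure_level: float       # normalized: 0.0 (none) to 1.0 (maximum)
    vibration_hz: float         # 0.0 means the vibrator is off


# Illustrative rules, following the examples in the text: sadness -> cooler
# skin plus slight pressure; stress -> a perceived rise in temperature.
EMOTION_RULES = {
    "sadness": ActuatorCommand(temperature_delta_c=-2.0,
                               pressure_level=0.3, vibration_hz=0.0),
    "stress":  ActuatorCommand(temperature_delta_c=+2.0,
                               pressure_level=0.0, vibration_hz=8.0),
}


def encode_emotion(emotion: str) -> ActuatorCommand:
    """Return actuator settings for an emotion label, or a neutral command."""
    return EMOTION_RULES.get(emotion, ActuatorCommand(0.0, 0.0, 0.0))
```

A survey-based study, as suggested above, would then tune these rule values per emotion and per sensory channel rather than fixing them by hand.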
Given the difficulty of automatically extracting emotional content from a given medium, especially in the case of real-time applications, manual encoding will be the first step in the creation of sympathetic media. Emotional tags could be used to annotate the media in a way quite similar to the subtitle tracks on a DVD. An encoding module must be developed that maps emotions to senses by means of sets of rules and algorithms.

The hardware can be separated into two elements. The first supports the processing unit (notably the encoding module) and a transmitter, and is interfaced with the media. This first element communicates with a second element: a wearable device consisting of a receiver and the actuators. For this hardware part, we propose to design and conceive a wearable [22], unobtrusive, non-invasive, multi-actuator device that will bring sympathetic media into homes in a similar way new technologies have brought cinema into homes.
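The subtitle-like annotation described above can be sketched as a track of timestamped emotional tags, with a lookup that returns the tag active at the current playback time, much like a subtitle cue. The tag format and field names here are assumptions for illustration, not a specification from the paper.

```python
# A minimal sketch of a manually authored emotional tag track, analogous
# to a DVD subtitle track. Field names and the sample cues are assumed
# for illustration only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EmotionTag:
    start_s: float    # cue start time, in seconds of media playback
    end_s: float      # cue end time, in seconds
    emotion: str      # e.g. "sadness", "stress"
    intensity: float  # normalized 0.0 - 1.0


def active_tag(track: List[EmotionTag], t: float) -> Optional[EmotionTag]:
    """Return the tag covering playback time t, if any (like a subtitle cue)."""
    for tag in track:
        if tag.start_s <= t < tag.end_s:
            return tag
    return None


# A two-cue track annotating a hypothetical clip.
track = [
    EmotionTag(0.0, 12.5, "sadness", 0.6),
    EmotionTag(12.5, 20.0, "stress", 0.8),
]
```

At playback time, the encoding module would feed the active tag's emotion and intensity into its emotion-to-sense rules and transmit the resulting actuator signals to the wearable device.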