01 April 2012

Augmented Reality, Art and Technology

Introducing Added Worlds
Yolande Kolstee

The Technology Behind Augmented Reality
Pieter Jonker

Re-introducing Mosquitos
Maarten Lamers

How Did We Do It
Wim van Eck
AR[t]
Magazine about Augmented Reality, art and technology
April 2012
COLOPHON

ISSN number
2213-2481

Contact
The Augmented Reality Lab (AR Lab)
Royal Academy of Art, The Hague
(Koninklijke Academie van Beeldende Kunsten)
Prinsessegracht 4
2514 AN The Hague
The Netherlands
+31 (0)70 3154795
www.arlab.nl
info@arlab.nl

Editorial team
Yolande Kolstee, Hanna Schraffenberger, Esmé Vahrmeijer (graphic design) and Jouke Verlinden.

Contributors
Wim van Eck, Jeroen van Erp, Pieter Jonker, Maarten Lamers, Stephan Lukosch, Ferenc Molnár (photography) and Robert Prevel.

Cover
‘George’, an augmented reality headset designed by Niels Mulder during his Post Graduate Course Industrial Design (KABK), 2008
TABLE OF CONTENTS

07  Welcome to AR[t]
08  Introducing Added Worlds (Yolande Kolstee)
12  Interview with Helen Papagiannis (Hanna Schraffenberger)
20  The Technology Behind AR (Pieter Jonker)
28  Re-introducing Mosquitos (Maarten Lamers)
30  Lieven van Velthoven, the Racing Star (Hanna Schraffenberger)
36  How Did We Do It (Wim van Eck)
42  Pixels Want to Be Freed! Introducing Augmented Reality Enabling Hardware Technologies (Jouke Verlinden)
60  Artist in Residence Portrait: Marina de Haas (Hanna Schraffenberger)
66  A Magical Leverage: In Search of the Killer Application (Jeroen van Erp)
70  The Positioning of Virtual Objects (Robert Prevel)
72  Mediated Reality for Crime Scene Investigation (Stephan Lukosch)
76  Die Walküre (Wim van Eck, AR Lab Student Project)
WELCOME...

to the first issue of AR[t], the magazine about Augmented Reality, art and technology!

Starting with this issue, AR[t] is an aspiring magazine series for the emerging AR community inside and outside the Netherlands. The magazine is run by a small and dedicated team of researchers, artists and lecturers of the AR Lab (based at the Royal Academy of Art, The Hague), Delft University of Technology (TU Delft), Leiden University and SMEs. In AR[t], we share our interest in Augmented Reality (AR), discuss its applications in the arts and provide insight into the underlying technology.

At the AR Lab, we aim to understand, develop, refine and improve the amalgamation of the physical world with the virtual. We do this through a project-based approach and with the help of research funding from RAAK-Pro. In the magazine series, we invite writers from the industry, interview artists working with Augmented Reality and discuss the latest technological developments.

It is our belief that AR and its associated technologies are important to the field of new media: media artists experiment with the intersection of the physical and the virtual and probe the limits of our sensory perception in order to create new experiences. Managers of cultural heritage are seeking new possibilities for worldwide access to their collections. Designers, developers, architects and urban planners are looking for new ways to better communicate their designs to clients. Designers of games and theme parks want to create immersive experiences that integrate both the physical and the virtual world. Marketing specialists are working with new interactive forms of communication. For all of them, AR can serve as a powerful tool to realize their visions.

Media artists and designers who want to acquire an interesting position within the domain of new media have to gain knowledge about and experience with AR. This magazine series is intended to provide both theoretical knowledge and a guide towards first practical experiences with AR. Our special focus lies on the diversity of contributions. Consequently, everybody who wants to know more about AR should be able to find something of interest in this magazine, be they art and design students, students from technical backgrounds, engineers, developers, inventors, philosophers or readers who just happened to hear about AR and got curious.

We hope you enjoy the first issue and invite you to check out the website www.arlab.nl to learn more about Augmented Reality in the arts and the work of the AR Lab.
INTRODUCING ADDED WORLDS: AUGMENTED REALITY IS HERE!

By Yolande Kolstee

Augmented Reality is a relatively recent computer-based technology that differs from the earlier known concept of Virtual Reality. Virtual Reality is a computer-based reality of which the actual, outer world is not directly a part, whereas Augmented Reality is characterized by a combination of the real and the virtual.

Augmented Reality is part of the broader concept of Mixed Reality: environments that consist of the real and the virtual. To make these differences and relations clearer, industrial engineer Paul Milgram and Fumio Kishino introduced the Mixed Reality Continuum diagram in 1994, in which the real world is placed on one end and the virtual world on the other.
Real environment <-> Augmented Reality (AR) <-> Augmented Virtuality (AV) <-> Virtual environment
(the middle of the continuum, spanning AR and AV, constitutes Mixed Reality, MR)

Virtuality continuum by Paul Milgram and Fumio Kishino (1994)
A SHORT OVERVIEW OF AR

We define Augmented Reality as integrating 3D virtual objects or scenes into a 3D environment in real time (cf. Azuma, 1997).

WHERE 3D VIRTUAL OBJECTS OR SCENES COME FROM

What is shown in the virtual world is created first. There are three ways of creating virtual objects:

1. By hand: using 3D computer graphics

Designers create 3D drawings of objects, game developers create 3D drawings of (human) figures, and (urban) architects create 3D drawings of buildings and cities. This 3D modeling by (product) designers, architects and visual artists is done using specific software, and numerous software programs have been developed. While some packages can be downloaded for free, others are quite expensive. Well-known examples are Maya, Cinema 4D, 3ds Max, Blender, SketchUp, Rhinoceros, SolidWorks, Revit, ZBrush, AutoCAD and Autodesk. By now at least 170 different software programs are available.
2. By computer-controlled imaging equipment / 3D scanners

We can distinguish different types of three-dimensional scanners: the ones used in the bio-medical world and the ones used for other purposes, although there is some overlap. Inspecting a piece of medieval art and inspecting a living human being are different tasks, but somehow also alike. In recent years we have seen a vigorous expansion of the use of image-producing bio-medical equipment. We owe these developments to the work of engineer Sir Godfrey Hounsfield and physicist Allan Cormack, among others, who were jointly awarded the Nobel Prize in 1979 for their pioneering work on X-ray computed tomography (CT). Another pair of Nobel Prize winners are Paul C. Lauterbur and Peter Mansfield, who won the prize in 2003 for their discoveries concerning magnetic resonance imaging (MRI). Although their original goals were different, in the field of Augmented Reality one might use the 3D virtual models that are produced by such systems. However, they have to be processed prior to use in AR because they might be too heavy for real-time rendering. A 3D laser scanner is a device that analyses a real-world object or environment to collect data on its shape and its appearance (i.e. colour). The collected data can then be used to construct digital, three-dimensional models. These scanners are sometimes called 3D digitizers. The difference is that the above medical scanners look inside to create a 3D model, while the laser scanners create a virtual image from the reflection of the outside of an object.
3. Photo and/or film images

It is possible to use a (moving) 2D image such as a picture as a skin on a virtual 3D model. In this way the 2D image gives a three-dimensional impression.
INTEGRATING 3D VIRTUAL OBJECTS IN THE REAL WORLD IN REAL TIME

There are different ways of integrating the virtual objects or scenes into the real world. All three require a display: a screen or monitor, small screens in AR glasses, or an object onto which the 3D images are projected. We distinguish three types of (visual) Augmented Reality:

Display type I: screen based

AR on a monitor, for example on a flatscreen or on a smartphone (using e.g. LAYAR). With this technology we see the real world and, added to it at the same time on a computer screen, monitor, smartphone or tablet computer, the virtual object. In that way, we can, for example, add information to a book, by looking at the book and the screen at the same time.

Artist: Karolina Sobecka | http://www.gravitytrap.com
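At its core, screen-based AR comes down to compositing: each camera frame is combined with a rendered image of the virtual object before being shown on the screen. A minimal numpy sketch of that blending step (the frame sizes and pixel values here are invented for illustration):

```python
import numpy as np

def composite(camera_frame, virtual_layer, alpha_mask):
    """Blend a rendered virtual layer over a camera frame.

    camera_frame:  H x W x 3 uint8  -- the real world as seen by the camera
    virtual_layer: H x W x 3 uint8  -- the rendered virtual object
    alpha_mask:    H x W float in [0, 1], 1 where the virtual object is opaque
    """
    a = alpha_mask[..., None]  # broadcast the mask over the colour channels
    out = camera_frame * (1.0 - a) + virtual_layer * a
    return out.astype(np.uint8)

# A tiny 4x4 "frame": the virtual object covers the top-left 2x2 corner.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)    # the real world
virtual = np.full((4, 4, 3), 255, dtype=np.uint8)  # the virtual object
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
result = composite(frame, virtual, mask)
print(result[0, 0, 0], result[3, 3, 0])  # → 255 100 (virtual vs. real pixel)
```

A real system would render the virtual layer from a tracked camera pose every frame; the blending itself stays this simple.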
Display type II: AR glasses (off-screen)

A far more sophisticated but not yet consumer-friendly method uses AR glasses or a head mounted display (HMD), also called a head-up display. With this device the extra information is mixed with one’s own perception of the world. The virtual images appear in the air, in the real world, around you, and are not projected on a screen. In type II there are two ways of mixing the real world with the virtual world:

Video see-through: a camera captures the real world. The virtual images are mixed with the captured video images of the real world, and this mix creates an Augmented Reality.

Optical see-through: the real world is perceived directly with one’s own eyes in real time. Via small translucent mirrors in goggles, virtual images are displayed on top of the perceived reality.
Display type III: projection based Augmented Reality

With projection-based AR we project virtual 3D scenes or objects onto the surface of a building, an object or a person. To do this, we need to know the exact dimensions of the object we project the AR content onto. The projection is then seen on the object or building with remarkable precision. This can generate very sophisticated or wild projections on buildings. The Augmented Matter in Context group, led by Jouke Verlinden at the Faculty of Industrial Design Engineering, TU Delft, uses projection-based AR for manipulating the appearance of products.
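Knowing the exact dimensions of the target comes down to calibration: finding the mapping from projector pixels to points on the surface. For a flat facade this mapping is a 3x3 homography, which can be fitted from four measured correspondences with the Direct Linear Transform. A sketch under invented numbers (the projector resolution and the measured facade points are illustrative, not from the article):

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: fit the 3x3 homography H that maps
    projector pixels (src) onto measured points on the surface (dst)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H (flattened to 9 values) is the null vector of this system,
    # i.e. the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def project(H, x, y):
    """Apply H to a projector pixel and dehomogenise."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Four projector-pixel corners and where they land on the facade (metres).
src = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
dst = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
H = homography(src, dst)
u, v = project(H, 512, 384)      # centre of the projector image
print(round(u, 3), round(v, 3))  # → 2.0 1.5
```

For non-planar objects (a product, a person), practitioners instead use a full 3D model of the surface plus a calibrated projector pose, but the planar case shows the principle.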
CONNECTING ART AND TECHNOLOGY

The 2011 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) was held in Basel, Switzerland. In the track Arts, Media, and Humanities, 40 articles were presented discussing the connection of ‘hard’ physics and ‘soft’ art. There are several ways in which art and Augmented Reality technology can be connected: we can, for example, make art with Augmented Reality technology, create Augmented Reality artworks or use Augmented Reality technology to show and explain existing art (such as a monument like the Greek Parthenon or paintings from the caves of Lascaux). Most of the contributions to the conference concerned Augmented Reality as a tool to present, explain or augment existing art. However, some visual artists use AR as a medium to create art.

The role of the artist in working with the emerging technology of Augmented Reality has been discussed by Helen Papagiannis in her ISMAR paper The Role of the Artist in Evolving AR as a New Medium (2011). In her paper, Helen Papagiannis reviews how the use of technology as a creative medium has been discussed in recent years. She points out that in 1988 John Pearson wrote about how the computer offers artists “new means for expressing their ideas” (p. 73, cited in Papagiannis, 2011, p. 61). According to Pearson, “Technology has always been, the handmaiden of the visual arts, as is obvious, a technical means is always necessary for the visual communication of ideas, of expression or the development of works of art—tools and materials are required.” (p. 73) However, he points out that new technologies “were not developed by the artistic community for artistic purposes, but by science and industry to serve the pragmatic or utilitarian needs of society.” (p. 73, cited in Papagiannis, 2011, p. 61)

As Helen Papagiannis concludes, it is then up to the artist “to act as a pioneer, pushing forward a new aesthetic that exploits the unique materials of the novel technology” (2011, p. 61). Like Helen, we believe this also holds for the emerging field of AR technologies, and we hope artists will set out to create exciting new Augmented Reality art and thereby contribute to the interplay between art and technology. An interview with Helen Papagiannis can be found on page 12 of this magazine. A portrait of the artist Marina de Haas, who did a residency at the AR Lab, can be found on page 60.
REFERENCES

■ Milgram, P. and Kishino, F., “A Taxonomy of Mixed Reality Visual Displays,” IEICE Transactions on Information Systems, vol. E77-D, no. 12, 1994, pp. 1321-1329.

■ Azuma, Ronald T., “A Survey of Augmented Reality,” Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), pp. 355-385.

■ Papagiannis, H., “The Role of the Artist in Evolving AR as a New Medium,” 2011 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) – Arts, Media, and Humanities (ISMAR-AMH), Basel, Switzerland, pp. 61-65.

■ Pearson, J., “The Computer: Liberator or Jailer of the Creative Spirit,” Leonardo, Supplemental Issue: Electronic Art, 1 (1988), pp. 73-80.
BIOGRAPHY - HELEN PAPAGIANNIS

Helen Papagiannis is a designer, artist, and PhD researcher specializing in Augmented Reality (AR) in Toronto, Canada. Helen has been working with AR since 2005, exploring the creative possibilities for AR with a focus on content development and storytelling. She is a Senior Research Associate at the Augmented Reality Lab at York University, in the Department of Film, Faculty of Fine Arts. Helen has presented her interactive artwork and research at global juried conferences and events including TEDx (Technology, Entertainment, Design), ISMAR (International Symposium on Mixed and Augmented Reality) and ISEA (International Symposium on Electronic Art). Prior to her Augmented life, Helen was a member of the internationally renowned Bruce Mau Design studio, where she was project lead on “Massive Change: The Future of Global Design.” Read more about Helen’s work on her blog and follow her on Twitter: @ARstories.

www.augmentedstories.com
INTERVIEW WITH HELEN PAPAGIANNIS

BY HANNA SCHRAFFENBERGER

What is Augmented Reality?

Augmented Reality (AR) is a real-time layering of virtual digital elements including text, images, video and 3D animations on top of our existing reality, made visible through AR-enabled devices such as smartphones or tablets equipped with a camera. I often compare AR to cinema when it was first new, for we are at a similar moment in AR’s evolution where there are currently no conventions or set aesthetics; this is a time ripe with possibilities for AR’s creative advancement. Like cinema when it first emerged, AR has commenced with a focus on the technology with little consideration of content. AR content needs to catch up with AR technology. As a community of designers, artists, researchers and commercial industry, we need to advance content in AR and not stop with the technology, but look at what unique stories and utility AR can present.
So far, AR technologies are still new to many people, and AR works often cause a magical experience. Do you think AR will lose its magic once people get used to the technology and have developed an understanding of how AR works? How have you worked with this ‘magical element’ in your work ‘The Amazing Cinemagician’?

I wholeheartedly agree that AR can create a magical experience. In my TEDx 2010 talk, “How Does Wonderment Guide the Creative Process” (http://youtu.be/ScLgtkVTHDc), I discuss how AR enables a sense of wonder, allowing us to see our environments anew. I often feel like a magician when presenting demos of my AR work live; astonishment fills the eyes of the beholder, questioning, “How did you do that?” So what happens when the magic trick is revealed, as you ask, when the illusion loses its novelty and becomes habitual? In Virtual Art: From Illusion to Immersion (2004), new media art-historian Oliver Grau discusses how audiences are first overwhelmed by new and unaccustomed visual experiences, but later, once “habituation chips away at the illusion”, the new medium no longer possesses “the power to captivate” (p. 152). Grau writes that at this stage the medium becomes “stale and the audience is hardened to its attempts at illusion”; however, he notes that it is at this stage that “the observers are receptive to content and media competence” (p. 152).

When the initial wonder and novelty of the technology wear off, will it be then that AR is explored as a possible media format for various content and receives a wider public reception as a mass medium? Or is there an element of wonder that needs to exist in the technology for it to be effective and flourish?
Picture: Pippin Lee

“Pick a card. Place it here. Prepare to be amazed and entertained.”
I believe AR is currently entering the stage of content development and storytelling; however, I don’t feel AR has lost its “power to captivate” or “become stale”, and I believe that as artists, designers, researchers and storytellers, we continue to maintain wonderment in AR and allow it to guide and inspire story and content. Let’s not forget the enchantment and magic of the medium. I often reference the work of French filmmaker and magician Georges Méliès (1861-1938) as a great inspiration and recently named him the Patron Saint of AR in an article for The Creators Project (http://www.thecreatorsproject.com/blog/celebrating-georges-méliès-patron-saint-of-augmented-reality) on what would have been Méliès’ 150th birthday. Méliès was first a stage magician before being introduced to cinema at a preview of the Lumière brothers’ invention, where he is said to have exclaimed, “That’s for me, what a great trick”. Méliès became famous for the “trick-film”, which employed a stop-motion and substitution technique. Méliès applied the newfound medium of cinema to extend magic into novel, seemingly impossible visualities on the screen.

I consider AR, too, to be very much about creating impossible visualities. We can think of AR as a real-time stop-substitution, which layers content dynamically atop the physical environment and creates virtual actualities with shapeshifting objects, magically appearing and disappearing—as Méliès first did in cinema.
In tribute to Méliès, my Mixed Reality exhibit The Amazing Cinemagician integrates Radio Frequency Identification (RFID) technology with the FogScreen, a translucent projection screen consisting of a thin curtain of dry fog. The Amazing Cinemagician speaks to technology as magic, linking the emerging technology of the FogScreen with the pre-cinematic magic lantern and phantasmagoria spectacles of the Victorian era. The installation is based on a card trick, using physical playing cards as an interface to interact with the FogScreen. RFID tags are hidden within each physical playing card. Part of the magic and illusion of this project was to disguise the RFID tag as a normal object, out of the viewer’s sight. Each of these tags corresponds to a short film clip by Méliès, which is projected onto the FogScreen once a selected card is placed atop the RFID tag reader. The RFID card reader is hidden within an antique wooden podium (adding to the aura of the magic performance and historical time period).

The following instructions were provided to the participant: “Pick a card. Place it here. Prepare to be amazed and entertained.” Once the participant placed a selected card atop the designated area on the podium (atop the concealed RFID reader), an image of the corresponding card was revealed on the FogScreen, which was then followed by one of Méliès’ films. The decision was made to provide visual feedback of the participant’s selected card to add to the magic of the experience and to generate a sense of wonder, similar to the witnessing and questioning of a magic trick, with participants asking, “How did you know that was my card? How did you do that?” This curiosity inspired further exploration of each of the cards (and in turn, Méliès’ films) to determine if each of the participant’s cards could be properly identified.
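The installation logic described above is essentially a lookup from hidden tag to card image to clip. A hypothetical sketch of that control flow (the tag IDs, card names and file names are all invented for illustration; the actual reader hardware and FogScreen projection code are not shown):

```python
# Each concealed RFID tag ID maps to the card it is hidden in and to
# the Méliès clip that the card triggers. All values are invented.
CARD_CLIPS = {
    "04A3F2": ("Queen of Hearts", "melies_clip_01.mp4"),
    "09B7C1": ("Ace of Spades", "melies_clip_02.mp4"),
}

def on_card_placed(tag_id):
    """React to a card placed on the podium: first reveal the card
    image on the FogScreen, then project the matching film clip."""
    if tag_id not in CARD_CLIPS:
        return None  # unknown tag: the trick silently does nothing
    card, clip = CARD_CLIPS[tag_id]
    return [f"show card image: {card}", f"project clip: {clip}"]

print(on_card_placed("04A3F2"))
```

The two-step return mirrors the feedback described above: showing the participant’s own card before the film is what produces the “How did you know that was my card?” moment.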
You are an artist and researcher. Your scientific work as well as your artistic work explores how AR can be used as a creative medium. What’s the difference between your work as an artist/designer and your work as a researcher?

Excellent question! I believe that artists and designers are researchers. They propose novel paths for innovation, introducing detours into the usual processes. In my most recent TEDx 2011 talk in Dubai, “Augmented Reality and the Power of Imagination” (http://youtu.be/7QrB4cYxjmk), I discuss how as a designer/artist/PhD researcher I am both a practitioner and a researcher, a maker and a believer. As a practitioner, I do, create, design; as a researcher I dream, aspire, hope. I am a make-believer working with a technology that is about make-believe, about imagining possibilities atop actualities. Now, more than ever, we need more creative adventurers and make-believers to help AR continue to evolve and become a wondrous new medium, unlike anything we’ve ever seen before! I spoke to the importance and power of imagination and make-believe, and how they pertain to AR at this critical junction in the medium’s evolution. When we make-believe and when we imagine, we are in two places simultaneously; make-believe is about projecting or layering our imagination on top of a current situation or circumstance. In many ways, this is what AR is too: layering imagined worlds on top of our existing reality.

Picture: Helen Papagiannis
You’ve had quite a success with your AR pop-up book ‘Who’s Afraid of Bugs?’ In your blog you talk about your inspiration for the story behind the book: it was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. Can you tell us more?

Who’s Afraid of Bugs? was the world’s first Augmented Reality (AR) pop-up book designed for iPad 2 and iPhone 4. The book combines hand-crafted paper engineering and AR on mobile devices to create a tactile and hands-on storybook that explores the fear of bugs through narrative and play. Integrating image tracking in the design, as opposed to the black and white glyphs commonly seen in AR, the book can be enjoyed alone as a regular pop-up book, or supplemented with augmented digital content when viewed through a mobile device equipped with a camera. The book is a playful exploration of fears using AR in a meaningful and fun way. Rhyming text takes the reader through the storybook, where various ‘creepy crawlies’ (spider, ant, and butterfly) are waiting to be discovered, appearing virtually as 3D models you can interact with. A tarantula attacks when you touch it, an ant hyperlinks to educational content with images and diagrams, and a butterfly appears flapping its wings atop a flower in a meadow. Hands are integrated throughout the book design, whether it’s placing one’s hand down to have the tarantula crawl over you virtually, the hand holding the magnifying lens that sees the ant, or the hands that pop up holding the flower upon which the butterfly appears. It’s a method to involve the reader in the narrative, but it also comments on the unique tactility AR presents, bridging the digital with the physical. Further, the story for the AR pop-up book was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. AR provides a safe, controlled environment to conduct exposure therapy within a patient’s physical surroundings, creating a more believable scenario with heightened presence (defined as the sense of really being in an imagined or perceived place or scenario) and provides greater immediacy than Virtual Reality (VR). A video of the book may be watched at http://vimeo.com/25608606.
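Image tracking of the kind described above recognizes the artwork itself rather than a printed black-and-white glyph. At its simplest, locating a known picture inside a camera frame can be done with normalised cross-correlation, as in this toy numpy sketch (real AR trackers use far more robust feature-based matching; the images here are synthetic stand-ins):

```python
import numpy as np

def locate(frame, template):
    """Find where a known (grayscale) page image appears in a camera frame
    by exhaustive normalised cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            w = frame[i:i + th, j:j + tw]
            w = w - w.mean()
            # Correlation in [-1, 1]; the epsilon guards flat windows.
            score = (w * t).sum() / (np.linalg.norm(w) * np.linalg.norm(t) + 1e-9)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

rng = np.random.default_rng(0)
page = rng.random((8, 8))     # stands in for an illustrated page
frame = np.zeros((20, 20))
frame[5:13, 7:15] = page      # the page as seen somewhere in the frame
print(locate(frame, page))    # → (5, 7)
```

Because the correlation keys on the image content itself, the same page works both as a plain pop-up illustration and as an AR trigger, which is exactly what lets the book be read with or without a device.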
In your work, technology serves as an inspiration. For example, rather than starting with a story which is then adapted to a certain technology, you start out with AR technology, investigate its strengths and weaknesses, and so the story evolves. However, this does not limit you to using only the strengths of the medium. On the contrary, weaknesses such as accidents and glitches have, for example, influenced your work ‘Hallucinatory AR’. Can you tell us a bit more about this work?
Hallucinatory Augmented Reality (<strong>AR</strong>), 2007,<br />
was an experiment which investigated the<br />
possibility of images which were not glyphs/<strong>AR</strong><br />
trackables to generate <strong>AR</strong> imagery. The projects<br />
evolved out of accidents, incidents in earlier<br />
experiments in which the <strong>AR</strong> software was mistaking<br />
non-marker imagery for <strong>AR</strong> glyphs and<br />
attempted to generate <strong>AR</strong> imagery. This confusion,<br />
by the software, resulted in unexpected<br />
and random flickering <strong>AR</strong> imagery. I decided to<br />
explore the creative and artistic possibilities<br />
of this effect further and conduct experiments<br />
with non-traditional marker-based tracking.<br />
The process entailed a study of what types of<br />
non-marker images might generate such ‘hallucinations’<br />
and a search for imagery that would<br />
evoke or call upon multiple <strong>AR</strong> imagery/videos<br />
from a single image/non-marker.<br />
Upon multiple image searches, one image<br />
emerged which proved to be quite extraordinary.<br />
A cathedral stained glass window was<br />
able to evoke four different <strong>AR</strong> videos, the only<br />
instance, from among many other images, in<br />
which multiple <strong>AR</strong> imagery appeared. Upon close<br />
examination of the image, focusing in and out<br />
with a web camera, a face began to emerge in<br />
the black and white pattern. A fantastical image<br />
of a man was encountered. Interestingly, it<br />
was when the image was blurred into this face<br />
using the web camera that the <strong>AR</strong> hallucinatory<br />
imagery worked best, rapidly multiplying and<br />
appearing more prominently. Although numerous<br />
attempts were made with similar images,<br />
no other such instances occurred; this image<br />
appeared to be unique.<br />
The challenge now rested in the choice of what<br />
types of imagery to curate into this hallucinatory<br />
viewing: what imagery would be best suited to<br />
this phantasmagoric and dream-like form?<br />
My criteria for imagery/videos were like-form<br />
and shape, in an attempt to create a collage-like<br />
set of visuals. As the sequence or duration of<br />
the imagery in Hallucinatory <strong>AR</strong> could not be<br />
predetermined, the goal was to identify imagery<br />
that possessed similarities, through which the<br />
possibility for visual synchronicities existed.<br />
Themes of intrusions and chance encounters are<br />
at play in Hallucinatory <strong>AR</strong>, inspired in part by<br />
Surrealist artist Max Ernst. In What is the Mechanism<br />
of Collage? (1936), Ernst writes:<br />
One rainy day in 1919, finding myself in a village<br />
on the Rhine, I was struck by the obsession<br />
which held under my gaze the pages of an illustrated<br />
catalogue showing objects designed for<br />
anthropologic, microscopic, psychologic, mineralogic,<br />
and paleontologic demonstration. There<br />
I found brought together elements of figuration<br />
so remote that the sheer absurdity of that collection<br />
provoked a sudden intensification of<br />
the visionary faculties in me and brought forth<br />
an illusive succession of contradictory images,<br />
double, triple, and multiple images, piling up<br />
on each other with the persistence and rapidity<br />
which are particular to love memories and visions<br />
of half-sleep (p. 427).<br />
Of particular interest to my work in exploring<br />
and experimenting with Hallucinatory <strong>AR</strong> was<br />
Ernst’s description of an “illusive succession of<br />
contradictory images” that were “brought forth”<br />
(as though independent of the artist), rapidly<br />
multiplying and “piling up” in a state of “half-sleep”.<br />
Similarities can be drawn to the process<br />
of the seemingly disparate <strong>AR</strong> images jarringly<br />
coming in and out of view, layered atop one<br />
another.<br />
One wonders if these visual accidents are what<br />
the future of <strong>AR</strong> might hold: of unwelcome<br />
glitches in software systems as Bruce Sterling<br />
describes on Beyond the Beyond in 2009; or<br />
perhaps we might come to delight in the visual<br />
poetry of these Augmented hallucinations that<br />
are “As beautiful as the chance encounter of a<br />
sewing machine and an umbrella on an operating<br />
table.” 1<br />
To a computer scientist, these ‘glitches’, as<br />
applied in Hallucinatory <strong>AR</strong>, could potentially<br />
be viewed or interpreted as a disaster, as an<br />
example of the technology failing. To the artist,<br />
however, there is poetry in these glitches, with<br />
new possibilities of expression and new visual<br />
forms emerging.<br />
On the topic of glitches and accidents, I’d like to<br />
return to Méliès. Méliès became famous for the<br />
stop trick, or double exposure special effect,<br />
a technique which evolved from an accident:<br />
Méliès’ camera jammed while filming the streets<br />
of Paris; upon playing back the film, he observed<br />
an omnibus transforming into a hearse. Rather<br />
than discounting this as a technical failure, or<br />
glitch, he utilized it as a technique in his films.<br />
Hallucinatory <strong>AR</strong> also evolved from an accident,<br />
which was embraced and applied in an attempt<br />
to evolve a potentially new visual mode in the<br />
medium of <strong>AR</strong>. Méliès introduced new formal<br />
styles, conventions and techniques that were<br />
specific to the medium of film; novel styles and<br />
new conventions will also emerge from <strong>AR</strong> artists<br />
and creative adventurers who fully embrace<br />
the medium.<br />
[1] Comte de Lautréamont’s often-quoted allegory, famous for inspiring both Max Ernst and André Breton, qtd. in: Williams, Robert. “Art Theory: An<br />
Historical Introduction.” Malden, MA: Blackwell<br />
Publishing, 2004: 197<br />
“As beautiful as the chance<br />
encounter of a sewing<br />
machine and an umbrella<br />
on an operating table.”<br />
Comte de Lautréamont<br />
Picture: Pippin Lee<br />
THE TECHNOLOGY BEHIND<br />
AUGMENTED REALITY<br />
Augmented Reality (<strong>AR</strong>) is a field that is primarily concerned with realistically adding computer-generated images to the image one perceives from the real world.<br />
<strong>AR</strong> comes in several flavors. Best known is the<br />
practice of using flatscreens or projectors,<br />
but nowadays <strong>AR</strong> can be experienced even on<br />
smartphones and tablet PCs. The crux is that 3D<br />
digital data from another source is added to the<br />
ordinary physical world, which is for example<br />
seen through a camera. We can create this additional<br />
data ourselves, e.g. using 3D drawing<br />
programs such as 3D Studio Max, but we can<br />
also add CT and MRI data or even live TV images<br />
to the real world. Likewise, animated three-dimensional objects (avatars), which can then be displayed in the real world, can be made using a visualization program like Cinema 4D.<br />
Instead of displaying information on conventional monitors, the data can also be added to the vision of the user by means of a head-mounted display (HMD) or Head-Up Display. This is a second, less well-known form of Augmented Reality. It is already familiar to fighter pilots, among others. We distinguish two types of HMDs, namely Optical See-Through (OST) headsets and Video See-Through (VST) headsets. OST headsets use semi-transparent mirrors or prisms, through which one can keep seeing the real world. At the same time, virtual objects can be added to this view using small displays that are placed on top of the prisms.<br />
VSTs are in essence Virtual Reality goggles, so<br />
the displays are placed directly in front of your<br />
eyes. In order to see the real world, there are<br />
two cameras attached on the other side of the<br />
little displays. You can then see the Augmented<br />
Reality by mixing the video signal coming from<br />
the camera with the video signal containing the<br />
virtual objects.<br />
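That mixing step of a VST headset can be made concrete with a few lines of code. The sketch below is my own minimal illustration (not a description of any particular headset’s pipeline): it alpha-blends a rendered virtual layer, an RGBA image, over the camera frame, which is, per pixel, all that mixing the camera signal with the signal containing the virtual objects amounts to.<br />

```python
import numpy as np

def composite(camera_rgb, virtual_rgba):
    """Overlay a rendered RGBA layer onto a camera frame (video see-through).

    camera_rgb:   (H, W, 3) float array in [0, 1] -- the real world
    virtual_rgba: (H, W, 4) float array in [0, 1] -- rendered virtual objects,
                  with alpha = 0 wherever there is no virtual content
    """
    rgb = virtual_rgba[..., :3]
    alpha = virtual_rgba[..., 3:4]  # keep the last axis for broadcasting
    return alpha * rgb + (1.0 - alpha) * camera_rgb

# Toy frames: a grey camera image and one opaque red square of virtual content.
camera = np.full((120, 160, 3), 0.5)
virtual = np.zeros((120, 160, 4))
virtual[40:80, 60:100] = (1.0, 0.0, 0.0, 1.0)  # RGBA: opaque red

frame = composite(camera, virtual)
```

A VST headset runs this operation on every camera frame, once per eye; an OST headset skips it, because its semi-transparent optics do the blending physically.<br />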
Underlying technology<br />
Screens and glasses<br />
Unlike screen-based <strong>AR</strong>, HMDs provide depth perception, as both eyes receive an image. When objects are projected on a 2D screen, one can convey an experience of depth by letting the objects move. Recent 3D screens allow you to view stationary objects in depth.<br />
3D televisions that work with glasses quickly alternate the right and left image; in sync with this, the glasses use active shutters which let the image reach the left and the right eye in turn. This happens so fast that it looks as if you see the left and right images simultaneously. 3D television displays that work without glasses make use of little lenses which are placed directly on the screen. These refract the left and right images, so that each eye can only see the corresponding image (see for example www.dimenco.eu/display-technology). This is essentially the same method as used in the well-known 3D postcards on which a beautiful lady winks when the card is slightly turned. 3D film makes use of two projectors that show the left and right images simultaneously; however, each of them is polarized in a different way. The left and right lenses of the glasses have matching polarizations and only let through the light of the corresponding projector.<br />
The important point with screens is that you are always bound to the physical location of the display, while headset-based techniques allow you to roam freely. This is called immersive visualization: you are immersed in a virtual world. You can walk around in the 3D world and move around and enter virtual 3D objects.<br />
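The alternation used by shutter glasses can be sketched as a simple frame scheduler. This toy example is my own illustration with an assumed 120 Hz display: left and right images are interleaved, the glasses open the matching eye, and each eye effectively receives 60 images per second.<br />

```python
# Time-sequential stereo: the display alternates left/right images while the
# active shutter glasses open the matching eye and block the other.
DISPLAY_HZ = 120  # assumed display refresh rate

def schedule(n_frames):
    """Yield (frame_index, shown_image, open_eye) for n_frames refreshes."""
    for i in range(n_frames):
        eye = "left" if i % 2 == 0 else "right"
        # The shutter is synced to the shown image, so the other eye sees black.
        yield i, f"{eye}_image", eye

frames = list(schedule(8))
left_rate = DISPLAY_HZ * sum(1 for _, _, e in frames if e == "left") / len(frames)
```

Each eye thus gets half the refresh rate, which is why time-sequential 3D needs a fast display to avoid visible flicker.<br />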
Video see-through <strong>AR</strong> will become popular within a very short time and will ultimately become an extension of the smartphone, because both display technology and camera technology have made great strides with the advent of smartphones. What currently still might stand in the way of smartphone models is computing power and energy consumption. Companies such as Microsoft, Google, Sony and Zeiss will enter the consumer market soon with <strong>AR</strong> technology.<br />
Tracking technology<br />
A current obstacle for major applications, which will soon be resolved, is the tracking technology. The problem with <strong>AR</strong> is embedding the virtual objects in the real world. You can compare this with color printing: the colors, e.g. cyan, magenta, yellow and black, have to be printed properly aligned to each other. What you often see in prints which are not yet cut are so-called fiducial markers on the edge of the printing plates that serve as a reference for the alignment of the colors. These are also necessary in <strong>AR</strong>. Often, you see that markers are used onto which a 3D virtual object is projected. Moving and rotating the marker lets you move and rotate the virtual object. Such a marker is comparable to the fiducial marker in color printing.<br />
With the help of computer vision technology, the camera of the headset can identify the marker and, based on its size, shape and position, conclude the relative position of the camera. If you move your head relative to the marker (with the virtual object), the computer knows how the image on the display must be transformed so that the virtual object remains stationary. And conversely, if your head is stationary and you rotate the marker, it knows how the virtual object should rotate so that it remains on top of the marker.<br />
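The geometry behind this step can be sketched with a planar homography: from four observed marker corners, one can solve for the 3×3 matrix that maps marker coordinates to image pixels, and the camera’s relative position follows from that matrix. The code below is my own minimal illustration using the standard DLT (Direct Linear Transform) method, not the software described here; the corner coordinates are made up.<br />

```python
import numpy as np

def homography_from_marker(marker_pts, image_pts):
    """Direct Linear Transform: solve H so that image ~ H * marker.

    marker_pts, image_pts: four (x, y) corner correspondences.
    Returns the 3x3 homography, normalized so that H[2, 2] == 1.
    """
    rows = []
    for (x, y), (u, v) in zip(marker_pts, image_pts):
        # Each correspondence contributes two linear constraints on H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows, dtype=float)
    # h is the null vector of A: the right singular vector belonging to the
    # smallest singular value.
    h = np.linalg.svd(A)[2][-1]
    H = h.reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2D point (with homogeneous division)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Marker corners in marker units, and where the camera saw them (pixels).
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
seen = [(200, 100), (300, 110), (290, 210), (190, 200)]
H = homography_from_marker(marker, seen)
```

Given the camera intrinsics, H can be decomposed into the rotation and translation of the camera relative to the marker; the virtual object is then rendered with the inverse motion, so that it appears to remain stationary on the marker.<br />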
<strong>AR</strong> smartphone applications such as Layar use the built-in GPS and compass for the tracking. This has an accuracy of meters and measures angles with an error of 5-10 degrees. Camera-based tracking, however, is accurate to the centimetre and measures angles to within a few degrees. Nowadays, using markers for the tracking is already out of date and we use so-called “natural feature tracking”, also called “keypoint tracking”. Here, the computer searches for conspicuous (salient) key points in the left and right camera image. If, for example, you twist your head, this shift is determined on the basis of those key points at more than 30 frames per second. This way, a 3D map of these keypoints can be built and the computer knows the relationship (distance and angle) between the keypoints and the stereo camera.<br />
This method is more robust than marker-based tracking because you have many keypoints — widely spread in the scene — and not just the four corners of the marker close together in the scene. If someone walks in front of the camera and blocks some of the keypoints, there will still be enough keypoints left and the tracking is not lost. Moreover, you do not have to stick markers all over the world.<br />
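That robustness claim can be illustrated with a toy computation: estimate the camera shift between two frames as the consensus displacement of many matched keypoints, and note that the estimate survives when part of them is occluded. This is my own simplified sketch (a real tracker matches feature descriptors and solves a full six-degree-of-freedom pose; here the keypoint positions and the occlusion are simulated):<br />

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 keypoints spread widely over the scene (pixel coordinates).
frame1 = rng.uniform(0, 640, size=(200, 2))

# The camera pans: every point shifts by the same amount, plus a little
# measurement noise.
true_shift = np.array([12.0, -5.0])
frame2 = frame1 + true_shift + rng.normal(0, 0.3, size=frame1.shape)

def estimate_shift(pts1, pts2, visible):
    """Consensus displacement from the keypoints that are still visible.

    The median ignores stray outliers, and losing many points barely matters
    as long as enough well-spread keypoints remain.
    """
    d = pts2[visible] - pts1[visible]
    return np.median(d, axis=0)

# Someone walks in front of the camera and blocks about 40% of the keypoints.
visible = rng.uniform(size=len(frame1)) > 0.4
shift = estimate_shift(frame1, frame2, visible)
```

A marker, by contrast, is lost entirely the moment its four corners are covered.<br />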
The TU Delft has done research on <strong>AR</strong> since 1999 and collaborates with the Royal Academy of Art (KABK) in The Hague in the <strong>AR</strong> <strong>Lab</strong> (Royal Academy, TU Delft, Leiden University, various SMEs) in the realization of applications. Since 2006, the university works with the art academy in The Hague. The idea is that <strong>AR</strong> is a new technology with its own merits. Artists are very good at finding out what is possible with the new technology. Here are some pictures of realized projects.<br />
Fig 1. The current technology replaces the markers with natural feature tracking, or so-called keypoint tracking. Instead of the four corners of the marker, the<br />
computer itself determines which points in the left and<br />
right images can be used as anchor points for calculating<br />
the 3D pose of the camera in 3D space. From top:<br />
1: you can use all points in the left and right images<br />
to slowly build a complete 3D map. Such a map can,<br />
for example, be used to relive your past experience<br />
because you can again walk in the now virtual space.<br />
2: the 3D keypoint space and the trace of the<br />
camera position within it.<br />
3: keypoints (the color indicates the suitability)<br />
4: you can place virtual objects (eyes) on an existing<br />
surface<br />
Fig 2. Virtual furniture exhibition at the Salone del Mobile in Milan (2008); students of the Royal Academy of Art, The Hague show their furniture by means of <strong>AR</strong> headsets. This saves transportation costs.<br />
Fig 3. Virtual sculpture exhibition in Kröller-Müller (2009). From left:<br />
1) visitors on adventure with laptops on walkers, 2) inside with an optical see-through headset,<br />
3) large pivotable screen on a field of grass, 4) virtual image.<br />
Fig 4. Exhibition in Museum Boijmans van Beuningen (2008-2009). From left: 1) Sgraffitto in 3D;<br />
2) the 3D print version may be picked up by the spectator, 3) animated shards, the table covered<br />
in ancient pottery can be seen via the headset, 4) scanning antique pottery with the CT scanner<br />
delivers a 3D digital image.<br />
Fig 5. The TUD, partially in collaboration with the Royal Academy (with the oldest industrial design<br />
course in the Netherlands), has designed a number of headsets. This design of headsets is an ongoing<br />
activity. From left: 1) first optical see-through headset with Sony headset and self-made inertia<br />
tracker (2000), 2) on a construction helmet (2006), 3) SmartCam and tracker taped on a Cyber Mind<br />
Visette headset (2007); 4) headset design with engines by Niels Mulder, a student at Royal Academy<br />
of Art, The Hague (2007), based on Cybermind technology, 5) low cost prototype based on the Carl<br />
Zeiss Cinemizer headset, 6) future <strong>AR</strong> visor?, 7) future <strong>AR</strong> lens?<br />
There are many applications that can be realized using <strong>AR</strong>; they will find their way in the coming decades:<br />
1. Head-up displays have already been used for many years in the air force for fighter pilots; this can be extended to other vehicles and civil applications.<br />
2. The billboards during the broadcast of a football game are essentially also <strong>AR</strong>; more can be done by also involving the game itself and allowing interaction of the user, such as off-side line projection.<br />
3. In the professional sphere, you can, for example, visualize where pipes under the street lie or should lie. Ditto for designing ships, houses, planes, trucks and cars. What’s outlined in a CAD drawing could be drawn in the real world, allowing you to see in 3D if and where there is a mismatch.<br />
4. You can easily find books you are looking for in the library.<br />
5. You can find out where restaurants are in a city...<br />
6. You can pimp theater / musical / opera / pop concerts with (immersive) <strong>AR</strong> decor.<br />
7. You can arrange virtual furniture or curtains from the IKEA catalog and see how they look in your home.<br />
8. Maintenance of complex devices will become easier, e.g. you can virtually see where the paper in the copier is jammed.<br />
9. If you enter a restaurant or the hardware store, a virtual avatar can show you the place to find that special bolt or table.<br />
Showing the Serra room in Museum Boijmans Van Beuningen during the exhibition Sgraffito in 3D<br />
Picture: Joachim Rotteveel<br />
RE-INTRODUCING MOSQUITOS<br />
MAARTEN LAMERS<br />
AROUND 2004, MY YOUNGER BROTHER VALENTIJN INTRODUCED ME TO THE FASCINATING WORLD OF AUGMENTED REALITY. HE WAS A MOBILE PHONE SALESMAN AT THE TIME, AND SIEMENS HAD JUST LAUNCHED THEIR FIRST “SMARTPHONE”, THE BULKY SIEMENS SX1. THIS PHONE WAS QUITE MARVELOUS, WE THOUGHT – IT RAN THE SYMBIAN OPERATING SYSTEM, HAD A BUILT-IN CAMERA, AND CAME WITH… THREE GAMES.<br />
One of these games was Mozzies, a.k.a. Virtual Mosquito Hunt, which apparently won a 2003 Best Mobile Game Award, and my brother was eager to show it to me in the store where he worked at that time. I was immediately hooked… Mozzies lets you kill virtual mosquitos that fly around superimposed over the live camera feed. By physically moving the phone you could chase after the mosquitos when they attempted to fly off the phone’s display. Those are all the ingredients for Augmented Reality, in my personal opinion: something that interacts with my perception and manipulation of the world around me, at that location, at that time. And Mozzies did exactly that.<br />
Now, almost eight years later, not much has changed. Whenever people around me speak of <strong>AR</strong> (because they got tired of saying “Augmented Reality”), they still refer to bulky equipment (even bulkier than the Siemens SX1!) that projects stuff over a live camera feed and lets you interact with whatever that “stuff” is. In Mozzies it was pesky little mosquitos; nowadays it is anything from restaurant information to crime scene data. But nothing really changed, right?<br />
Right! Technology became more advanced, so we no longer need to hold the phone in our hand, but get to wear it strapped to our skull in the form of goggles. But the idea is unchanged: you look at fake stuff in the real world and physically move around to deal with it. You still don’t get the tactile sensation of swatting a mosquito or collecting “virtually heavy” information. You still don’t even hear the mosquito flying around you… It’s time to focus on those matters too, in my opinion. Let’s take up the challenge and make <strong>AR</strong> more than visual, exploring interaction models for other senses. Let’s enjoy the full experience of seeing, hearing, and particularly swatting mosquitos, but without the itchy bites.<br />
LIEVEN VAN VELTHOVEN — THE RACING STAR<br />
“IT AIN’T FUN IF IT AIN’T REAL TIME”<br />
BY HANNA SCHRAFFENBERGER<br />
WHEN I ENTER LIEVEN VAN VELTHOVEN’S ROOM, THE PEOPLE FROM THE EFTELING HAVE JUST LEFT. THEY ARE INTERESTED IN HIS ‘VIRTUAL GROWTH’ INSTALLATION. AND THEY ARE NOT THE ONLY ONES INTERESTED IN LIEVEN’S WORK. IN THE LAST YEAR, HE HAS WON THE JURY AWARD FOR BEST NEW MEDIA PRODUCTION 2011 OF THE INTERNATIONAL CINEKID YOUTH MEDIA FESTIVAL AS WELL AS THE DUTCH GAME AWARD 2011 FOR THE BEST STUDENT GAME. THE WINNING MIXED REALITY GAME ‘ROOM RACERS’ HAS BEEN SHOWN AT THE DISCOVERY FESTIVAL, MEDIAMATIC, THE STRP FESTIVAL AND THE ZKM IN KARLSRUHE. HIS VIRTUAL GROWTH INSTALLATION HAS EMBELLISHED THE STREETS OF AMSTERDAM AT NIGHT. NOW, HE IS GOING TO SHOW ROOM RACERS TO ME, IN HIS LIVING ROOM — WHERE IT ALL STARTED.<br />
The room is packed with stuff and at first sight it seems rather chaotic, with a lot of random things lying on the floor. There are a few plants, which probably don’t get enough light, because Lieven likes the dark (that’s when his projections look best). It is only when he turns on the beamer that I realize that his room is actually not chaotic at all. The shoe, magnifying glass, video games, tape and stapler which cover the floor are all part of the game.<br />
“You create your own race game<br />
tracks by placing real stuff on the<br />
floor”<br />
Lieven tells me. He hands me a controller and<br />
soon we are racing the little projected cars<br />
around the chocolate spread, marbles, a remote<br />
control and a flashlight. Trying not to crash the<br />
car into a belt, I tell him what I remember about<br />
when I first met him a few years ago at a Media<br />
Technology course at Leiden University. Back<br />
then, he was programming a virtual bird, which<br />
would fly from one room to another, preferring<br />
the room in which it was quiet. Loud and sudden<br />
sounds would scare the bird away into another<br />
room. The course for which he developed it was<br />
called Sound Space Interaction, and his installation<br />
was solely based on sound. I ask him<br />
whether the virtual bird was his first contact<br />
with Augmented Reality. Lieven laughs.<br />
“It’s interesting that you call it<br />
<strong>AR</strong>, as it only uses sound!”<br />
Indeed, most of Lieven’s work is based on<br />
interactive projections and plays with visual<br />
augmentations of our real environment. But like<br />
the bird, all of them are interactive and work<br />
in real-time. Looking back, the bird was not his<br />
first <strong>AR</strong> work.<br />
“My first encounter with <strong>AR</strong> was<br />
during our first Media Technology<br />
course — a visit to the Ars Electronica<br />
festival in 2007 — where<br />
I saw Pablo Valbuena’s Augmented<br />
Sculpture. It was amazing. I was<br />
asking myself, can I do something<br />
like this but interactive instead?”<br />
Armed with a bachelor in technical computer<br />
science from TU Delft and the newfound possibility<br />
to bring in his own curiosity and ideas at<br />
the Media Technology Master program at Leiden<br />
University, he set out to build his own interactive<br />
projection based works.<br />
Room Racers<br />
Up to four players race their virtual cars around real objects<br />
which are lying on the floor. Players can drop in or out of the<br />
game at any time. Everything you can find can be placed on<br />
the floor to change the route.<br />
Room Racers makes use of projection-based mixed reality.<br />
The structure of the floor is analysed in real-time using a<br />
modified camera and self-written software. Virtual cars are<br />
projected onto the real environment and interact with the<br />
detected objects that are lying on the floor.<br />
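A much simplified version of that detection step can be sketched as follows; this is my own toy reconstruction, not Van Velthoven’s self-written software. The camera image of the floor is thresholded into an obstacle map, and the projected car’s footprint is tested against it every frame (the image values and the threshold here are made up).<br />

```python
import numpy as np

def obstacle_map(floor_gray, threshold=0.35):
    """Objects on a dark floor show up as bright pixels: mark them as blocked."""
    return floor_gray > threshold

def car_collides(blocked, x, y, half=2):
    """Check the cells under the car's footprint (a small square) for obstacles."""
    patch = blocked[y - half:y + half + 1, x - half:x + half + 1]
    return bool(patch.any())

# Toy camera frame: a dark floor with one bright object lying on it.
floor = np.full((48, 64), 0.1)
floor[20:30, 30:44] = 0.9  # the object (a shoe, say)
blocked = obstacle_map(floor)
```

In the real installation the analysis runs in real-time on live camera input, so rearranging the objects on the floor immediately changes the track.<br />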
The game has won the Jury Award for Best New Media Production 2011 of the international Cinekid Youth Media Festival, and the Dutch Game Award 2011 for Best Student Game. Room Racers has been shown at several international media festivals.<br />
You can play Room Racers at the 'Car Culture' exposition<br />
at the Lentos Kunstmuseum in Linz, Austria until 4th of July<br />
2012.<br />
Picture: Lieven van Velthoven, Room Racers at ZKM | Center for Arts and Media in Karlsruhe, Germany, on June 19th, 2011<br />
“The first time I experimented with the combination of the real and the virtual myself was in a piece called Shadow Creatures, which I made with Lisa Dalhuijsen during our first semester in 2007.”<br />
More interactive projections followed in the next semester, and in 2008 the idea for Room Racers was born. A first prototype was built in a week: a projected car bumping into real-world things. After that followed months and months of optimization. Everything is done by Lieven himself, mostly at night in front of the computer.<br />
“My projects are never really<br />
finished, they are always work in<br />
progress, but if something works<br />
fine in my room, it’s time to take<br />
it out in the world.”<br />
After having friends over and playing with the<br />
cars until six o’clock in the morning, Lieven<br />
knows it’s time to steer the cars out of his room<br />
and show them to the outside world.<br />
“I wanted to present Room Racers<br />
but I didn’t know anyone, and<br />
no one knew me. There was no<br />
network I was part of.”<br />
Undeterred, Lieven took the initiative and asked the Discovery Festival if they were interested in his work. Luckily, they were — and showed two of his interactive games at the Discovery Festival 2010. After the festival, requests started coming in and the cars kept rolling. When<br />
I ask him about this continuing success he is<br />
divided:<br />
“It’s fun, but it takes a lot of time<br />
— I have not been able to program<br />
as much as I used to.”<br />
His success does surprise him and he especially<br />
did not expect the attention it gets in an art<br />
context.<br />
“I knew it was fun. That became<br />
clear when I had friends over and<br />
we played with it all night. But I<br />
did not expect the awards. And I<br />
did not expect it to be relevant<br />
in the art scene. I do not think<br />
it’s art, it’s just a game. I don’t<br />
consider myself an artist. I am a<br />
developer and I like to do interactive<br />
projections. Room Racers is<br />
my least arty project, nevertheless<br />
it got a lot of response in the<br />
art context.”<br />
A piece which he actually considers more of an<br />
artwork is Virtual Growth: a mobile installation<br />
which projects autonomous growing structures<br />
onto any environment you place it in, be it<br />
buildings, people or nature.<br />
“For me <strong>AR</strong> has to take place in<br />
the real world. I don’t like screens.<br />
I want to get away from them. I<br />
have always been interested in<br />
other ways of interacting with<br />
computers, without mice, without<br />
screens. There is a lot of screen<br />
based <strong>AR</strong>, but for me <strong>AR</strong> is really<br />
about projecting into the real<br />
world. Put it in the real world,<br />
identify real world objects, do it in<br />
real-time, that’s my philosophy. It<br />
ain’t fun if it ain’t real-time. One<br />
day, I want to go through a city<br />
with a van and do projections on<br />
buildings, trees, people and whatever<br />
else I pass.”<br />
For now, he is bound to a bike but that does<br />
not stop him. Virtual Growth works fast and<br />
stable, even on a bike. That has been witnessed<br />
in Amsterdam, where the audiovisual bicycle<br />
project ‘Volle Band’ put beamers on bikes and<br />
invited Lieven to augment the city with his<br />
mobile installation. People who experienced<br />
Virtual Growth on his journeys around Amsterdam,<br />
at festivals and parties, are enthusiastic<br />
about his (‘smashing!’) entertainment-art. As the<br />
virtual structure grows, the audience members<br />
not only start to interact with the piece but also<br />
with each other.<br />
“They put themselves in front<br />
of the projector, have it projecting<br />
onto themselves and pass on<br />
the projection to other people<br />
by touching them. I don’t explain<br />
anything. I believe in simple<br />
ideas, not complicated concepts.<br />
The piece has to speak for itself.<br />
If people try it, immediately get<br />
it, enjoy it and tell other people<br />
about it, it works!”<br />
Virtual Growth works, that becomes clear from<br />
the many happy smiling faces the projection<br />
grows upon. And that’s also what counts for<br />
Lieven.<br />
“At first it was hard, I didn’t get<br />
paid for doing these projects. But<br />
when people see them and are enthusiastic,<br />
that makes me happy.<br />
If I see people enjoying my work,<br />
and playing with it, that’s what<br />
really counts.”<br />
I wonder where he gets the energy to work that much alongside being a student. He tells me that what drives him is that he enjoys it. He likes to spend the evenings with the programming language C#. But the fact that he enjoys working on his ideas not only keeps him motivated, but has also caused him to postpone a few courses at university. While talking, he smokes his<br />
cigarette and takes the ashtray from the floor.<br />
With the road no longer blocked by it, the cars<br />
take a different route now. Lieven might take a<br />
different route soon as well. I ask him, if he will<br />
still be working from his living room, realizing<br />
his own ideas, once he has graduated.<br />
“It’s actually funny. It all started<br />
to fill my portfolio in order to get<br />
a cool job. I wanted to have some<br />
things to show besides a diploma.<br />
That’s why I started realizing my<br />
ideas. It got out of control and<br />
soon I was realizing one idea after<br />
the other. And maybe, I’ll just<br />
continue doing it. But also, there<br />
are quite some companies and<br />
jobs I’d enjoy working for. First<br />
I have to graduate anyway.”<br />
If I have learned anything about Lieven and his work, I am sure his graduation project will be placed in the real world and work in real-time. More than that, it will be fun. It ain’t Lieven if it ain’t fun.<br />
Name: Lieven van Velthoven<br />
Born: 1984<br />
Study: Media Technology MSc, Leiden University<br />
Background: Computer Science, TU Delft<br />
Selected <strong>AR</strong> works: Room Racers, Virtual Growth<br />
Watch: http://www.youtube.com/user/lievenvv<br />
HOW DID WE DO IT:<br />
ADDING VIRTUAL SCULPTURES AT THE KRÖLLER-MÜLLER MUSEUM<br />
By Wim van Eck<br />
ALWAYS WANTED TO CREATE YOUR OWN AUGMENTED REALITY PROJECTS BUT NEVER KNEW HOW? DON’T WORRY, <strong>AR</strong>[T] IS GOING TO HELP YOU! HOWEVER, THERE ARE MANY HURDLES TO OVERCOME WHEN REALIZING AN AUGMENTED REALITY PROJECT. IDEALLY, YOU SHOULD BE A SKILLFUL 3D ANIMATOR TO CREATE YOUR OWN VIRTUAL OBJECTS, AND A GREAT PROGRAMMER TO MAKE THE PROJECT TECHNICALLY WORK. PROVIDED YOU DON’T JUST WANT TO MAKE A FANCY TECH-DEMO, YOU ALSO NEED TO COME UP WITH A GREAT CONCEPT!<br />
My name is Wim van Eck and I work at the <strong>AR</strong> <strong>Lab</strong>, based at the Royal Academy of Art. One of my tasks is to help art students realize their Augmented Reality projects. These students have great concepts, but often lack experience in 3D animation and programming. Logically, I should tell them to follow animation and programming courses, but since the average deadline for their projects is counted in weeks instead of months or years, there is seldom time for that... In the coming issues of <strong>AR</strong>[t] I will explain how the <strong>AR</strong> <strong>Lab</strong> helps students realize their projects and how we try to overcome technical boundaries, showing actual projects we worked on by example. Since this is the first issue of our magazine, I will give a short overview of recommendable programs for Augmented Reality development.<br />
We will start with 3D animation programs, which<br />
we need to create our 3D models. There are<br />
many 3D animation packages; the more well-known<br />
ones include 3ds Max, Maya, Cinema 4D,<br />
Softimage, LightWave, Modo and the open source<br />
Blender (www.blender.org). These are all great<br />
programs; however, at the <strong>AR</strong> <strong>Lab</strong> we mostly use<br />
Cinema 4D (image 1), since it is very user-friendly<br />
and therefore easier to learn. It is a shame<br />
that the free Blender still has a steep learning<br />
curve, since it is otherwise an excellent program.<br />
You can <strong>download</strong> a demo of Cinema 4D at<br />
http://www.maxon.net/<strong>download</strong>s/demo-version.html,<br />
and these are some good tutorial sites to<br />
get you started:<br />
http://www.cineversity.com<br />
http://www.c4dcafe.com<br />
http://greyscalegorilla.com<br />
image 1<br />
image 2 | image 3 (picture by Klaas A. Mulder) | image 4<br />
In case you don’t want to create your own 3D<br />
models, you can also <strong>download</strong> them from various<br />
websites. Turbosquid (http://www.turbosquid.com),<br />
for example, offers good quality, but<br />
often at a high price, while free sites such as<br />
Artist-3d (http://artist-3d.com) have a more<br />
varied quality. When a 3D model is not constructed<br />
properly, it may cause problems when you import<br />
or visualize it. In coming issues of <strong>AR</strong>[t] we<br />
will talk more about optimizing 3D models for<br />
Augmented Reality usage. To actually add these<br />
3D models to the real world you need Augmented<br />
Reality software. Again there are many<br />
options, with new software being added continuously.<br />
Probably the easiest software to use is<br />
Build<strong>AR</strong> (http://www.buildar.co.nz), which is<br />
available for Windows and OS X. It is easy to<br />
import 3D models, video and sound, and there is<br />
a demo available. There are excellent tutorials<br />
on their site to get you started. In case you want<br />
to develop for iOS or Android, the free Junaio<br />
(http://www.junaio.com) is a good option. Their<br />
online GLUE application is easy to use, though<br />
their preferred .md2 format for 3D models is<br />
not the most common. In my opinion the most<br />
powerful Augmented Reality software right now<br />
is Vuforia (https://developer.qualcomm.com/<br />
develop/mobile-technologies/Augmented-reality)<br />
in combination with the excellent game engine<br />
Unity (www.unity3d.com). This combination<br />
offers high-quality visuals with easy-to-script<br />
interaction on iOS and Android devices.<br />
Sweet summer nights<br />
at the Kröller-Müller<br />
Museum.<br />
As mentioned in the introduction, we<br />
will show the workflow of <strong>AR</strong> <strong>Lab</strong> projects in<br />
these ‘How did we do it’ articles. In 2009 the <strong>AR</strong><br />
<strong>Lab</strong> was invited by the Kröller-Müller Museum to<br />
present during the ‘Sweet Summer Nights’, an<br />
evening full of cultural activities in the famous<br />
sculpture garden of the museum. We were asked<br />
to develop an Augmented Reality installation<br />
aimed at the whole family, and found a diverse<br />
group of students to work on the project. Then<br />
the most important part of the project started:<br />
brainstorming!<br />
Our location in the sculpture garden was in<br />
between two sculptures: ‘Man and woman’, a<br />
stone sculpture of a couple by Eugène Dodeigne<br />
(image 2), and ‘Igloo di pietra’, a dome-shaped<br />
sculpture by Mario Merz (image 3). We decided<br />
to read more about these works, and learned<br />
that Dodeigne had originally intended to create<br />
two couples instead of one, placed together in a<br />
wild natural environment. We decided to virtually<br />
add the second couple and also add a more<br />
wild environment, just as Dodeigne initially had<br />
in mind. To be able to see these additions we<br />
placed a screen which can rotate 360 degrees<br />
between the two sculptures (image 4).<br />
A webcam was placed on top of the screen,<br />
and a laptop running <strong>AR</strong>Toolkit (http://www.<br />
hitl.washington.edu/artoolkit) was mounted<br />
on the back of the screen. A large marker was<br />
placed near the sculpture as a reference point<br />
for <strong>AR</strong>Toolkit.<br />
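<strong>AR</strong>Toolkit recovers the full position and orientation of the marker from its four corner points; the basic intuition behind the distance part, though, is simply the pinhole-camera relation: a marker of known size shrinks linearly with distance. A minimal sketch (the focal length and marker sizes below are hypothetical illustration values, not numbers from our setup):<br />

```python
def marker_distance(real_size_m, projected_size_px, focal_px):
    # Pinhole-camera relation: the projected size of an object shrinks
    # linearly with distance, so distance = focal * real_size / projected_size.
    return focal_px * real_size_m / projected_size_px

# hypothetical values: a 0.5 m marker imaged 100 px wide by a camera
# whose focal length corresponds to 800 px
print(marker_distance(0.5, 100, 800))  # 4.0 (metres)
```

The real algorithm (Kato and Billinghurst, 1999) solves for all six degrees of freedom from the perspective distortion of the square, but this ratio is why a larger marker can be tracked from further away.<br />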
Now it was time to create the 3D models of the<br />
extra couple and environment. The students<br />
working on this part of the project didn’t have<br />
much experience with 3D animation, and there<br />
wasn’t much time to teach them, so manually<br />
modeling the sculptures would be a difficult task.<br />
Soon options such as 3D scanning the sculpture<br />
were suggested, but it takes quite some skill<br />
to actually prepare a 3D scan for Augmented<br />
Reality usage. We will talk more about that in<br />
a coming issue of this magazine.<br />
But when we look carefully at our setup (image<br />
5), we can draw some interesting conclusions.<br />
Our screen is immobile, so we will always see our<br />
added 3D model from the same angle. Since<br />
we will never be able to see the back of the 3D<br />
model, there is no need to actually model this<br />
part. This is common practice when making 3D<br />
models; you can compare it with set construction<br />
for Hollywood movies, where they also only<br />
image 5<br />
actually build what the camera will see. This<br />
already saves us quite some work. We can also<br />
see that the screen is positioned quite far away<br />
from the sculpture, and when an object is viewed<br />
from a distance it optically loses its depth.<br />
When you are one meter away from an object<br />
and take one step aside, you will see the side of<br />
the object, but if the same object is a hundred<br />
meters away you will hardly see a change in<br />
perspective when changing your position (see image<br />
6). From that distance people will hardly see the<br />
difference between an actual 3D model and a<br />
plain 2D image. This means we could actually use<br />
photographs or drawings instead of a complex 3D<br />
model, making the whole process easier again.<br />
We decided to follow this route.<br />
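The loss of parallax with distance can be made concrete with a bit of trigonometry. A small sketch (the step size and distances are hypothetical) computing how far your viewing direction rotates when you step one meter aside:<br />

```python
import math

def parallax_deg(step_m, distance_m):
    # Angle (in degrees) over which the viewing direction rotates when
    # the viewer steps `step_m` sideways at `distance_m` from the object.
    return math.degrees(math.atan2(step_m, distance_m))

print(parallax_deg(1.0, 1.0))    # ~45 degrees: you clearly see the side
print(parallax_deg(1.0, 100.0))  # ~0.57 degrees: hardly any change
```

At a hundred meters the change is well under a degree, which is why a flat photograph can pass for a 3D model at that distance.<br />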
image 6 | image 7 | image 8 | image 9 | image 10 | image 11<br />
Original photograph by Klaas A. Mulder<br />
image 12<br />
To be able to place the photograph of the<br />
sculpture in our 3D scene, we have to assign<br />
it to a placeholder, a single polygon; image 7<br />
shows how this could look.<br />
This actually looks quite awful: we see the<br />
statue, but also all the white around it from the<br />
image. To solve this we need to make use of<br />
something called an alpha channel, an option<br />
you can find in every 3D animation package<br />
(image 8 shows where it is located in the material<br />
editor of Cinema 4D). An alpha channel is<br />
a grayscale image which declares which parts<br />
of an image are visible: white is opaque, black<br />
is transparent. Detailed tutorials about alpha<br />
channels are easily found on the internet.<br />
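The blend an alpha channel performs can be sketched in a few lines of code; this hypothetical per-pixel example (the colours are invented for illustration) shows why white (1.0) keeps the statue pixel and black (0.0) lets the background through:<br />

```python
def composite(fg, bg, alpha):
    # Standard alpha blend per colour channel:
    # result = alpha * foreground + (1 - alpha) * background.
    return tuple(round(alpha * f + (1.0 - alpha) * b) for f, b in zip(fg, bg))

statue = (180, 175, 160)  # a stone-coloured pixel of the photograph
garden = (40, 120, 50)    # the live camera image behind it

print(composite(statue, garden, 1.0))  # (180, 175, 160): opaque, statue shows
print(composite(statue, garden, 0.0))  # (40, 120, 50): transparent, garden shows
```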
As you can see, this looks much better (image 9).<br />
We followed the same procedure for the second<br />
statue and the grass (image 10), using many<br />
separate polygons to create enough randomness<br />
for the grass. As long as you see these models<br />
from the right angle, they look quite realistic<br />
(image 11). In this case the 2.5D approach probably<br />
gives even better results than a ‘normal’ 3D<br />
model, and it is much easier to create. Another<br />
advantage is that the 2.5D approach is very easy<br />
to compute, since it uses few polygons, so you<br />
don’t need a very powerful computer to run it,<br />
or you can have many models on screen at the<br />
same time. Image 12 shows the final setup.<br />
For the igloo sculpture by Mario Merz we used<br />
a similar approach. A graphic design student<br />
imagined what could be living inside the igloo,<br />
and started drawing a variety of plants and<br />
creatures. Using the same 2.5D approach as<br />
described before, we used these drawings and<br />
placed them around the igloo, and an animation<br />
was shown of a plant growing out of the igloo<br />
(image 12).<br />
The <strong>Lab</strong> collaborated in this project with students from different departments<br />
of the KABK: Ferenc Molnar, Mit Koevoets, Jing Foon Yu, Marcel<br />
Kerkmans and Alrik Stelling. The <strong>AR</strong> <strong>Lab</strong> team consisted of: Yolande<br />
Kolstee, Wim van Eck, Melissa Coleman and Pawel Pokutycki, supported by<br />
Martin Sjardijn and Joachim Rotteveel.<br />
We can conclude that it is good practice to<br />
analyze your scene before you start making your<br />
3D models. You don’t always need to model all<br />
the detail, and using photographs or drawings<br />
can be a very good alternative. The next issue<br />
of <strong>AR</strong>[t] will feature a new ‘How did we do it’; in<br />
case you have any questions, you can contact me<br />
at w.vaneck@kabk.nl.<br />
PIXELS WANT TO BE FREED!<br />
INTRODUCING AUGMENTED REALITY<br />
ENABLING H<strong>AR</strong>DW<strong>AR</strong>E TECHNOLOGIES<br />
By Jouke Verlinden<br />
1. introduction<br />
From the early head-up display in the movie<br />
“RoboCop” to the present, Augmented Reality<br />
(<strong>AR</strong>) has evolved into a manageable ICT<br />
environment that must be considered by product<br />
designers of the 21st century.<br />
Instead of focusing on a variety of applications<br />
and software solutions, this article will discuss<br />
the essential hardware of Augmented Reality<br />
(<strong>AR</strong>): display techniques and tracking techniques.<br />
We argue that these two fields differentiate <strong>AR</strong><br />
from regular human-computer interfaces, and that tuning<br />
these is essential in realizing an <strong>AR</strong> experience.<br />
As often, there is a vast body of knowledge<br />
behind each of the principles discussed below;<br />
hence, a large variety of literature references<br />
is given.<br />
Furthermore, the first author of this article<br />
found it important to include his own preferences<br />
and experiences throughout this discussion.<br />
We hope that this material strikes a chord and<br />
makes you consider employing <strong>AR</strong> in your designs.<br />
After all, why should digital information<br />
always be confined to a dull, rectangular screen?<br />
2. display technologies<br />
To categorise <strong>AR</strong> display technologies, two<br />
important characteristics should be identified:<br />
the image generation principle and the physical layout.<br />
Generic <strong>AR</strong> technology surveys describe a large<br />
variety of display technologies that support image<br />
generation (Azuma, 1997; Azuma et al., 2001);<br />
these principles can be categorised into:<br />
1. Video-mixing: a camera is mounted somewhere<br />
on the product; computer graphics are combined<br />
with captured video frames in real time. The<br />
result is displayed on an opaque surface, for<br />
example, in an immersive head-mounted display (HMD).<br />
2. See-through: augmentation by this principle<br />
typically employs half-silvered mirrors to<br />
superimpose computer graphics onto the user’s<br />
view, as found in the head-up displays of modern<br />
fighter jets.<br />
3. Projector-based systems: one or more projectors<br />
cast digital imagery directly onto the physical<br />
environment.<br />
As Raskar and Bimber (2004, p. 72) argued, an<br />
important consideration in deploying an Augmented<br />
Reality system is the physical layout of the image<br />
generation. For each image generation principle<br />
mentioned above, the imaging display can be<br />
arranged between user and physical object in<br />
three distinct ways:<br />
a) head-attached, which presents digital images<br />
directly in front of the viewer’s eyes,<br />
establishing a personal information display;<br />
b) hand-held, which is carried by the user and<br />
does not cover the whole field of view;<br />
c) spatial, which is fixed to the environment.<br />
The resulting image generation and arrangement<br />
combinations are summarised in Table 1.<br />
| | 1. video-mixing | 2. see-through | 3. projection-based |<br />
| A. head-attached | head-mounted display (HMD), all three principles | | |<br />
| B. hand-held | handheld devices | | |<br />
| C. spatial | embedded display | see-through boards | spatial projection-based |<br />
Table 1. Image generation principles for Augmented Reality<br />
When the <strong>AR</strong> image generation and layout principles are combined, the following collection of<br />
display technologies is identified: HMD, handheld devices, embedded screens, see-through<br />
boards and spatial projection-based <strong>AR</strong>. These are briefly discussed in the following sections.<br />
2.1 head-mounted display<br />
Head-attached systems refer to HMD solutions,<br />
which can employ any of the three image<br />
generation technologies. Even the first<br />
head-mounted displays, developed at the dawn<br />
of Virtual Reality, already constituted a see-through<br />
system with half-silvered mirrors to merge<br />
virtual line drawings with the physical environment<br />
(Sutherland, 1967). Since then, the variety<br />
of head-attached imaging systems has<br />
expanded and encompasses all three principles<br />
for <strong>AR</strong>: video-mixing, see-through and direct<br />
projection on the physical world (Azuma et al.,<br />
2001). A benefit of this approach is its<br />
hands-free nature. Secondly, it offers personalised<br />
content, enabling each user to have a private<br />
view of the scene with customised and sensitive<br />
data that does not have to be shared. For<br />
most applications, however, HMDs have been<br />
considered inadequate, both for see-through and<br />
video-mixing imaging. According to Klinker et al.<br />
(2002), HMDs introduce a large barrier between<br />
the user and the object, and their resolution is<br />
insufficient for interactive augmented prototyping:<br />
typically 800 × 600 pixels<br />
for the complete field of view (rendering the<br />
user “legally blind” by American standards).<br />
Similar reasoning was found in Bochenek et al.<br />
(2001), in which both the objective and subjective<br />
assessments of HMDs were lower than those of<br />
hand-held or spatial imaging devices. However,<br />
new developments (specifically high-resolution<br />
OLED displays) show promising new devices, both<br />
for the professional market (Carl Zeiss)<br />
and entertainment (Sony); see Figure 1.<br />
2.2 handheld display<br />
Hand-held video-mixing solutions are based on<br />
smartphones, PDAs or other mobile devices<br />
equipped with a screen and camera. With the<br />
advent of powerful mobile electronics, handheld<br />
Figure 1. Recent head-mounted displays (above: KABK The Hague; under: Carl Zeiss).<br />
Augmented Reality technologies are emerging.<br />
By employing built-in cameras on smartphones<br />
or PDAs, video mixing is enabled while concurrent<br />
use is being supported by communication<br />
Figure 2. The VESP’R device for underground infrastructure visualization (Schall et al., 2008).<br />
through wireless networks (Schmalstieg and<br />
Wagner, 2008). The resulting device acts as a<br />
hand-held window of a mixed reality. An example<br />
of such a solution is shown in Figure 2, which<br />
is a combination of an Ultra Mobile Personal<br />
Computer (UMPC), a Global Positioning System<br />
(GPS) antenna for global position tracking, and a<br />
camera for local position and orientation sensing<br />
along with video mixing. As of today, such systems<br />
are found in every modern smartphone,<br />
and apps such as Layar (www.layar.com) and<br />
Junaio (www.junaio.com) offer such functions<br />
for free, allowing different layers of<br />
content to be shown (often social-media based).<br />
The advantage of using a video-mixing approach<br />
is that lag times in processing are less influential<br />
than with see-through or projector-based<br />
systems: the live video feed is delayed as well and<br />
thus establishes a consistent combined image.<br />
This hand-held solution works well for occasional,<br />
mobile use. Long-term use can cause strain in the<br />
arms. The challenges in employing this principle<br />
are the limited screen coverage and resolution<br />
(typically a 4-inch diagonal and a resolution of 320<br />
× 240 pixels). Furthermore, memory, processing<br />
power and graphics processing are limited to rendering<br />
relatively simple 3D scenes, although these<br />
capabilities are rapidly improving with the upcoming<br />
dual-core and quad-core mobile CPUs.<br />
2.3 embedded display<br />
Another <strong>AR</strong> display option is to include a number<br />
of small LCD screens in the observed object in<br />
order to display the virtual elements directly on<br />
the physical object. Although arguably an augmentation<br />
solution, embedded screens do add<br />
digital information on product surfaces.<br />
This practice is found in the later stages of prototyping<br />
mobile phones and similar information<br />
appliances. Such screens typically have a similar<br />
resolution as that of PDAs and mobile phones,<br />
which is QVGA: 320 × 240 pixels. Such devices<br />
are connected to a workstation by a specialised<br />
cable, which can be omitted if autonomously<br />
components are used, such as a smartphone.<br />
Regular embedded screens can only be used on<br />
planar surfaces; their size is limited, and their<br />
weight impedes larger use. With the advent of<br />
novel, flexible e-Paper and Organic Light-Emitting<br />
Diode (OLED) technologies, it might be possible<br />
to cover a part of a physical model with such<br />
screens. To our knowledge, no such systems have<br />
been developed or commercialised so far.<br />
Figure 3. Impression of the Luminex material.<br />
Although it does not support changing light<br />
effects, the Luminex material approximates this<br />
by using an LED/fibreglass-based fabric (see<br />
Figure 3). A Dutch company recently presented<br />
a fully interactive light-emitting fabric based on<br />
integrated RGB LEDs, labelled ‘Lumalive’. These<br />
initiatives can manifest as new ways to support<br />
prototyping scenarios that require a high local<br />
resolution and complete unobstructedness. However,<br />
the fit to the underlying geometry remains a<br />
challenge, as does embedding the associated<br />
control electronics and wiring. An elegant solution<br />
to the second challenge was given by Saakes et al.<br />
(2010), entitled ‘the slow display’: temporarily<br />
changing the colour of photochromic paint by<br />
UV laser projection. This effect lasts for a couple<br />
of minutes and demonstrates how fashion and<br />
<strong>AR</strong> could meet.<br />
2.4 see-through board<br />
See-through boards vary in size between desktop<br />
and hand-held versions. The Augmented<br />
Engineering system (Bimber et al., 2001) and<br />
the <strong>AR</strong> extension of the haptic sculpting project<br />
(Bordegoni and Covarrubias, 2007) are examples<br />
of the use of see-through technologies, which<br />
typically employ a half-silvered mirror to mix<br />
virtual models with a physical object (Figure<br />
4). Similar to the Pepper’s ghost phenomenon,<br />
standard stereoscopic Virtual Reality (VR) workbench<br />
systems such as the Barco Baron are used<br />
to project the virtual information. In addition<br />
to the need to wear shutter glasses to view<br />
stereoscopic graphics, head tracking is required to<br />
align the virtual image between the object and<br />
the viewer. An advantage of this approach is<br />
that digital images are not occluded by the user’s<br />
hands or the environment, and that graphics can<br />
be displayed outside the physical object (e.g.,<br />
to display the environment, annotations or<br />
tools). Furthermore, the user does not have to<br />
wear heavy equipment, and the resolution of the<br />
projection can be extremely high, enabling a<br />
compelling display system for exhibits and trade<br />
fairs. However, see-through boards obstruct user<br />
interaction with the physical object. Multiple<br />
viewers cannot share the same device, although<br />
a limited solution is offered by the Virtual<br />
Showcase, which establishes a faceted and curved<br />
mirroring surface (Bimber, 2002).<br />
Figure 4. The Augmented Engineering see-through display (Bimber et al., 2001).<br />
2.5 spatial projection-based displays<br />
This technique is also known as Shader Lamps<br />
(Raskar et al., 2001) and was extended by<br />
Raskar and Bimber (2004) to a variety of imaging<br />
solutions, including projections on irregular surface<br />
textures and combinations of projections with<br />
(static) holograms. In the field of advertising and<br />
the performing arts, this technique recently gained<br />
popularity under the label Projection Mapping:<br />
projecting on buildings, cars or other large objects,<br />
replacing traditional screens as display means, cf.<br />
Figure 5.<br />
Figure 5. Two projections on a church chapel in Utrecht (Hoeben, 2010).<br />
In such cases, theatre projector systems<br />
are used that are prohibitively expensive (>30,000<br />
euros). The principle of spatial projection-based<br />
technologies is shown in Figure 6.<br />
Figure 6. Projection-based display principle (adapted from Raskar and Low, 2001); on the right, the Dynamic Shader Lamps demonstration (Bandyopadhyay et al., 2001).<br />
Casting an image onto a physical object is<br />
considered complementary to constructing a<br />
perspective image of a virtual object by a pinhole<br />
camera. If the physical object is of the same<br />
geometry as the virtual object, a straightforward<br />
3D perspective transformation (described by a<br />
4 × 4 matrix) is sufficient to predistort the digital<br />
image. To obtain this transformation, it suffices<br />
to indicate six corresponding points in the physical<br />
world and the virtual world; an algorithm called<br />
Linear Camera Calibration can then be applied (see<br />
Appendix). If the physical and virtual shapes differ,<br />
the projection is viewpoint-dependent and<br />
the head position needs to be tracked. Important<br />
projector characteristics involve weight<br />
and size versus the power (in lumens) of the<br />
projector. There are initiatives to employ LED<br />
lasers for direct holographic projection, which<br />
also decreases power consumption compared to<br />
traditional video projectors and ensures that the<br />
projection is always in focus without requiring<br />
optics (Eisenberg, 2004). Both fixed and hand-held<br />
spatial projection-based systems have been<br />
demonstrated. At present, hand-held projectors<br />
measure about 10 × 5 × 2 cm and weigh 150 g,<br />
including the processing unit and battery. However,<br />
the light output is low (15–45 lumens).<br />
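The 4 × 4 predistortion transform mentioned in the Projection Mapping discussion above boils down to a matrix multiplication followed by a perspective divide. A minimal sketch with a hypothetical, uncalibrated matrix (a real one would come out of the six-point Linear Camera Calibration):<br />

```python
def project(matrix4x4, point3):
    # Apply a 4x4 homogeneous transform and perform the perspective divide,
    # mapping a 3D point on the virtual object to projector image coordinates.
    x, y, z = point3
    col = [x, y, z, 1.0]
    out = [sum(matrix4x4[r][c] * col[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return (out[0] / w, out[1] / w)

# hypothetical projection matrix: focal length 2, projector 5 units away
M = [[2, 0, 0, 0],
     [0, 2, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 1, 5]]
print(project(M, (1.0, 0.5, 0.0)))  # (0.4, 0.2)
```

The perspective divide by w is what makes nearer parts of the object appear larger, exactly as in a pinhole camera.<br />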
The advantage of spatial projection-based<br />
technologies is that they support the perception of<br />
all visual and tactile/haptic depth cues without<br />
the need for shutter glasses or HMDs. Furthermore,<br />
the display can be shared by multiple<br />
co-located users. It requires less expensive<br />
equipment, which is often already available at<br />
design studios. Challenges to projector-based <strong>AR</strong><br />
approaches include optics and occlusion. First,<br />
only a limited field of view and focus depth can<br />
be achieved. To reduce these problems, multiple<br />
video projectors can be used. An alternative<br />
solution is to employ a portable projector, as<br />
proposed in the iLamps and I/O Pad concepts<br />
(Raskar et al., 2003; Verlinden et al., 2008).<br />
Other issues include occlusion and shadows,<br />
which are cast on the surface by the user or<br />
other parts of the system. Projection on non-convex<br />
geometries depends on the granularity<br />
and orientation of the projector. The perceived<br />
quality is sensitive to projection errors (also<br />
known as registration errors), especially projection<br />
overshoot (Verlinden et al., 2003b).<br />
A solution to this problem is either to include an<br />
offset (dilation) of the physical model or to introduce<br />
pixel masking in the rendering pipeline. As<br />
projectors are now being embedded in consumer<br />
cameras and smartphones, we expect this<br />
type of augmentation in the years to come.<br />
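The pixel-masking idea can be sketched as a one-pixel erosion of the rendered object mask, so that small registration errors no longer make the image overshoot the object’s silhouette. A toy example on a 3 × 3 mask (real pipelines would do this per frame on the GPU):<br />

```python
def erode(mask):
    # Shrink a binary projection mask by one pixel on every side: a pixel
    # survives only if it and all eight neighbours lie inside the mask.
    h, w = len(mask), len(mask[0])
    def on(r, c):
        return 0 <= r < h and 0 <= c < w and mask[r][c]
    return [[1 if all(on(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)) else 0
             for c in range(w)]
            for r in range(h)]

mask = [[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]]
print(erode(mask))  # only the centre pixel survives
```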
3. input technologies<br />
In order to merge the digital and physical, position<br />
and orientation tracking of the physical<br />
components is required. Here, we will discuss<br />
two different types of input technologies: tracking<br />
and event sensing. Furthermore, we will<br />
briefly discuss other input modalities.<br />
3.1 Position tracking<br />
Welch and Foxlin (2002) presented a comprehensive<br />
overview of the tracking principles<br />
that are currently available. In the ideal case,<br />
the measurement should be as unobtrusive and<br />
invisible as possible while still offering accurate<br />
and rapid data. They concluded that there is<br />
currently no ideal solution (‘silver bullet’) for<br />
position tracking in general, but some respectable<br />
alternatives are available. Table 2 summarises<br />
the most important characteristics of<br />
these tracking methods for Augmented Reality<br />
purposes. The data have been gathered from<br />
commercially available equipment (the Ascension<br />
Flock of Birds, <strong>AR</strong>Toolkit, Optotrak,<br />
Logitech 3D Tracker, Microscribe and Minolta VI-900).<br />
All of these should be considered for object<br />
tracking in Augmented prototyping scenarios.<br />
There are significant differences in the tracker/<br />
marker size, action radius and accuracy. As<br />
the physical model might consist of a number<br />
of parts or a global shape and some additional<br />
components (e.g., buttons), the number of items<br />
to be tracked is also of importance. For simple<br />
tracking scenarios, either magnetic or passive<br />
optical technologies are often used.<br />
| tracking type | tracker size (mm) | typical number of trackers | action radius (accuracy) | DOF | issues |<br />
| magnetic | 16x16x16 | 2 | 1.5 m (1 mm) | 6 | ferro-magnetic interference |<br />
| optical passive | 80x80x0.01 | >10 | 3 m (1 mm) | 6 | line of sight |<br />
| optical active | 10x10x5 | >10 | 3 m (0.5 mm) | 3 | line of sight, wired connections |<br />
| ultrasound | 20x20x10 | 1 | 1 m (3 mm) | 6 | line of sight |<br />
| mechanical linkage | defined by working envelope | 1 | 0.7 m (0.1 mm) | 5 | limited degrees of freedom, inertia |<br />
| laser scanning | none | infinite | 2 m (0.2 mm) | 6 | line of sight, frequency, object recognition |<br />
Table 2. Summary of tracking technologies.<br />
In some experiments we found that a projector<br />
could not be equipped with a standard Flock<br />
of Birds 3D magnetic tracker due to interference;<br />
other tracking techniques should be used<br />
for this paradigm. For example, the <strong>AR</strong>Toolkit<br />
employs complex patterns and a regular web<br />
camera to determine the position, orientation<br />
and identification of the marker. This is done by<br />
measuring the size, 2D position and perspective<br />
distortion of a known rectangular marker, cf.<br />
Figure 7 (Kato and Billinghurst, 1999).<br />
Passive markers enable a relatively untethered<br />
system, as no wiring is necessary. The optical<br />
markers are obtrusive when they are visible<br />
to the user while handling the object. Although<br />
computationally intensive, marker-less optical<br />
Figure 7. Workflow of the <strong>AR</strong>Toolkit optical tracking algorithm<br />
(http://www.hitl.washington.edu/artoolkit/documentation/userarwork.html).<br />
tracking has been proposed (Prince et al., 2002).<br />
The employment of laser-based tracking systems<br />
is demonstrated by the Illuminating Clay<br />
system by Piper et al. (2002): a slab of Plasticine<br />
acts as an interactive surface. The user<br />
influences a 3D simulation by sculpting the clay,<br />
while the simulation results are projected on the<br />
surface. A laser-based Minolta Vivid 3D scanner<br />
is employed to continuously scan the clay<br />
surface. In the article, this principle was applied<br />
to geodesic analysis, yet it can be adapted to<br />
design applications, e.g., the sculpting of car<br />
bodies. This method has a number of challenges<br />
when used as a real-time tracking means, including<br />
the recognition of objects and their posture.<br />
However, with the emergence of depth cameras<br />
for gaming such as the Kinect (Microsoft), similar<br />
systems are now being devised with a very small<br />
technological threshold.<br />
In particular cases, a global measuring system is<br />
combined with a different local tracking principle<br />
to increase the level of detail, for example, to<br />
track the position and arrangement of buttons on<br />
Figure 8. Illuminating Clay system with a projector/laser scanner (Piper et al., 2002).<br />
the object’s surface. Such local positioning systems<br />
might have less advanced technical requirements;<br />
for example, the sampling frequency can<br />
be decreased to only once a minute. One local<br />
tracking system is based on magnetic resonance,<br />
as used in digital drawing tablets. The Sensetable<br />
demonstrates this by equipping an altered commercial<br />
digital drawing tablet with custom-made<br />
wireless interaction devices (Patten et al., 2001).<br />
The Senseboard (Jacob et al., 2002) has similar<br />
functions and an intricate grid of RFID receivers<br />
to determine the (2D) location of an RFID tag on<br />
a board. In practice, these systems rely on a rigid<br />
tracking table, but it is possible to extend this to<br />
a flexible sensing grid. A different technology was<br />
proposed by Hudson (2004): using LED pixels as<br />
light emitters and sensors. By operating one pixel<br />
as a sensor whilst its neighbours are illuminated,<br />
it is possible to detect light reflected from a<br />
fingertip close to the surface. This principle could<br />
be applied to embedded displays, as mentioned<br />
in Section 2.3.<br />
3.2 event sensing<br />
Apart from location and orientation tracking,<br />
Augmented prototyping applications require<br />
interaction with parts of the physical object,<br />
for example, to mimic the interaction with the<br />
artefact. This interaction differs per <strong>AR</strong> scenario,<br />
so a variety of events should be sensed<br />
to cater to these applications.<br />
Physical sensors<br />
The employment of traditional sensors labelled<br />
‘physical widgets’ (phidgets) has been studied<br />
extensively in the Computer-Human Interface<br />
(CHI) community. Greenberg and Fitchett (2001)<br />
introduced a simple electronics hardware and<br />
software library to interface PCs with sensors<br />
(and actuators) that can be used to discern<br />
user interaction. The sensors include switches,<br />
sliders, rotation knobs and sensors to measure<br />
force, touch and light. More elaborate components<br />
like a mini joystick, Infrared (IR) motion<br />
sensor, air pressure and temperature sensor are<br />
commercially available. Similar initiatives are<br />
iStuff (Ballagas et al., 2003), which also hosts a<br />
number of wireless connections to sensors. Some<br />
systems embed switches with short-range wireless<br />
connections, for example, the Switcheroo<br />
and Calder systems (Avrahami and Hudson, 2002;<br />
Lee et al., 2004) (cf. Figure 9). This allows a<br />
greater freedom in modifying the location of the<br />
interactive components while prototyping. The<br />
Switcheroo system uses custom-made RFID tags.<br />
A receiver antenna has to be located nearby<br />
(within a 10-cm distance), so the movement envelope<br />
is rather small, while the physical model<br />
is wired to a workstation. The Calder toolkit<br />
(Lee et al., 2004) uses a capacitive coupling<br />
technique that has a smaller range (6 cm with<br />
small antennae), but is able to receive and transmit<br />
for long periods on a small 12 mm coin cell.<br />
Other active wireless technologies would draw<br />
more power, leading to a system that would<br />
only last a few hours. Although the costs for this<br />
system have not been specified, only standard<br />
electronics components are required to build<br />
such a receiver.<br />
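The kind of low-level filtering such a toolkit performs before reporting user interaction can be sketched as a debouncing routine; this is assumed, generic behaviour for illustration, not the actual phidgets library code:<br />

```python
def debounce(samples, stable_count=3):
    """Convert raw switch readings (booleans) into press/release events.
    An event is emitted only after `stable_count` identical readings,
    suppressing the contact bounce of a physical switch. This mimics,
    in outline, the filtering a phidget-style library performs before
    handing events to the application; it is not the library's code."""
    events = []
    state = False                      # debounced state, assumed open at start
    run_value, run_length = None, 0
    for sample in samples:
        if sample == run_value:
            run_length += 1            # same reading as before: extend the run
        else:
            run_value, run_length = sample, 1
        if run_length >= stable_count and sample != state:
            state = sample             # reading is stable and differs: emit event
            events.append('press' if sample else 'release')
    return events
```

Feeding the routine a noisy sample stream yields clean press/release events that the application can map onto virtual behaviour of the prototype.<br />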
hand tracking<br />
Instead of attaching sensors to the physical<br />
environment, fingertip and hand tracking<br />
technologies can also be used to generate user<br />
events. Embedded skins represent a type of<br />
interactive surface technology that allows the<br />
accurate measurement of touch on the object’s<br />
surface (Paradiso et al., 2000). For example, the<br />
SmartSkin by Rekimoto (2002) consists of a flexible<br />
grid of antennae. The proximity or touch of<br />
human fingers changes the capacitance locally in the<br />
grid and establishes a multi-finger tracking cloth,<br />
which can be wrapped around an object. Such a<br />
solution could be combined with embedded displays,<br />
as discussed in Section 2.3.<br />
Figure 9. Mock-up equipped with wireless switches that can be relocated to explore usability (Lee et al., 2004).<br />
Direct electric contact can also be used to track user interaction; the Paper Buttons concept (Pedersen et<br />
al., 2000) embeds electronics on the objects and<br />
equips the finger with a two-wire plug that supplies<br />
power and allows bidirectional communication<br />
with the embedded components when they<br />
are touched. Magic Touch (Pederson, 2001) uses<br />
a similar wireless system; the user wears an RFID<br />
reader on his or her finger and can interact by<br />
touching the components, which have hidden<br />
RFID tags. This method has been adapted to<br />
Augmented Reality for design by Kanai et al.<br />
(2007). Optical tracking can be used for fingertip<br />
and hand tracking as well. A simple example<br />
is the LightWidgets system (Fails and Olsen,<br />
2002), which traces skin colour and determines<br />
finger/hand position by 2D blobs. The OpenNI<br />
library enables hand and body tracking with depth<br />
range cameras such as the Kinect (OpenNI.org).<br />
A more elaborate example is the virtual drawing<br />
tablet by Ukita and Kidode (2004); fingertips<br />
are recognised on a rectangular sheet by a<br />
head-mounted infrared camera. Traditional VR<br />
gloves can also be used for this type of tracking<br />
(Schäfer et al., 1997).<br />
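To make the surface-tracking idea concrete, the sketch below takes a grid of capacitance readings, as a SmartSkin-style antenna matrix would produce, thresholds it, and returns one centroid per connected region as a fingertip position. The normalised grid values and the threshold are assumptions for illustration, not parameters from the cited papers:<br />

```python
import numpy as np

def find_fingers(cap_grid, threshold=0.5):
    """Locate fingertips on a SmartSkin-style antenna grid. A finger
    close to the surface raises the measured capacitance locally; we
    threshold the grid and return the centroid of each 4-connected
    region of activated cells, in row-major discovery order. Grid
    values are assumed normalised to [0, 1]."""
    active = cap_grid > threshold
    seen = np.zeros_like(active, dtype=bool)
    rows, cols = cap_grid.shape
    fingers = []
    for r in range(rows):
        for c in range(cols):
            if active[r, c] and not seen[r, c]:
                # flood-fill one connected region of activated cells
                stack, cells = [(r, c)], []
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and active[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*cells)
                fingers.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return fingers
```

Each returned (row, column) centroid corresponds to one finger; tracking these centroids over successive frames gives the multi-finger input stream described above.<br />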
3.3 other input modalities<br />
Speech and gesture recognition require consideration<br />
in <strong>AR</strong> as well. In particular, pen-based<br />
interaction would be a natural extension to the<br />
expressiveness of today's designer skills. Oviatt<br />
et al. (2000) offer a comprehensive overview of<br />
the so-called Recognition-Based User Interfaces<br />
(RUIs), including the issues and Human Factors<br />
aspects of these modalities. Furthermore,<br />
speech-based interaction can also be useful to<br />
activate operations while the hands are used for<br />
selection.<br />
4. conclusions and further<br />
reading<br />
This article introduces two important hardware<br />
systems for <strong>AR</strong>: displays and input technologies.<br />
To superimpose virtual images onto physical<br />
models, head-mounted displays (HMDs), see-through<br />
boards, projection-based techniques<br />
and embedded displays have been employed.<br />
An important observation is that HMDs, though<br />
best known by the public, have serious limitations<br />
and constraints in terms of field of view and<br />
resolution, and tend to isolate the wearer. For all display technologies,<br />
the current challenges include an untethered<br />
interface, the enhancement of graphics capabilities,<br />
visual coverage of the display and improvement<br />
of resolution. LED-based laser projection<br />
and oLEDs are expected to play an important<br />
role in the next generation of IAP devices<br />
because this technology can be employed by<br />
see-through or projection-based displays.<br />
To interactively merge the digital and physical<br />
parts of Augmented prototypes, position and<br />
orientation tracking of the physical components<br />
is needed, as well as additional user input<br />
means. For global position tracking, a variety of<br />
principles exist. optical tracking and scanning<br />
suffer from the issues concerning line of sight<br />
and occlusion. Magnetic, mechanical linkage and<br />
ultrasound-based position trackers are obtrusive<br />
and only a limited number of trackers can be<br />
used concurrently.<br />
The resulting palette of solutions is summarized<br />
in Table 3 as a morphological chart. In devising a<br />
solution for your AR system, you can use it as<br />
a checklist or a source of inspiration for choosing display and input technologies.<br />
Table 3. Morphological chart of AR enabling technologies.<br />
Display, imaging principle: video mixing | see-through | spatial projection<br />
Display, arrangement: head-attached | handheld/wearable | projector-based<br />
Input, position tracking: magnetic | passive optical markers | active markers | 3D laser scanning | ultrasound | mechanical<br />
Input, event sensing: physical sensors (wired or wireless connection) | virtual (surface tracking, 3D tracking)<br />
further reading<br />
For those interested in research in this area,<br />
the following publications offer a range of<br />
detailed solutions:<br />
■ International Symposium on Mixed and Augmented Reality (ISMAR) – ACM-sponsored annual convention on AR, covering both specific applications and emerging technologies; accessible through http://dl.acm.org<br />
■ Augmented Reality Times – a daily update on demos and trends in commercial and academic AR systems: http://artimes.rouli.net<br />
■ ProCams workshop – annual workshop on projector-camera systems, coinciding with the IEEE Conference on Computer Vision and Pattern Recognition. The resulting proceedings are freely accessible at http://www.procams.org<br />
■ Raskar, R. and Bimber, O. (2004) Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302 – a personal copy can be downloaded for free at http://140.78.90.140/medien/ar/SpatialAR/download.php<br />
■ BuildAR – a simple webcam-based application that uses markers; download at http://www.buildar.co.nz/buildar-free-version<br />
Appendix: linear camera<br />
calibration<br />
This procedure has been published in (Raskar<br />
and Bimber, 2004); it is slightly adapted here<br />
to be more accessible for those less<br />
familiar with image processing.<br />
C source code that implements this mathematical<br />
procedure can be found in appendix A1 of<br />
(Faugeras, 1993). It basically uses point correspondences<br />
between original x,y,z coordinates<br />
and their projected u,v counterparts to resolve<br />
internal and external camera parameters.<br />
In general cases, 6 point correspondences are<br />
sufficient (Faugeras 1993, Proposition 3.11).<br />
Let I and E be the internal and external parameters<br />
of the projector, respectively. Then a point<br />
P in 3D-space is transformed to:<br />
p = [I·E]·P (1)<br />
where p is a point in the projector’s coordinate<br />
system. If we decompose rotation and translation<br />
components in this matrix transformation<br />
we obtain:<br />
p = [R t]·P (2)<br />
in which R is a 3×3 matrix corresponding to the<br />
rotational components of the transformation and<br />
t the 3×1 translation vector. Then we split the<br />
rotation matrix of formula 2 into row vectors R1, R2 and R3.<br />
Applying the perspective division results in the following two formulae:<br />
ui = (R1·Pi + tx) / (R3·Pi + tz) (3)<br />
vi = (R2·Pi + ty) / (R3·Pi + tz) (4)<br />
in which the 2D point pi is split into (ui, vi).<br />
Given n measured point correspondences (pi; Pi), i = 1..n, we obtain 2n equations:<br />
R1·Pi – ui·R3·Pi + tx – ui·tz = 0 (5)<br />
R2·Pi – vi·R3·Pi + ty – vi·tz = 0 (6)<br />
We can rewrite these 2n equations as a matrix<br />
multiplication with a vector of 12 unknown<br />
variables, comprising the original transformation<br />
components R and t of formula 2. Due to measurement<br />
errors, an exact solution usually does not exist;<br />
we wish to estimate this transformation with<br />
a minimal estimation deviation. In the algorithm<br />
presented in (Bimber & Raskar, 2004), the minimax<br />
theorem is used to obtain this estimate by<br />
determining the singular values. In a straightforward<br />
manner, the internal and external transformations<br />
I and E of formula 1 can then be extracted from<br />
the resulting transformation.<br />
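Under the assumption that formulas (5) and (6) are stacked into a 2n×12 homogeneous system, the whole procedure can be sketched in a few lines of numpy. The singular-value step below stands in for the minimax derivation referenced in the text, and the function names are illustrative, not from the cited sources:<br />

```python
import numpy as np

def calibrate_dlt(P, uv):
    """Linear camera calibration: stack formulas (5) and (6) for n
    point correspondences into a 2n x 12 homogeneous system A·m = 0,
    where m holds the 12 entries of the 3x4 transformation [R t].
    The best solution in the least-squares sense is the right singular
    vector of A with the smallest singular value."""
    n = len(P)
    A = np.zeros((2 * n, 12))
    for i in range(n):
        X = np.append(P[i], 1.0)       # homogeneous 3-D point (x, y, z, 1)
        u, v = uv[i]
        A[2 * i, 0:4] = X              # R1·Pi + tx ...
        A[2 * i, 8:12] = -u * X        # ... - ui·(R3·Pi + tz) = 0
        A[2 * i + 1, 4:8] = X          # R2·Pi + ty ...
        A[2 * i + 1, 8:12] = -v * X    # ... - vi·(R3·Pi + tz) = 0
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)        # [R t], defined up to scale

def project(M, P):
    """Transform 3-D points with M and apply the perspective division."""
    Ph = np.hstack([P, np.ones((len(P), 1))])
    p = Ph @ M.T
    return p[:, :2] / p[:, 2:3]
```

With six or more non-degenerate correspondences the recovered matrix reprojects the calibration points onto their measured image positions; decomposing it into I and E is the final step described above.<br />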
References<br />
■ Avrahami, D. and Hudson, S.E. (2002)<br />
'Forming interactivity: a tool for rapid prototyping of physical interactive products', Proceedings of DIS '02, pp.141–146.<br />
■ Azuma, R. (1997)<br />
'A survey of augmented reality', Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, pp.355–385.<br />
■ Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S. and MacIntyre, B. (2001)<br />
'Recent advances in augmented reality', IEEE Computer Graphics and Applications, Vol. 21, No. 6, pp.34–47.<br />
■ Ballagas, R., Ringel, M., Stone, M. and Borchers, J. (2003)<br />
'iStuff: a physical user interface toolkit for ubiquitous computing environments', Proceedings of CHI 2003, pp.537–544.<br />
■ Bandyopadhyay, D., Raskar, R. and Fuchs, H. (2001)<br />
'Dynamic shader lamps: painting on movable objects', International Symposium on Augmented Reality (ISMAR), pp.207–216.<br />
■ Bimber, O. (2002)<br />
'Interactive rendering for projection-based augmented reality displays', PhD dissertation, Darmstadt University of Technology.<br />
■ Bimber, O., Stork, A. and Branco, P. (2001)<br />
'Projection-based augmented engineering', Proceedings of International Conference on Human-Computer Interaction (HCI 2001), Vol. 1, pp.787–791.<br />
■ Bochenek, G.M., Ragusa, J.M. and Malone, L.C. (2001)<br />
'Integrating virtual 3-D display systems into product design reviews: some insights from empirical testing', Int. J. Technology Management, Vol. 21, Nos. 3–4, pp.340–352.<br />
■ Bordegoni, M. and Covarrubias, M. (2007)<br />
'Augmented visualization system for a haptic interface', HCI International 2007 Poster.<br />
■ Eisenberg, A. (2004)<br />
'For your viewing pleasure, a projector in your pocket', New York Times, 4 November.<br />
■ Faugeras, O. (1993)<br />
Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press.<br />
■ Fails, J.A. and Olsen, D.R. (2002)<br />
'LightWidgets: interacting in everyday spaces', Proceedings of IUI '02, pp.63–69.<br />
■ Greenberg, S. and Fitchett, C. (2001)<br />
'Phidgets: easy development of physical interfaces through physical widgets', Proceedings of UIST '01, pp.209–218.<br />
■ Hoeben, A. (2010)<br />
'Using a projected Trompe L'oeil to highlight a church interior from the outside', EVA 2010.<br />
■ Hudson, S. (2004)<br />
'Using light emitting diode arrays as touch-sensitive input and output devices', Proceedings of the ACM Symposium on User Interface Software and Technology, pp.287–290.<br />
■ Jacob, R.J., Ishii, H., Pangaro, G. and Patten, J. (2002)<br />
'A tangible interface for organizing information using a grid', Proceedings of CHI '02, pp.339–346.<br />
■ Kanai, S., Horiuchi, S., Shiroma, Y., Yokoyama, A. and Kikuta, Y. (2007)<br />
'An integrated environment for testing and assessing the usability of information appliances using digital and physical mock-ups', Lecture Notes in Computer Science, Vol. 4563, pp.478–487.<br />
■ Kato, H. and Billinghurst, M. (1999)<br />
'Marker tracking and HMD calibration for a video-based augmented reality conferencing system', Proceedings of International Workshop on Augmented Reality (IWAR 99), pp.85–94.<br />
■ Klinker, G., Dutoit, A.H., Bauer, M., Bayer, J., Novak, V. and Matzke, D. (2002)<br />
'Fata Morgana – a presentation system for product design', Proceedings of ISMAR '02, pp.76–85.<br />
■ Oviatt, S.L., Cohen, P.R., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers, J., et al. (2000)<br />
'Designing the user interface for multimodal speech and gesture applications: state-of-the-art systems and research directions', Human Computer Interaction, Vol. 15, No. 4, pp.263–322.<br />
■ Paradiso, J.A., Hsiao, K., Strickon, J., Lifton, J. and Adler, A. (2000)<br />
'Sensor systems for interactive surfaces', IBM Systems Journal, Vol. 39, Nos. 3–4, pp.892–914.<br />
■ Patten, J., Ishii, H., Hines, J. and Pangaro, G. (2001)<br />
'Sensetable: a wireless object tracking platform for tangible user interfaces', Proceedings of CHI '01, pp.253–260.<br />
■ Pedersen, E.R., Sokoler, T. and Nelson, L. (2000)<br />
'PaperButtons: expanding a tangible user interface', Proceedings of DIS '00, pp.216–223.<br />
■ Pederson, T. (2001)<br />
'Magic touch: a simple object location tracking system enabling the development of physical-virtual artefacts in office environments', Personal Ubiquitous Comput., January, Vol. 5, No. 1, pp.54–57.<br />
■ Piper, B., Ratti, C. and Ishii, H. (2002)<br />
'Illuminating clay: a 3-D tangible interface for landscape analysis', Proceedings of CHI '02, pp.355–362.<br />
■ Prince, S.J., Xu, K. and Cheok, A.D. (2002)<br />
'Augmented reality camera tracking with homographies', IEEE Comput. Graph. Appl., November, Vol. 22, No. 6, pp.39–45.<br />
■ Raskar, R., Welch, G., Low, K-L. and Bandyopadhyay, D. (2001)<br />
'Shader lamps: animating real objects with image based illumination', Proceedings of Eurographics Workshop on Rendering, pp.89–102.<br />
■ Raskar, R. and Low, K-L. (2001)<br />
'Interacting with spatially augmented reality', ACM International Conference on Virtual Reality, Computer Graphics and Visualization in Africa (AFRIGRAPH), pp.101–108.<br />
■ Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S. and Forlines, C. (2003)<br />
'iLamps: geometrically aware and self-configuring projectors', SIGGRAPH, pp.809–818.<br />
■ Raskar, R. and Bimber, O. (2004)<br />
Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302.<br />
■ Rekimoto, J. (2002)<br />
'SmartSkin: an infrastructure for freehand manipulation on interactive surfaces', Proceedings of CHI '02, pp.113–120.<br />
■ Saakes, D.P., Chui, K., Hutchison, T., Buczyk, B.M., Koizumi, N., Inami, M. and Raskar, R. (2010)<br />
'Slow Display', SIGGRAPH 2010 Emerging Technologies: Proceedings of the 37th annual conference on Computer graphics and interactive techniques, July 2010.<br />
■ Schäfer, K., Brauer, V. and Bruns, W. (1997)<br />
'A new approach to human-computer interaction – synchronous modelling in real and virtual spaces', Proceedings of DIS '97, pp.335–344.<br />
■ Schall, G., Mendez, E., Kruijff, E., Veas, E., Sebastian, J., Reitinger, B. and Schmalstieg, D. (2008)<br />
'Handheld augmented reality for underground infrastructure visualization', Journal of Personal and Ubiquitous Computing, Springer, DOI 10.1007/s00779-008-0204-5.<br />
■ Schmalstieg, D. and Wagner, D. (2008)<br />
'Mobile phones as a platform for augmented reality', Proceedings of the IEEE VR 2008 Workshop on Software Engineering and Architectures for Realtime Interactive Systems, pp.43–44.<br />
■ Sutherland, I.E. (1968)<br />
'A head-mounted three-dimensional display', Proceedings of AFIPS, Part I, Vol. 33, pp.757–764.<br />
■ Ukita, N. and Kidode, M. (2004)<br />
'Wearable virtual tablet: fingertip drawing on a portable plane-object using an active-infrared camera', Proceedings of IUI 2004, pp.169–175.<br />
■ Verlinden, J.C., de Smit, A., Horváth, I., Epema, E. and de Jong, M. (2003)<br />
'Time compression characteristics of the augmented prototyping pipeline', Proceedings of Euro-uRapid'03, p.A/1.<br />
■ Verlinden, J. and Horváth, I. (2008)<br />
'Enabling interactive augmented prototyping by portable hardware and a plugin-based software architecture', Journal of Mechanical Engineering, Slovenia, Vol. 54, No. 6, pp.458–470.<br />
■ Welch, G. and Foxlin, E. (2002)<br />
'Motion tracking: no silver bullet, but a respectable arsenal', IEEE Computer Graphics and Applications, Vol. 22, No. 6, pp.24–38.<br />
LIKE RIDING A BIKE. LIKE P<strong>AR</strong>KING A C<strong>AR</strong>.<br />
PoRTRAIT oF THE <strong>AR</strong>TIST IN RESIDENCE<br />
M<strong>AR</strong>INA DE HAAS<br />
BY HANNA SCHRAFFENBERGER<br />
"Hi Marina. Nice to meet you!<br />
I have heard a lot about you."<br />
I usually avoid this kind of phrase. Judging from<br />
my experience, telling people that you have<br />
heard a lot about them makes them feel uncomfortable.<br />
But this time I say it. After all, it’s no<br />
secret that Marina and the <strong>AR</strong> <strong>Lab</strong> in The Hague<br />
share a history which dates back much longer<br />
than her current residency at the <strong>AR</strong> <strong>Lab</strong>. At the<br />
lab, she is known as one of the first students<br />
who overcame the initial resistance of the fine<br />
arts program and started working with <strong>AR</strong>. With<br />
support of the lab, she has realized the <strong>AR</strong> artworks<br />
Out of the blue and Drops of white in the<br />
course of her study. In 2008 she graduated with<br />
an <strong>AR</strong> installation that shows her 3d animated<br />
portfolio. Then, having worked with <strong>AR</strong> for three<br />
years, she decided to take a break from technology<br />
and returned to photography, drawing and<br />
painting. Now, after yet another three years,<br />
she is back in the mixed reality world. Convinced<br />
by her concepts for future works, the <strong>AR</strong> <strong>Lab</strong><br />
has invited her as an Artist in Residence. That is<br />
what I have heard about her, and it made me want<br />
to meet her for an artist portrait. Knowing quite<br />
a lot about her past, I am interested in what she<br />
is currently working on, in the context of her<br />
residency. When she starts talking, it becomes<br />
clear that she has never really stopped thinking<br />
about <strong>AR</strong>. There’s a handwritten notebook<br />
full of concepts and sketches for future works.<br />
Right now, she is working on animations of two<br />
animals. Once she is done animating, she'll use<br />
<strong>AR</strong> technology to place the animals — an insect<br />
and a dove — in the hands of the audience.<br />
“I usually start<br />
out with my own<br />
photographs and a<br />
certain space I want<br />
to augment.”<br />
"They will hold a little funeral<br />
monument in the shape of a tile<br />
in their hands. Using <strong>AR</strong> technol-<br />
ogy, the audience will then see a<br />
dying dove or dying crane fly with<br />
a missing foot.”<br />
Marina tells me her current piece is about impermanence<br />
and mortality, but also about the fact<br />
that death can be the beginning of something<br />
new. Likewise, the piece is not only about death<br />
but also intended as an introduction and beginning<br />
for a forthcoming work. The <strong>AR</strong> <strong>Lab</strong> makes<br />
this beginning possible through financial support<br />
but also provides technical assistance and serves<br />
as a place for mutual inspiration and exchange.<br />
Despite her long break from the digital arts, the<br />
young artist feels confident about working with<br />
<strong>AR</strong> again:<br />
“It’s a bit like biking, once you’ve<br />
learned it, you never unlearn it. It’s<br />
the same with me and <strong>AR</strong>, of course<br />
I had to practice a bit, but I still<br />
have the feel for it. I think working<br />
with <strong>AR</strong> is just a part of me.”<br />
After having paused for three years, Marina is<br />
positively surprised by how AR technology<br />
has evolved in the meantime:<br />
“<strong>AR</strong> is out there, it’s alive, it’s<br />
growing and finally, it can be<br />
markerless. I don’t like the use of<br />
markers. They are not part of my<br />
art and people see them when<br />
they are not wearing AR glasses. I am<br />
also glad that so many people<br />
know <strong>AR</strong> from their mobile phones<br />
or at least have heard about it<br />
before. Essentially, I don’t want<br />
the audience to wonder about the<br />
technology, I want them to look at<br />
the pictures and animations I create.<br />
The more people are used to<br />
the technology the more they will<br />
focus on the content. I am really<br />
happy and excited how <strong>AR</strong> has<br />
evolved in the last years!”<br />
I ask, how working with brush and paint differs<br />
from working with <strong>AR</strong>, but there seems to be<br />
surprisingly little difference.<br />
“The main difference is that with<br />
<strong>AR</strong> I am working with a pen-tablet,<br />
a computer and a screen. I<br />
control the software, but if I work<br />
with a brush I have the same kind<br />
of control over it. In the past, I<br />
used to think that there was a<br />
difference, but now I think of the<br />
computer as just another medium<br />
to work with. There is no real<br />
difference between working with<br />
a brush and working with a computer.<br />
My love for technology is<br />
similar to my love for paint.”<br />
Marina discovered her love for technology<br />
at a young age:<br />
“When I was a child I found a<br />
book with code and so I programmed<br />
some games. That was<br />
fun, I just understood it. It’s the<br />
same with creating <strong>AR</strong> works<br />
now. My way of thinking perfectly<br />
matches with how <strong>AR</strong> works. It<br />
feels completely natural to me.”<br />
Nevertheless, working with technology also has<br />
its downside:<br />
“The most annoying thing about<br />
working with <strong>AR</strong> is that you are<br />
always facing technical limitations<br />
and there is so much that<br />
can go wrong. No matter how well<br />
you do it, there is always the risk<br />
that something won’t work.<br />
I hope for technology to get more<br />
stable in the future.”<br />
When realizing her artistic augmentations,<br />
Marina sticks to an established workflow:<br />
“I usually start out with my own<br />
photographs and a certain space<br />
I want to augment. Preferably I<br />
measure the dimensions of the<br />
space, and then I work with that<br />
room in my head. I have it in my<br />
inner vision and I think in pictures.<br />
There is a photo register in my<br />
head which I can access. It’s a bit<br />
like parking a car. I can park a car<br />
in a very small space extremely<br />
well. I can feel the car around me<br />
and I can feel the space I want to<br />
put it in. It’s the same with the<br />
art I create. Once I have a clear idea<br />
of the artwork I want to create, I<br />
use Cinema4D software to make<br />
3D models. Then I use BuildAR to<br />
place my 3D models in the real<br />
space. If everything goes well,<br />
things happen that you could not<br />
have imagined.”<br />
A result of this process is, for example, the <strong>AR</strong><br />
installation Out of the blue which was shown<br />
at Today’s Art festival in The Hague in 2007:<br />
“The idea behind ‘Out of the<br />
blue’ came from a photograph<br />
I took in an elevator. I took the<br />
picture so that the lights in the<br />
elevator looked like white ellipses<br />
on a black background. I<br />
took this basic elliptical shape<br />
as a basis for working in a very<br />
big space. I was very curious if I<br />
could use such a simple shape and<br />
still convince the audience that it<br />
really existed in the space. And<br />
it worked — people tried to touch<br />
it with their hands and were very<br />
surprised when that wasn’t possible.”<br />
The fact that people believe in the existence of<br />
her virtual objects is also important for Marina’s<br />
personal understanding of <strong>AR</strong>:<br />
“For me, Augmented Reality<br />
means using digital images to<br />
create something which is not<br />
real. However, by giving meaning<br />
to it, it becomes real and people<br />
realize that it might as well<br />
exist.”<br />
I wonder whether there is a specific place or<br />
space she’d like to augment in the future and<br />
Marina has quite some places in mind. They have<br />
one thing in common: they are all known museums<br />
that show modern art.<br />
“I would love to create works<br />
for the big museums such as the<br />
Tate Modern or MoMA. In the<br />
Netherlands, I’d love to augment<br />
spaces at the Stedelijk Museum in<br />
Amsterdam or Boijmans museum<br />
in Rotterdam. That’s my world.<br />
Going to a museum means a lot<br />
to me. Of course, one can place<br />
<strong>AR</strong> artworks everywhere, also in<br />
public spaces. But it is important<br />
to me that people who experience<br />
my work have actively chosen to<br />
go somewhere to see art. I don’t<br />
want them to just see it by accident<br />
at a bus stop or in a park.”<br />
Rather than placing her virtual models in a specific<br />
physical space, her current work follows a<br />
different approach. This time, Marina will place<br />
the animated dying animals in the hands of the<br />
audiences. The artist has some ideas about how<br />
to design this physical contact with the digital<br />
animals.<br />
“In order for my piece to work,<br />
the viewer needs to feel like he<br />
is holding something in his hand.<br />
Ideally, he will feel the weight of<br />
the animal. The funeral monuments<br />
will therefore have a certain<br />
weight.”<br />
It is still open where and when we will be able<br />
to experience the piece:<br />
“My residency lasts 10 weeks. But<br />
of course that’s not enough time<br />
to finish. In the past, a piece was<br />
finished when the time to work on<br />
it was up. Now, a piece is finished<br />
when it feels complete. It’s something<br />
I decide myself, I want to<br />
have control over it. I don’t want<br />
any more restrictions. I avoid<br />
deadlines.”<br />
Coming from a fine arts background, Marina has<br />
a tip for art students who want to follow in<br />
her footsteps and are curious about working<br />
with <strong>AR</strong>:<br />
“I know it can be difficult to combine<br />
technology with art, but it is<br />
worth the effort. Open yourself<br />
up to art in all its possibilities,<br />
including <strong>AR</strong>. <strong>AR</strong> is a chance<br />
to take a step in a direction of<br />
which you have no idea where<br />
you’ll find yourself. You have to<br />
be open for it and look beyond<br />
the technology. <strong>AR</strong> is special —<br />
I couldn’t live without it anymore...”<br />
BIoGRAPHY -<br />
JERoEN VAN ERP<br />
Jeroen van Erp graduated<br />
from the Faculty of Industrial<br />
Design at the Technical<br />
University of Delft in<br />
1988. In 1992, he was one of<br />
the founders of Fabrique<br />
in Delft, which positioned<br />
itself as a multidisciplinary<br />
design bureau. He established<br />
the interactive<br />
media department in 1994,<br />
focusing primarily on developing<br />
websites for the<br />
World Wide Web – brand<br />
new at that time.<br />
Under Jeroen's joint leadership, Fabrique<br />
has grown through the years into a multifaceted<br />
design bureau. It currently employs<br />
more than 100 artists, engineers and storytellers<br />
working for a wide range of customers:<br />
from supermarket chain Albert Heijn to the<br />
Rijksmuseum.<br />
Fabrique develops visions, helps its clients<br />
think about strategies, branding and innovation,<br />
and realises designs. Preferably cutting<br />
straight through the design disciplines, so<br />
that the traditional borders between graphic<br />
design, industrial design, spatial design and<br />
interactive media are sometimes barely recognisable.<br />
In the bureau's vision, this cross-media<br />
approach will be the only way to create<br />
apparently simple solutions for complex<br />
and relevant issues. The bureau also opened<br />
a studio in Amsterdam in 2008.<br />
Jeroen is currently CCO (Chief Creative<br />
Officer) and a partner, and in this role he<br />
is responsible for the creative policy of the<br />
company. He has also been closely involved<br />
in various projects as art director and designer.<br />
He is a guest lecturer for various<br />
courses and is a board member at NAGO<br />
(the Netherlands Graphic Design Archive)<br />
and the Design & Emotion Society.<br />
www.fabrique.nl<br />
A MAGICAL LEVERAGE<br />
IN SE<strong>AR</strong>CH oF THE KILLER APPLICATIoN<br />
BY JERoEN VAN ERP<br />
The moment I was confronted with the technology of Augmented Reality,<br />
back in 2006 at the Royal Academy of Art in The Hague, I was thrilled.<br />
Despite the heavy helmet, the clumsy equipment, the shaky images and<br />
the lack of a well-defined purpose, it immediately had a profound impact<br />
on me. From the start, it was clear that this technology had a lot of potential,<br />
although at first it was hard to grasp why. Almost six years later,<br />
the fog that initially surrounded this new technology has gradually faded<br />
away. To start with, the technology itself is developing rapidly, as is the<br />
equipment. But more importantly: companies and cultural<br />
institutions are starting to understand how they can benefit from<br />
this technology. At the moment there is a variety of applications available<br />
(mainly mobile applications for tablets or smartphones) that create<br />
added value for the user or consumer. This is great, because it not only<br />
allows the audience to gain experience with this still-developing<br />
technology, but also the industry. But to make Augmented Reality a real<br />
success, the next step will be of vital importance.<br />
INNOVATING OR INNOVATING?<br />
Let’s have a look at the different forms of innovating in Figure 1. On the left we see innovations with a bottom-up approach, and on the right a top-down approach to innovating. A bottom-up approach means that we have a promising new technique, concept or idea, although the exact goal or matching business model isn’t clear yet. In general, bottom-up developments are technological or art-based, and are therefore what I would call autonomous: the means are clear, but the exact goal has still to be defined. The usual strategy to take it further is to set up a start-up company in order to develop the technique and, hopefully, to create a market. This is not always that simple.<br />
Figure 1.<br />
Innovating from a top-down approach means that the innovation is steered on the basis of a more or less clearly defined goal. In contrast with bottom-up innovations, the goal is well-defined and the designer or developer has to choose the right means and design a solution that fits the goal. This can be a business goal, but also a social goal. A business goal is often derived from a benefit for the user or the consumer, which is expected to generate an economic benefit for the company. A marketing specialist would state that there is already a market. This approach means that you have to innovate with an intended goal in mind. A business goal-driven innovation can be a product innovation (either physical products, services or a combination of the two) or a brand innovation (storytelling, positioning), but always with an intended economic or social benefit in mind. As there is an expected benefit, people are willing to invest.<br />
It’s interesting to note the difference on the vertical axis between radical innovations and incremental changes (Roberto Verganti, Design-Driven Innovation). Incremental changes are improvements of existing concepts or products. This happens a lot, for instance in the automotive industry. In general, a radical innovation changes the experience of the product in a fundamental way, and as a result often changes an entire business. This is something Apple has achieved several times, but it has also been achieved by TomTom, and by Philips and Douwe Egberts with their Senseo coffee machine.<br />
HOW ABOUT <strong>AR</strong>?<br />
What about the position of Augmented Reality? To start with, the Augmented Reality technique is not a standalone innovation. It’s not a standalone product but a technique or feature that can be incorporated into products or services with a magical leverage. At its core it is a technique that was developed, and is still developing, with specialist purposes in mind. In principle there was no big demand from ‘the market’. Essentially, it is a bottom-up technological development that needs a concept, product or service.<br />
You can argue about whether it is an incremental innovation or a radical one. A virtual reality expert will probably tell you that it is an improvement (an incremental innovation) of the VR technique. But if you look from an application perspective, there is a radical aspect to it. I prefer to keep the truth in the middle. At this moment in time, <strong>AR</strong> is in the blue area (Figure 1).<br />
It is clear that bottom-up innovation and top-down innovation are different species. But when it comes to economic leverage, the challenge is to become part of the top-down game. This provides a guarantee for further development and broad acceptance of the technique and its principles. So the major challenge for <strong>AR</strong> is to make the big step to the right part of Figure 1, as indicated by the red arrow. Although the principles of Augmented Reality are very promising, it’s clear we aren’t there yet. An example: we recently received a request to ‘do something’ with Augmented Reality. The idea was to project the result of an <strong>AR</strong> application onto a big wall. Suddenly it occurred to me that the experience of <strong>AR</strong> wasn’t suitable at all for this form of publishing. <strong>AR</strong> doesn’t do well on a projection screen. It does well in the user’s head, where time, place, reality and imagination can play an intriguing game with our senses. It is unlikely that the technique of Augmented Reality will lead to mass consumption in the sense of experiencing the same thing with a lot of people at the same time. No, by their nature, <strong>AR</strong> applications are intimate and intense, and this is one of their biggest assets.<br />
FUTURE<br />
We have come a long way, and the things we can do with <strong>AR</strong> are becoming more amazing by the day. The big challenge is to make it applicable in relevant solutions. There’s no discussion about the value of <strong>AR</strong> in specialist areas, such as the military industry. Institutions in the field of art and culture have discovered the endless possibilities, and now it is time to make the big leap towards solutions with social or economic value (the green area in Figure 1). This will give the technique the chance to develop further and, in the end, to flourish. From that perspective, it wouldn’t surprise me if the first really good, efficient and economically profitable application emerges for educational purposes.<br />
Let’s not forget we are talking about a technology that is still in its infant years. When I look back at the websites we made 15 years ago, I realize the gigantic steps we have made, and I am aware of the fact that we could hardly imagine then what the impact of the internet would be on society today. Of course, it’s hard to compare the concept of Augmented Reality with that of the internet, but the comparison is valid in one respect: both give the same powerless feeling of not being able to predict their future. But it will probably be bigger than you can imagine.<br />
THE POSITIONING<br />
OF VIRTUAL OBJECTS<br />
ROBERT PREVEL<br />
WHEN USING AUGMENTED<br />
REALITY (<strong>AR</strong>) FOR VISION,<br />
VIRTUAL OBJECTS ARE ADDED<br />
TO THE REAL WORLD AND DISPLAYED<br />
IN SOME WAY TO THE USER,<br />
BE THAT VIA A MONITOR,<br />
PROJECTOR, OR HEAD-MOUNTED<br />
DISPLAY (HMD). OFTEN IT IS<br />
DESIRABLE, OR EVEN UNAVOIDABLE,<br />
FOR THE VIEWPOINT OF<br />
THE USER TO MOVE AROUND<br />
THE ENVIRONMENT (THIS IS<br />
PARTICULARLY THE CASE IF<br />
THE USER IS WEARING AN HMD).<br />
THIS PRESENTS A PROBLEM,<br />
REGARDLESS OF THE TYPE OF<br />
DISPLAY USED: HOW CAN THE<br />
VIEWPOINT BE DECOUPLED<br />
FROM THE AUGMENTED VIRTUAL<br />
OBJECTS?<br />
To recap, virtual objects are blended with the<br />
real world view in order to achieve an Augmented<br />
world view. From our initial viewpoint<br />
we can determine what the virtual object’s<br />
position and orientation (pose) in 3D space,<br />
and its scale, should be. However, if the viewpoint<br />
changes, then how we view the virtual<br />
object should also change. For example, if<br />
I walk around to face the back of a virtual<br />
object, I expect to be able to see the rear<br />
of that object.<br />
The solution to this problem is to keep track<br />
of the user’s viewpoint and, in the event that<br />
the viewpoint changes, to update the pose of<br />
any virtual content accordingly. There are a<br />
number of ways in which this can be achieved,<br />
by using, for example: positional sensors<br />
(such as inertia trackers), a global positioning<br />
system, computer vision techniques, etc. Typically<br />
the best results come from systems that<br />
take data from several tracking sources and<br />
blend them together.<br />
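As an illustration of such blending, a complementary filter is one of the simplest ways to combine a fast but drifting sensor (such as an inertia tracker) with a slower, absolute reference (such as a vision-based pose). The Python sketch below is a generic, one-dimensional illustration of the idea; the blend factor and the simulated gyro bias are assumptions chosen for the example, not values from any of the systems described here.

```python
def complementary_filter(angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """Blend a fast, drifting rate sensor with a slow, absolute reference."""
    predicted = angle + gyro_rate * dt  # gyro: responsive, but accumulates bias
    # vision slowly pulls the accumulated drift back towards the true value
    return alpha * predicted + (1.0 - alpha) * vision_angle
```

With `alpha` close to 1, the estimate follows the responsive gyro on short time scales, while the absolute vision measurement keeps long-term drift bounded.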
At TU Delft, we have been researching and<br />
developing techniques to track position using<br />
computer vision. Often, video cameras are<br />
already used in <strong>AR</strong> systems; indeed, where the <strong>AR</strong><br />
system uses video see-through, the use of<br />
cameras is necessary. Using computer vision<br />
techniques, we can identify landmarks in the<br />
environment, and, using these landmarks, we<br />
can determine the pose of our camera with<br />
basic geometry. If the camera is not used<br />
directly as the viewpoint (as is the case in<br />
optical see-through systems), then we can still<br />
keep track of the viewpoint by attaching the<br />
camera to it. Say, for example, that we have<br />
an optical see-through HMD with an attached<br />
video camera. If we calculate the pose<br />
of the camera, we can then determine the<br />
pose of the viewpoint, provided that the<br />
camera’s position relative to the viewpoint<br />
remains fixed.<br />
The problem, then, has been reduced to<br />
identifying landmarks in the environment.<br />
Historically, this has been achieved by the<br />
use of fiducial markers, which act as points<br />
of reference in the image. Fiducial markers<br />
provide us with a means of determining the<br />
scale of the visible environment, provided<br />
that: enough points of reference are visible,<br />
we know their relative positions, and these<br />
relative positions don’t change. A typical<br />
marker often used in <strong>AR</strong> applications consists<br />
of a card with a black rectangle in the centre,<br />
a white border, and an additional mark to<br />
determine which edge of the card is considered<br />
the bottom. As we know that the corners<br />
of the black rectangle are all 90 degrees, and<br />
we know the distance between corners, we<br />
can identify the marker and determine the<br />
pose of the camera with regard to the points<br />
of reference (in this case the four corners of<br />
the card).<br />
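The geometry behind this can be sketched in a few lines of Python with numpy: the four marker corners on the z = 0 plane and their pixel positions determine a homography H ~ K[r1 r2 t], from which the camera pose follows. This is a minimal illustration of the principle, assuming known camera intrinsics K and ideal, noise-free corner detection; production marker trackers are considerably more elaborate.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: marker-plane coords (x, y) -> pixels (u, v)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)        # null-space vector, defined up to scale

def pose_from_homography(H, K):
    """For points on the z = 0 marker plane, H ~ K [r1 r2 t]."""
    M = np.linalg.inv(K) @ H
    M /= np.linalg.norm(M[:, 0])       # r1 must have unit length
    if M[2, 2] < 0:                    # marker must lie in front of the camera
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)        # re-orthonormalise against noise
    return U @ Vt, t
```

Given the pixel positions of the four corners and the intrinsics, `pose_from_homography` returns the rotation and translation of the marker relative to the camera, which is exactly the pose information the tracker needs.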
A large number of simple ‘desktop’ <strong>AR</strong> applica-<br />
tions make use of individual markers to track<br />
camera pose, or conversely, to track the position<br />
of the markers relative to our viewpoint.<br />
Larger applications require multiple markers<br />
linked together, normally distinguishable by<br />
a unique pattern or barcode in the centre<br />
of the marker. Typically the more points of<br />
reference that are visible in a scene, the better<br />
the results when determining the camera’s<br />
pose. The key advantage to using markers<br />
for tracking the pose of the camera is that<br />
an environment can be carefully prepared<br />
in advance, and provided the environment<br />
does not change, should deliver the same <strong>AR</strong><br />
experience each time. Sometimes, however,<br />
it is not feasible to prepare an environment<br />
with markers. Often it is desirable to use an<br />
<strong>AR</strong> application in an unknown or unprepared<br />
environment. In these cases, an alternative<br />
to using markers is to identify the natural<br />
features found in the environment.<br />
The term ‘natural features’ can be used to<br />
describe the parts of an image that stand out.<br />
Examples include: edges, corners, areas of<br />
high contrast, etc. In order to be able to use<br />
the natural features to track the camera position<br />
in an unknown environment, we need to<br />
be able to first identify the natural features,<br />
and then determine their relative positions<br />
in the environment. Whereas you could place<br />
20 markers in an environment and still only<br />
have 80 identifiable corners, there are often<br />
hundreds of natural features in any one image.<br />
This makes using natural features a more robust<br />
solution than using markers, as there are<br />
far more landmarks we can use to navigate,<br />
not all of which need to be visible. One of the<br />
key advantages of using natural features over<br />
markers is that, as we already need to identify<br />
and keep track of those natural features seen<br />
from our initial viewpoint, we can use the<br />
same method to continually update a 3D map<br />
of features as we change our viewpoint.<br />
This allows our working environment to grow,<br />
which could not be achieved in a prepared<br />
environment.<br />
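As an illustration of what identifying natural features can mean in practice, the sketch below implements the classic Harris corner response with plain numpy: regions where the image gradient varies strongly in two directions score high (corners), while edges and flat areas score low. The window radius and the constant k are conventional textbook choices, not parameters of any particular <strong>AR</strong> system.

```python
import numpy as np

def box_sum(a, r):
    """Sum of `a` over a (2r+1) x (2r+1) neighbourhood, zero-padded at borders."""
    p = np.pad(a, r)
    h, w = a.shape
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(2 * r + 1) for dx in range(2 * r + 1))

def harris_response(img, k=0.05, r=2):
    """Harris corner response: high at corners, negative along edges."""
    img = img.astype(float)
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    # structure tensor entries, summed over a local window
    Sxx, Syy, Sxy = box_sum(Ix * Ix, r), box_sum(Iy * Iy, r), box_sum(Ix * Iy, r)
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2
```

Thresholding this response map and keeping local maxima yields the corner-like landmarks that a feature-based tracker can then match from frame to frame.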
Although we are able to determine the relative<br />
distance between features, the question<br />
remains: how can we determine the absolute<br />
position of features in an environment without<br />
some known measurement? The short answer<br />
is that we cannot; either we need to estimate<br />
the distance or we can introduce a known<br />
measurement. In a future edition we will<br />
discuss the use of multiple video cameras and<br />
how, given the absolute distance between the<br />
cameras, we can determine the absolute position<br />
of our identified features.<br />
MEDIATED REALITY FOR CRIME<br />
SCENE INVESTIGATION¹<br />
STEPHAN LUKOSCH<br />
CRIME SCENE INVESTIGATION IN THE NETHERLANDS IS PRIMARILY THE RESPONSIBILITY OF THE LOCAL POLICE. FOR SEVERE CRIMES, A NATIONAL TEAM SUPPORTED BY THE NETHERLANDS FORENSIC INSTITUTE (NFI) IS CALLED IN. INITIALLY CAPTURING ALL DETAILS OF A CRIME SCENE IS OF PRIME IMPORTANCE (SO THAT EVIDENCE IS NOT ACCIDENTALLY DESTROYED). NFI’S DEPARTMENT OF DIGITAL IMAGE ANALYSIS USES THE INFORMATION COLLECTED FOR 3D CRIME SCENE RECONSTRUCTION AND ANALYSIS.<br />
Within the CSI The Hague project (http://www.csithehague.com), several companies and research institutes cooperate under the guidance of the Netherlands Forensic Institute in order to explore new technologies to improve crime scene investigation, combining different technologies to digitize, visualize and investigate the crime scene. The major motivation for the CSI The Hague project is that one can investigate a crime scene only once. If you do not secure all possible evidence during this investigation, it will not be available for solving the committed crime. The digitization of the crime scene provides opportunities for testing hypotheses and witness statements, but can also be used to train future investigators. For the CSI The Hague project, two groups at the Delft University of Technology, Systems Engineering² and Biomechanical Engineering³, joined their efforts to explore the potential of mediated and Augmented Reality for future crime scene investigation and to tackle current issues in crime scene investigation. In Augmented Reality, virtual data is spatially overlaid on top of physical reality. With this technology, the flexibility of virtual reality can be used while remaining grounded in physical reality (Azuma, 1997). Mediated reality refers to the ability to add to, subtract information from, or otherwise manipulate one’s perception of reality through the use of a wearable computer or hand-held device (Mann and Barfield, 2003).<br />
In order to reveal the current challenges in supporting spatial analysis in crime scene investigation, structured interviews with five international experts in the area of 3D crime scene reconstruction were conducted. The interviews showed a particular interest in current challenges in spatial reconstruction and the interaction with the reconstruction data. The identified challenges are:<br />
¹ This article is based upon (Poelman et al., 2012).<br />
² http://www.sk.tbm.tudelft.nl ³ http://3me.tudelft.nl/en/about-the-faculty/departments/biomechanical-engineering/research/dbl-delft-biorobotics-lab/people/<br />
Figure 1. Mediated reality head-mounted device in use during the experiment in the Dutch forensic field lab.<br />
■ Time needed for reconstruction: data capture,<br />
alignment, data clean-up, geometric<br />
modelling and analyses are manual steps.<br />
■ Expertise required to deploy dedicated software<br />
and secure evidence at the crime scene.<br />
■ Complexity: Situations differ significantly.<br />
■ Time freeze: Data capture is often conducted<br />
once after a scene has been contaminated.<br />
The interview sessions ended with an open<br />
discussion on how mediated reality can support<br />
crime scene investigation in the future. Based<br />
on these open discussions, the following requirements<br />
for a mediated reality system that is to<br />
support crime scene investigation were identified:<br />
■ Lightweight head-mounted display (HMD):<br />
It became clear that the investigators who<br />
arrive first on the crime scene currently carry<br />
a digital camera. Weight and ease of use are<br />
important design criteria. Experts would like<br />
something close to a pair of glasses.<br />
■ Contactless augmentation alignment (no<br />
markers on the crime scene): The first<br />
investigator who arrives on a crime scene has<br />
to keep the crime scene as untouched as possible.<br />
Technology that involves preparing the<br />
scene is therefore unacceptable.<br />
■ Bare hands gestures for user interface opera-<br />
tion: The hands of the CSIs have to be free to<br />
physically interact with the crime scene when<br />
needed, e.g. to secure evidence, open doors,<br />
climb, etc. Additional hardware such as data<br />
gloves or physically touching an interface<br />
such as a mobile device is not acceptable.<br />
■ Remote connection to and collaboration with<br />
experts: Expert crime scene investigators are<br />
a scarce resource and are rarely available<br />
on location when requested. Setting up a remote<br />
connection to guide a novice investigator<br />
through the crime scene and to collaboratively<br />
analyze the crime scene has the potential<br />
to improve the investigation quality.<br />
To address the above requirements, a novel<br />
mediated reality system for collaborative spatial<br />
analysis on location has been designed, developed<br />
and evaluated together with experts in the<br />
field and the NFI. This system supports collaboration<br />
between crime scene investigators (CSIs)<br />
on location who wear a HMD (see Figure 1) and<br />
expert colleagues at a distance.<br />
The mediated reality system builds a 3D map of<br />
the environment in real-time, allows remote users<br />
to virtually join and interact together in shared<br />
Augmented space with the wearer of the HMD,<br />
and uses bare hand gestures to operate the 3D<br />
multi-touch user interface.<br />
Figure 2. Head-mounted display: a modified Cinemizer OLED (Carl Zeiss) with two Microsoft HD-5000 webcams.<br />
Figure 3. Sparse 3D feature map generated by the pose estimation module.<br />
Figure 4. Graphical user interface options menu.<br />
The resulting mediated reality system supports a lightweight<br />
head-mounted display (HMD), contactless augmentation<br />
alignment, and a remote connection to and collaboration<br />
with expert crime scene investigators.<br />
The video see-through of a modified Carl Zeiss<br />
Cinemizer OLED (cf. Figure 2) for displaying<br />
content fulfills the requirement for a lightweight<br />
HMD, as its total weight is ~180 grams. Two<br />
Microsoft HD-5000 webcams are stripped and<br />
mounted in front of the Cinemizer, providing a<br />
full stereoscopic 720p pipeline. Both<br />
cameras record at ~30 Hz in 720p; the images are<br />
processed in our engine, which renders 720p stereoscopic<br />
images to the Cinemizer.<br />
As for all mediated reality systems, robust real-time<br />
pose estimation is one of the most crucial<br />
parts, as the 3D pose of the camera in the physical<br />
world is needed to render virtual objects<br />
correctly at the required positions. We use a heavily<br />
modified version of PTAM (Parallel Tracking and<br />
Mapping) (Klein and Murray, 2007), in which<br />
a single camera setup is replaced by a stereo<br />
camera setup, with feature matching and pose estimation<br />
based on 3D natural features. Using<br />
this algorithm, a sparse metric map (cf. Figure<br />
3) of the environment is created. This sparse<br />
metric map can be used for pose estimation in<br />
our Augmented Reality system.<br />
In addition to the sparse metric map, a dense<br />
3D map of the crime scene is created. The<br />
dense metric map provides a detailed copy of<br />
the crime scene enabling detailed analysis and<br />
is created from a continuous stream of disparity<br />
maps that are generated while the user moves<br />
around the scene. Each new disparity map is<br />
registered (combined) using the pose information<br />
from the PE module to construct or extend<br />
the 3D map of the scene. The point clouds are<br />
used for occlusion and collision checks, and for<br />
snapping digital objects to physical locations.<br />
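The two steps just described, back-projecting a disparity map into 3D points and registering them with the estimated pose, can be sketched as follows. This is a simplified numpy illustration assuming a rectified stereo pair with equal focal lengths; the intrinsics and baseline in the usage example are placeholder values, not those of the actual rig.

```python
import numpy as np

def disparity_to_points(disp, K, baseline):
    """Back-project a disparity map into camera-frame 3D points (Z = f*B/d)."""
    f, cx, cy = K[0, 0], K[0, 2], K[1, 2]
    ys, xs = np.nonzero(disp > 0)          # ignore pixels without disparity
    Z = f * baseline / disp[ys, xs]
    X = (xs - cx) * Z / f
    Y = (ys - cy) * Z / f
    return np.column_stack([X, Y, Z])

def register_to_world(points_cam, R, t):
    """Move camera-frame points into the world frame using the estimated pose."""
    return points_cam @ R.T + t
```

Applying `register_to_world` with the pose of each frame places all point clouds in one common frame, so successive disparity maps extend a single, growing 3D map of the scene.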
By using an innovative hand tracking system,<br />
the mediated reality system can recognize bare-hand<br />
gestures for user interface operation.<br />
This hand tracking system utilizes the stereo<br />
camera rig to detect the hand movements in 3D.<br />
The cameras are part of the HMD and an adaptive<br />
algorithm has been designed to determine<br />
whether to rely on the color, the disparity, or both,<br />
depending on the lighting conditions. This is the<br />
core technology to fulfill the requirement of<br />
bare-hand interfacing. The user interface and<br />
the virtual scene are general-purpose parts of<br />
the mediated reality system. They can be used<br />
for CSI, but also for any other mediated reality<br />
application. The tool set, however, needs to be<br />
tailored for the application domain. The current<br />
mediated reality system supports the following<br />
tasks for CSIs: recording the scene, placing tags,<br />
loading 3D models and bullet trajectories, and placing<br />
restricted-area ribbons. Figure 4 shows the<br />
corresponding menu attached to a user’s hand.<br />
The mediated reality system has been evalu-<br />
ated on a staged crime scene at the NFI’s <strong>Lab</strong><br />
with three observers, one expert and one<br />
layman with only limited background in CSI.<br />
Within the experiment the layman, facilitated<br />
by the expert, conducted three spatial tasks,<br />
i.e. tagging a specific part of the scene with<br />
information tags, using barrier tape and poles<br />
to spatially secure the body in the crime scene,<br />
and analyzing a bullet trajectory with<br />
ricochet. The experiment was analyzed along<br />
seven dimensions (Burkhardt et al., 2009): fluidity<br />
of collaboration, sustaining mutual understanding,<br />
information exchanges for problem<br />
solving, argumentation and reaching consensus,<br />
task and time management, cooperative orientation,<br />
and individual task orientation. The<br />
results show that the mediated reality system<br />
supports remote spatial interaction with the<br />
physical scene as well as collaboration in shared<br />
augmented space while tackling current issues in<br />
crime scene investigation. The results also show<br />
that there is a need for more support to identify<br />
whose turn it is and who wants the next turn,<br />
etc. Additionally, the results show the need to<br />
represent the expert in the scene to increase<br />
the awareness and trust of working in a team<br />
and to counterbalance the feeling of being observed.<br />
Knowing the expert’s focus and current<br />
activity could possibly help to overcome this issue.<br />
Whether traditional patterns for computer-mediated<br />
interaction (Schümmer and Lukosch,<br />
2007) support awareness in mediated reality<br />
or rather new forms of awareness need to be<br />
designed, will be the subject of future research.<br />
Further tasks for future research include the<br />
design and evaluation of alternative interaction<br />
possibilities, e.g. using physical objects that<br />
are readily available in the environment; sensor<br />
fusion with image feeds from spectral cameras or<br />
previously recorded laser scans to provide more<br />
situational awareness; and the privacy, security<br />
and validity of captured data. Finally, though<br />
it is being tested and used for educational<br />
purposes within the CSI <strong>Lab</strong> of the Netherlands<br />
Forensic Institute (NFI), only the application and<br />
test of the mediated reality system in real settings<br />
can show the added value for crime scene<br />
investigation.<br />
REFERENCES<br />
■ R. Azuma, A Survey of Augmented Reality, Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, 1997, 355-385<br />
■ J. Burkhardt, F. Détienne, A. Hébert, L. Perron, S. Safin, P. Leclercq, An approach to assess the quality of collaboration in technology-mediated design situations, European Conference on Cognitive Ergonomics: Designing beyond the Product - Understanding Activity and User Experience in Ubiquitous Environments, 2009, 1-9<br />
■ G. Klein, D. Murray, Parallel Tracking and Mapping for Small <strong>AR</strong> Workspaces, Proc. International Symposium on Mixed and Augmented Reality, 2007, 225-234<br />
■ S. Mann, W. Barfield, Introduction to Mediated Reality, International Journal of Human-Computer Interaction, 2003, 205-208<br />
■ R. Poelman, O. Akman, S. Lukosch, P. Jonker, As if Being There: Mediated Reality for Crime Scene Investigation, CSCW ’12: Proceedings of the 2012 ACM Conference on Computer Supported Cooperative Work, ACM, New York, NY, USA, 2012, 1267-1276, http://dx.doi.org/10.1145/2145204.2145394<br />
■ T. Schümmer, S. Lukosch, Patterns for Computer-Mediated Interaction, John Wiley & Sons, Ltd., 2007<br />
On Friday, December 16th, 2011, the Symphony Orchestra of the Royal Conservatoire played Die Walküre (Act 1) by Richard Wagner at the beautiful concert hall De Vereeniging in Nijmegen. The <strong>AR</strong> <strong>Lab</strong> was invited by the Royal Conservatoire to provide visuals during this live performance. Together with students from different departments of the Royal Academy of Art, we designed a screen consisting of 68 pieces of transparent cloth (400x20 cm), hanging in four layers above the orchestra. By projecting on this cloth, we created visuals giving the illusion of depth.<br />
We chose 7 leitmotivs (a recurring theme, associated with a particular person, place, or idea) and created animations representing them using colour, shape and movement. These animations were played at key moments of the performance.<br />
CONTRIBUTORS<br />
wim VAn eck<br />
Royal Academy of Art (KABK)<br />
w.vaneck@kabk.nl<br />
Wim van Eck is the 3D animation specialist of the<br />
<strong>AR</strong> <strong>Lab</strong>. His main tasks are developing Augmented<br />
Reality projects, supporting and supervising<br />
students and creating 3D content. His interests<br />
include, among others, real-time 3D animation, game<br />
design and creative research.<br />
jeRoen VAn eRP<br />
Fabrique<br />
jeroen@fabrique.nl<br />
Jeroen van Erp co-founded Fabrique, a multidisciplinary<br />
design agency in which the different<br />
design disciplines (graphic, industrial, spatial<br />
and new media) are closely interwoven. As a<br />
designer he was recently involved in the flagship<br />
store of Giant Bicycles, the website for the<br />
Dutch National Ballet and the automatic passport<br />
control at Schiphol airport, among others.<br />
PieteR jonkeR<br />
Delft University of Technology<br />
P.P.Jonker@tudelft.nl<br />
Pieter Jonker is Professor at Delft University<br />
of Technology, Faculty of Mechanical, Maritime<br />
and Materials Engineering (3mE). His main<br />
interests and fields of research are: real-time<br />
embedded image processing, parallel image<br />
processing architectures, robot vision, robot<br />
learning and Augmented Reality.<br />
yolAnde kolstee<br />
Royal Academy of Art (KABK)<br />
Y.Kolstee@kabk.nl<br />
Yolande Kolstee has been head of the <strong>AR</strong> <strong>Lab</strong> since<br />
2006. She holds the post of Lector (Dutch for<br />
researcher in professional universities) in the<br />
field of ‘Innovative Visualisation Techniques in<br />
higher Art Education’ for the Royal Academy of<br />
Art, The Hague.<br />
mA<strong>AR</strong>ten lAmeRs<br />
Leiden University<br />
lamers@liacs.nl<br />
Maarten Lamers is assistant professor at the<br />
Leiden Institute of Advanced Computer Science<br />
(LIACS) and board member of the Media Technology<br />
MSc program. Specializations include social<br />
robotics, bio-hybrid computer games, scientific<br />
creativity, and models for perceptualization.<br />
stePhAn lukosch<br />
Delft University of Technology<br />
S.g.lukosch@tudelft.nl<br />
Stephan Lukosch is associate professor at the<br />
Delft University of Technology. His current<br />
research focuses on collaborative design and<br />
engineering in traditional as well as emerging<br />
interaction spaces such as augmented reality.<br />
In this research, he combines recent results from<br />
intelligent and context-adaptive collaboration<br />
support, collaborative storytelling for knowledge<br />
elicitation and decision-making, and design<br />
patterns for computer-mediated interaction.<br />
feRenc molnáR<br />
Photographer<br />
info@baseground.nl<br />
Ferenc Molnár is a multimedia artist who has been based in<br />
The Hague since 1991. In 2006 he returned<br />
to the KABK to study photography, and that is<br />
where he started to experiment with <strong>AR</strong>. His<br />
focus is on the possibilities and on the impact of<br />
this new technology as a communication platform<br />
in our visual culture.<br />
RoBeRt PReVel<br />
Delft University of Technology<br />
r.g.prevel@tudelft.nl<br />
Robert Prevel is working on a PhD focusing on<br />
localisation and mapping in Augmented Reality<br />
applications at the Delft Biorobotics <strong>Lab</strong>, Delft<br />
University of Technology, under the supervision<br />
of Prof.dr.ir. P.P. Jonker.<br />
hAnnA<br />
schRAffenBeRgeR<br />
Leiden University<br />
hkschraf@liacs.nl<br />
Hanna Schraffenberger works as a researcher<br />
and PhD student at the Leiden Institute of<br />
Advanced Computer Science (LIACS) and at<br />
the <strong>AR</strong> <strong>Lab</strong> in The Hague. Her research interests<br />
include interaction in interactive<br />
art and (non-visual) Augmented Reality.<br />
esmé VAhRmeijeR<br />
Royal Academy of Art (KABK)<br />
e.vahrmeijer@kabk.nl<br />
Esmé Vahrmeijer is a graphic designer and the<br />
webmaster of the <strong>AR</strong> <strong>Lab</strong>. Besides her work at the<br />
<strong>AR</strong> <strong>Lab</strong>, she is a part-time student at the Royal<br />
Academy of Art (KABK) and runs her own graphic<br />
design studio ooxo. Her interests are in graphic<br />
design, typography, web design, photography<br />
and education.<br />
jouke VeRlinden<br />
Delft University of Technology<br />
j.c.verlinden@tudelft.nl<br />
Jouke Verlinden is an assistant professor in the<br />
section of computer aided design engineering<br />
at the Faculty of Industrial Design Engineering.<br />
With a background in virtual reality and interaction<br />
design, he leads the “Augmented Matter in<br />
Context” lab, which focuses on the blend between bits<br />
and atoms for design and creativity. He is co-founder<br />
and lead of the minor programme on advanced<br />
prototyping and an editor of the International<br />
Journal of Interactive Design, Engineering and<br />
Manufacturing.<br />
SPECIAL THANKS<br />
We would like to thank Reba Wesdorp, Edwin<br />
van der Heide, Tama McGlinn, Ronald Poelman,<br />
Karolina Sobecka, Klaas A. Mulder, Joachim<br />
Rotteveel and last but not least the Stichting<br />
Innovatie Alliantie (SIA) and the RAAK (Regionale<br />
Aandacht en Actie voor Kenniscirculatie) initiative<br />
of the Dutch Ministry of Education, Culture<br />
and Science.<br />
NEXT ISSUE<br />
The next issue of <strong>AR</strong>[t] will be out in October<br />
2012.<br />