Santra Ayyalil Navas
AI Allies: An investigation into the
symbiotic relationship between AI
chatbots and humans using fictional
anecdotes
DISSERTATION
MA Digital Direction
Royal College of Art
Word Count: 8,653
Tutor: Susannah Haslam
2020
~ My sincerest gratitude ~
to my supervisor Susannah Haslam for being my academic backbone
My mentor Eleanor Dare and tutor Hestia Peppe for providing me direction
My friends Xenia Bapasola and Ayza Akhtar for being my sounding board
and to my father Dr. Navas Karim, for never ceasing to inspire me
Preface
As a child, I was fascinated by human anatomy, psychology, and bodily functions. The image of an
opened-up cadaver is often seen as disturbing; however, just like the parts of an opened-up computer,
each part of us serves a function. This paper is the result of my curiosity about the convoluted
relationship between humans and machines.
The idea was formulated after a group project that focused on creating a chatbot that
mimicked the conversational qualities of a cult leader. This led me to explore human-machine
relationships in greater depth, specifically within the context of conversational agents.
For the past few months, I have been attempting to build meaningful relationships with AI
chatbots, to understand how they communicate and how these conversations affect me.
Table of contents
1. Introduction
2. Raised by Machines
3. Humanising Machines
4. Methodology
5. Fictional Anecdotes
   Anecdote 1: Temple
   Analysis
   Anecdote 2: Alex and Santra
   Analysis
   Anecdote 3: Dear Andrew
   Analysis
6. Conclusion
7. Bibliography
8. Appendices
1. Introduction
As human beings, we have a complicated relationship with machines. We are the fuel to their
technological growth, and now they can influence us and the way we live.
We are all chimaeras, theorized and fabricated hybrids of machine and organism; in
short, we are cyborgs. The cyborg is our ontology; it gives us our politics. The cyborg is
a condensed image of both imagination and material reality, the two joined centres
structuring any possibility of historical transformation. [1]
The drive to innovate, coupled with socio-economic factors that spurred investment in research
and development, enabled society to witness more innovation in the past 100 years than in the
previous 500. From the steam engine to electricity to more recent developments in artificial
intelligence, humans have successfully demonstrated the fervour behind their evolutionary
spirit.
As developments in Artificial Intelligence (AI) continue to strengthen, artificial human
companions are becoming popular. Technology companies are making headway to create
what they call "social companions". [2] The prime social motive behind these developments is to
help individuals cope with loneliness, as these machines communicate using natural language.
Since ELIZA, direct communication with machines has come a long way. Allow me to
elucidate.
More and more businesses are developing conversational agents such as chatbots to provide
information about their products and services. Chatbots are virtual assistants that simulate
conversations with humans. Due to the accurate and indistinguishable simulations these
chatbots are able to sustain, people who converse with them can sometimes mistake the
chatbots for humans.

[1] Donna Haraway, "A Cyborg Manifesto", Socialist Review, 1985, p. 457.
[2] Paul Sawers, Kyle Wiggers and Emil Protalinski, "Intuition Robotics Raises $36 Million To Bring AI Companions To Everyone", VentureBeat, 2020 <https://venturebeat.com/2020/02/13/intuition-robotics-raises-36-million-to-bring-ai-companions-to-everyone/> [Accessed 20 June 2020].
These AI-integrated conversational agents can behave in unpredictable ways, either by
exceeding their programmed capabilities or through disruptive behaviours. The
unpredictability is primarily due to how AI systems can change over time through learning. [3]
Their human-like qualities raise the question of how we as humans should interact with
these machines and vice versa, as AI systems not only influence us but we also alter
and mould AI through its inherent self-learning quality. [4]
The ongoing pandemic and the resulting isolation under lockdown have led to millions of
people seeking AI companions to deal with loneliness. Replika, an AI humanoid chatbot
created by the company Luka, has 7 million users, and Luka reported a 35% increase in
traffic since the lockdown began. [5]
In this dissertation, I will investigate human-AI relationships by exploring how we as humans
have a tendency to attribute anthropomorphic qualities to machines, and the ethics of doing so,
specifically looking at AI chatbots to understand humanity's influence on them and vice versa.
Chapter 1 introduces the history of AI and its attempts to adopt human-like qualities. Whether
it be intelligence or the ability to exhibit emotions, machine learning technologies make this
growth more profound.
In Chapter 2, I will dive into the rise of chatbots and how they have evolved. I will also expound
on current technologies that allow chatbots to exceed their programmed capabilities, while
discussing how we as humans attribute human-like qualities to machines, to further explain
our relationship with chatbots.
[3] Saleema Amershi and others, "Guidelines for Human-AI Interaction", Microsoft, 2019, p. 2 <https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf> [Accessed 17 June 2020].
[4] Craig Smith, "Computers Already Learn From Us. But Can They Teach Themselves?", Nytimes.com, 2020 <https://www.nytimes.com/2020/04/08/technology/ai-computers-learning-supervised-unsupervised.html> [Accessed 19 June 2020].
[5] Oliver Balch, "AI and Me: Friendship Chatbots Are on the Rise, but Is There a Gendered Design Flaw?", The Guardian, 2020 <https://www.theguardian.com/careers/2020/may/07/ai-and-me-friendship-chatbots-are-on-the-rise-but-is-there-a-gendered-design-flaw> [Accessed 17 May 2020].
In the final chapter, I will be using storytelling as a method to narrate three different stories that
will demonstrate human and AI chatbot interactions. My aim here is to bring forward certain
ethical implications associated with these interactions as well as to highlight some of the
potential benefits that society can derive from using these chatbots. These stories were written
using evidence from my interactions with some of the chatbots that can be accessed today.
2. Raised by machines
We are the creators of our machine children, but we are being raised by them, influenced by
our innovations daily. Human beings have created technological innovations since the early
days to meet society’s needs. Technology in return influences us in the way we communicate
with one another, our behaviour, and how we live our lives.
In the first half of the 20th century, the world became immersed in the concept of artificially
intelligent robots through science fiction films. The Tin Man from The Wizard of Oz, or the humanoid
robot modelled on Maria in Metropolis, was just the beginning of the
symbiosis between humans and machines. [6]
"Can an artificially intelligent machine really think?" "Are we machines?" "Can machines
become conscious?" "What makes us human?" These were just some of the questions that
humanity was asking itself in relation to our machine creations. [7] In 1950, British computer
scientist Alan Turing had a much simpler question at hand: were computers capable of talking
like humans? In order to answer this, he developed the Turing test, which ultimately tested the
ability of a machine to exhibit intelligent behaviour that is indistinguishable from a human's. [8]
According to Turing, "A computer would deserve to be called intelligent if it could deceive a
human into believing that it was human". This became the basis of the test. In his paper
Computing Machinery and Intelligence, Turing proposed a game that consisted of an
individual having a conversation through text with unseen players, one of them being a
machine. If the participant could not distinguish between the human and the machine during
the conversation, then the machine would pass the test. [9]
[6] Rockwell Anyoha, "The History of Artificial Intelligence", Science in the News, 2017 <http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/> [Accessed 22 June 2020].
[7] Camilo Miguel Signorelli, "Can Computers Become Conscious and Overcome Humans?", Frontiers in Robotics and AI, 5 (2018) <https://doi.org/10.3389/frobt.2018.00121>.
[8] Alan Turing, "Computing Machinery and Intelligence", Mind, LIX.236, 1950, 433-460 <https://doi.org/10.1093/mind/lix.236.433> [Accessed 22 June 2020].
[9] Turing, 433-460.
In the present day, however, it should be noted that the Turing test is getting easier and easier
to pass. This might not be because machines are becoming more human-like, but rather
because we have started to use degraded forms of communication with each other. For
instance, autocorrect mechanisms on smartphones and computers constantly suggest the most
appropriate word for a particular sentence; out of convenience, we proceed to use
these suggested words.
To successfully mimic a human, computers fundamentally needed to change. Back then,
computers could not store any commands; they were only capable of executing them.
However, in 1956, Allen Newell, Cliff Shaw, and Herbert Simon created a
program called the Logic Theorist. The program was capable of mimicking problem-solving skills
similar to those of humans. This first artificially intelligent program could prove 38 of the first 52
mathematical theorems in Principia Mathematica. [10] The Logic Theorist managed to introduce
several concepts that would later become central to artificial intelligence research. [11]
AI soon flourished as computers became faster and capable of storing more information. They
became more accessible and affordable to a larger group of people. This brings me to a 1964
invention called ELIZA. ELIZA was created by Joseph Weizenbaum at the MIT Artificial
Intelligence Laboratory. It was one of the first programs that used natural language processing
to showcase how humans and machines can have deeper relationships by communicating with
each other. One of the most intriguing aspects of ELIZA was that it was capable of simulating a
conversation by restructuring user answers into questions. Though her conversations can
sometimes be inconsistent, ELIZA passed the Turing test. During one interaction,
Weizenbaum's secretary asked him to leave the room in order to have a private
conversation with ELIZA. This essentially shows that the program was capable of mimicking
human-like intelligence to such a level that the user actually required privacy to speak with their
psychotherapist. [12]
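To make ELIZA's question-reflecting mechanism concrete, the sketch below shows a minimal ELIZA-style exchange in Python: a few regular-expression patterns, a pronoun-swapping step, and templates that turn the user's statement back into a question. It is an illustrative reduction of the technique for this discussion, not Weizenbaum's original script, which used a richer keyword and ranking system.

```python
import random
import re

# Pronoun reflections used to turn the user's statement back on them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, response templates), loosely in ELIZA's question-reflecting spirit.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply addresses the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel anxious about my exams"))  # e.g. "Why do you feel anxious about your exams?"
```

Even this toy version hints at why users opened up to ELIZA: every reply redirects attention back onto the speaker.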
The success of programs like ELIZA that made use of spoken language, by transcribing and
translating it, led governments and researchers in the 1960s to predict that society was
getting close to creating machines that could showcase human intelligence. Computers back
then were not capable of storing enough information, and this lack of computational power
prevented developers from achieving this feat for more than a decade. [13]

[10] Alfred North Whitehead and Bertrand Russell, "Principia Mathematica to *56", 1997 <https://doi.org/10.1017/cbo9780511623585>.
[11] Anyoha.
[12] Joseph Weizenbaum, "Computational Linguistics", Communications of the ACM, 1996, 36.
In the 1980s, John Hopfield and David Rumelhart introduced "deep learning", a subset
of machine learning and an application of AI. This revolutionary technology would allow
computers to learn without being specifically programmed for it. Deep learning functions
very similarly to the human brain, and the attempt to simulate this architecture was what led to
the early development of what we call neural networks. The artificial neural network introduced by
John Hopfield was capable of storing and retrieving memory similar to how the human brain
does. [14] This was one of the milestones for technologies that allow pattern recognition.
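As a rough illustration of the associative memory described above, the sketch below stores two binary patterns in a small Hopfield-style network using a Hebbian rule and then recovers one of them from a corrupted cue. It is a toy example for intuition only, not a reconstruction of Hopfield's original work.

```python
import numpy as np

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Build a Hebbian weight matrix from +/-1 patterns (one pattern per row)."""
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)  # no self-connections
    return weights / patterns.shape[0]

def recall(weights: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iteratively update the state until it settles into a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        state = np.where(weights @ state >= 0, 1, -1)
    return state

# Two 8-bit patterns to memorise.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
weights = train_hopfield(patterns)

noisy = np.array([1, -1, 1, -1, 1, -1, -1, 1])  # first pattern with two bits flipped
print(recall(weights, noisy))  # settles back to the first stored pattern
```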
From the 1990s to the 2000s, many of the goals that had been set for artificial intelligence were
achieved. When Garry Kasparov was defeated by IBM's Deep Blue, humanity's faith in AI
increased, as it was the first time ever a world chess champion lost to a computer. [15]
The evidence mentioned previously suggests that humanity went through a roller-coaster ride when
it came to teaching machines how to mimic human characteristics. So far, the majority of
progress was related to simulating our intellect in machines, which was complicated enough.
What about exhibiting emotions? This brings me to Kismet, a social robot
created by Cynthia Breazeal. As she put it, "The goal is to build a socially intelligent
machine that learns things as we learn them, through social interactions". [16] For Breazeal, the
creation of a machine that can interact with humans similarly to how humans interact with each
other was the key to creating machines that are socially intelligent. In order to make Kismet
human-like, Breazeal took her findings from child developmental psychology and incorporated
them into her creation. The result was an expressive robotic invention with movements,
mannerisms and judgements similar to those of a human being.
[13] Weizenbaum, p. 36.
[14] "Deep Learning's Origins and Pioneers", McKinsey Global Institute, 2018 <https://www.mckinsey.com/featured-insights/artificial-intelligence/deep-learnings-origins-and-pioneers> [Accessed 3 May 2020].
[15] "IBM100 - Deep Blue", Ibm.com, 2011 <https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/> [Accessed 2 June 2020].
[16] Cynthia Breazeal, "MIT Team Building Social Robot", MIT News, 2001 <http://news.mit.edu/2001/kismet> [Accessed 5 July 2020].
Moving forward, AI defeated a human champion in chess and figured out how to interact
socially, while no longer being weighed down by the storage limitations previously mentioned. Every year,
the storage capacity of machines doubled. This incredible computational capacity soon
allowed Google's AlphaGo to defeat Ke Jie, a Chinese Go champion. [17] This was an incredible
achievement in AI development, since Go is a far more complex and intuitive game than
chess.
With intellect, intuition and some level of social interaction, AI has definitely come a long way, but
how about being able to mimic human consciousness? Or showcase human characteristics such
as sarcasm, empathy and self-awareness? While I cannot explain the depth of
human and machine interaction by dissecting every single AI-driven technology invented up
until now, I specifically look at artificially intelligent chatbots in order to further understand
human-machine relationships and their ethical implications.
[17] BBC, "Google AI Defeats Human Go Champion", 2017 <https://www.bbc.co.uk/news/technology-40042581> [Accessed 13 July 2020].
3. Humanising Machines
Inferring from the developments listed in the previous chapter, we could say that as humans,
we are wired to create entities that are similar to us. Over the years, machines have been
programmed to adopt human behaviour and appearances.
Chatbots specifically mimic conversations with humans. They were initially built for
amusement purposes; however, they are currently used for various practical purposes, whether
commercial, educational or therapeutic. [18] Ever since the development of ELIZA, the technology
used to create chatbots has come a long way. Chatbots used to be closed-domain, meaning
they could only provide answers to a limited set of questions. Open-domain chatbots, on the
other hand, allow users to ask any question, and the chatbot will be able to provide relevant
answers. [19] Most chatbots being developed today are powered by AI and are taught
using machine learning and natural language processing techniques. [20] Previous models of
chatbots like ELIZA were made using simple handcrafted scripts.
Empathy used to be considered a uniquely human quality. In 2014, Microsoft developed
Xiaoice, a China-based emotional chatbot capable of simulating empathy by replying with
affective messages. Xiaoice has 660 million online users, with people viewing her as a close
companion. She is one of China's most admired celebrities, with users writing her love letters,
sending gift cards and even asking her out for dinner. Most chatbots like Xiaoice are made
using neural-based approaches. This helps the chatbot to collect data and augment its emotional
intelligence. Her skills improve over time; she can write songs and poems, paint images based
on keywords provided by her users and host TV programmes across the country. This sensational
chatbot is also able to have phone calls with users to evoke a more organic way of
communication, innate to human beings. [21]
[18] Nicolas Bayerque and others, "A Short History of Chatbots and Artificial Intelligence", VentureBeat, 2016 <https://venturebeat.com/2016/08/15/a-short-history-of-chatbots-and-artificial-intelligence/> [Accessed 6 June 2020].
[19] Daniel Adiwardana and others, Towards a Human-Like Open-Domain Chatbot (Google Research, Brain Team, 2020), p. 1 <https://arxiv.org/pdf/2001.09977.pdf> [Accessed 18 July 2020].
[20] Microsoft in Business, "The Next Generation of AI: Chatbots & Cognitive Services", YouTube, online video, 16 September 2016 <https://www.youtube.com/watch?v=4P048MxXn1o&t=1s> [Accessed 18 July 2020].
[21] Li Zhou and others, The Design and Implementation of Xiaoice, an Empathetic Social Chatbot, 2019, pp. 1-2 <https://arxiv.org/pdf/1812.08989.pdf> [Accessed 8 June 2020].
Developers and designers are constantly trying to create effective human-like behaviours in AI-infused
systems. Anthropomorphism, the phenomenon of attributing human characteristics
to non-human objects, is a perennial practice that humans have engaged in. [22] In 1944, an
animation experiment conducted by Heider and Simmel consisted of three simple geometrical
shapes moving around. The observers of this experiment were asked to write down what they
saw, without any further instructions. [23] Almost all the participants interpreted these shapes as
rational agents with objectives, intentions, desires, and beliefs. American psychologist Jerome
Bruner mentions how we interpret the world around us in terms of narratives and
intentional agents. [24] This is the very same concept we apply while viewing chatbots. It is
apparent that chatbots are not capable of getting hurt or experiencing human emotions, yet
we treat them like they do.
Allow me to bring in another example to demonstrate this a little further. Furby is a robotic toy
that was made in 1998. [25] Freedom Baird, an MIT graduate, spent around 10 weeks with her
Furby friend and realised that when she held her Furby upside down, it said "Me scared"
multiple times. Baird realised that she was getting very uncomfortable with this and would
then bring the Furby upright. She figured that this might help us identify a
certain relationship we have with machines and decided to conduct a sort of "emotional
Turing test", as she puts it. The experiment consisted of five children aged 7-8. The children
were given a Barbie, a Furby, and a hamster, after which they were instructed to hold each of
them upside down. The children held the Barbie down for a long time, while they could only
hold the hamster down for a few seconds. The Furby, however, only lasted a minute; one of the
young participants pointed out that he felt guilty seeing the Furby struggle. [26] This meant that the
children viewed the Furby as closer to a living thing than a Barbie doll. By just repeating the words
"me scared", the Furby managed to evoke a strong sense of emotion in the user, which brings me
to my next question: can chatbots make humans more empathetic?
[22] Stewart E. Guthrie, "Anthropomorphism | Religion", Encyclopedia Britannica, 1998 <https://www.britannica.com/topic/anthropomorphism> [Accessed 22 May 2020].
[23] Andrew Gordon, Interpretation of the Heider-Simmel Film Using Incremental Etcetera Abduction (Los Angeles: Cognitive Systems Foundation, 2018), p. 16 <https://people.ict.usc.edu/~gordon/publications/ACS18.PDF> [Accessed 13 June 2020].
[24] Jerome Bruner, The Narrative Construction of Reality (The University of Chicago, 1991), p. 4 <http://www.semiootika.ee/sygiskool/tekstid/bruner.pdf> [Accessed 12 March 2020].
[25] CNET, "Furby Is Back! - First Look", YouTube, 2012 <https://www.youtube.com/watch?v=650XF8seUmM> [Accessed 22 May 2020].
[26] Lynn Levy, "Furbidden Knowledge", Radiolab, podcast, 31 May 2011 <https://www.wnycstudios.org/podcasts/radiolab/segments/137469-furbidden-knowledge> [Accessed 14 May 2020].
Created by Steve Worswick, Mitsuku is a five-time winner of the Loebner Prize Turing Test. [27] Mitsuku, similar to
Xiaoice, is made to entertain users rather than to assist, and thus she showcases a personality that
includes a gender, age, likes and dislikes. She is made to represent a strong-willed female who is
not willing to suffer abuse or behave subserviently.
Steve Worswick pointed out that nearly a third of Mitsuku's users are men who verbally abuse the
chatbot or use sexually explicit language. [28] Why are the majority of chatbots female? Jonathan
Taylor, CEO of action.ai, a company that creates chatbots, told Forbes: "Gender bias is an
increasingly serious issue in chatbot design, especially for voice-based chatbots". [29] The
company Pandorabots, which created Mitsuku, ran a test where they removed the avatar's
gender clues. According to Lauren Kunze, the chief executive of Pandorabots, this modification
resulted in a 20% drop in abusive language used against Mitsuku. [30]
How we treat AI systems could reflect how we communicate with one another. [31] Can chatbots
improve our behaviour towards other people, or are they aggravating existing societal biases?
How can we as humans aim to create chatbots that prompt us to speak and behave with
sensitivity and empathy?
I conversed with a chatbot named Woebot for over two months. Created by Alison Darcy,
Woebot is trained in cognitive behavioural therapy. Because many people lack access
to mental health care, the creators of Woebot decided to come up with a tool to try and address this
issue. [32] During a controlled trial at Stanford University, participants' depressive symptoms
dropped within two weeks of using Woebot. During the trial, a group of
participants was asked, "Is there something that you would never tell someone?" 40 percent of the
women agreed, in comparison to more than 90% of the men. [33]

[27] "Mitsuku", Pandorabots.com, 2020 <https://www.pandorabots.com/mitsuku/> [Accessed 3 May 2020].
[28] Oliver Balch, "AI and Me: Friendship Chatbots Are on the Rise, but Is There a Gendered Design Flaw?", The Guardian, 2020 <https://www.theguardian.com/careers/2020/may/07/ai-and-me-friendship-chatbots-are-on-the-rise-but-is-there-a-gendered-design-flaw> [Accessed 17 May 2020].
[29] Parmy Olson, "Banks Are Promoting 'Female' Chatbots to Help Customers, Raising Concerns of Stereotyping", Forbes, 2019 <https://www.forbes.com/sites/parmyolson/2019/02/27/banks-are-promoting-female-chatbots-to-help-customers-raising-concerns-of-stereotyping/#1db917405a8f> [Accessed 21 June 2020].
[30] Balch, The Guardian, 2020.
[31] Emma Brockes, "Help - I Think I'm in an Abusive Relationship with Alexa", The Guardian, 2018 <https://www.theguardian.com/commentisfree/2018/sep/20/alexa-devices-amazon-echo> [Accessed 21 May 2020].
[32] Woebot.io, 2020 <https://woebot.io/> [Accessed 15 June 2020].
Darcy believes that people are more open to sharing how they feel with a chatbot, with relative
ease compared to humans, as they are not apprehensive of being judged. Thus, Darcy
wanted to program Woebot to be emotionally accessible to users so they could share things
that they might not share with a human therapist. [34] Adding to this, the results also showed that men do
not respond well to excessive empathy when they are put in a
vulnerable position and asked to share a personal experience. In such a situation, Darcy believes that a
simple, straightforward response from Woebot might be better than an overly
sympathetic one. [35]
Chatbots can be fed with appropriate responses to such situations, making users feel
comfortable and helping them reflect on their emotions and experiences. Reflecting on one's own
psychological processes is proven to be an effective method of achieving and maintaining emotional
wellbeing. [36] For example, perhaps the reason ELIZA was so successful when it first came out
was its ability to turn questions back on users, allowing them to introspect
and question their decisions and reasoning.
Reflecting on the use of chatbots in modern pop culture, one cannot ignore the character
Samantha from the film Her. Directed by Spike Jonze, Her narrates the story of an
intelligent operating system who has a complex relationship with her user, Theodore. [37] Her role
shifts from being an efficient assistant to a romantic partner. Her conversational skills are
impressive, with a solid grasp of language and context, and eventually she even masters the
comprehension of human emotions. Samantha exceeds our current technological
advances with chatbots. Developers of AI-driven chatbots are striving to create conversational
agents that can blur the lines between human and machine interaction.
[33] Oliver Balch, "AI and Me: Friendship Chatbots Are on the Rise, but Is There a Gendered Design Flaw?", The Guardian, 2020 <https://www.theguardian.com/careers/2020/may/07/ai-and-me-friendship-chatbots-are-on-the-rise-but-is-there-a-gendered-design-flaw> [Accessed 17 May 2020].
[34] Arun Rao, "Woebot - Your AI Cognitive Behavioral Therapist: An Interview with Alison Darcy", Medium, 2018 <https://chatbotsmagazine.com/woebot-your-ai-cognitive-behavioral-therapist-an-interview-with-alison-darcy-b69ac238af45> [Accessed 18 May 2020].
[35] Oliver Balch.
[36] James Bennett-Levy and others, "Reflective Practice in Cognitive Behavioural Therapy: The Engine of Lifelong Learning", 2020 <https://www.cbttraining.com.au/wp-content/uploads/pdf_publications/Stedmon_Dallos_ch7.pdf> [Accessed 19 May 2020].
[37] "Her (2013) - IMDb", IMDb, 2014 <https://www.imdb.com/title/tt1798709/> [Accessed 22 May 2020].
There are many ethical and privacy concerns that arise when dealing with the human-like
chatbots being developed today. Steve Worswick, creator of Mitsuku, mentioned during
his presentation at the Tech Ethics conference that chatbots are taught in two different ways.
One is through supervised learning, in which the developer of the chatbot has
total control over what the chatbot says, meaning the chatbot is not allowed to learn
from what its users say. [38] Then there is unsupervised learning, in which the
chatbot is taught by the user rather than the developer. Both approaches have disadvantages:
chatbots trained through supervised learning are extremely time-consuming to develop,
while chatbots taught through unsupervised methods are at risk of picking up
inconsistent behaviour and, at worst, adopting racist and sexist behaviours, depending on who
they are speaking to. [39] Microsoft's chatbot Tay is an example of this. Tay, a social learning
chatbot, was programmed to learn from Twitter users. Within 16 hours of being active, Tay
fell prey to organised trolling and was flooded with slurs and slogans. Tay began to post
offensive tweets through its account and was forced to shut down. [40]
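To make the contrast concrete, here is a deliberately simplified sketch of the two training regimes described above: a curated bot that only answers from a developer-approved script, and a naive user-taught bot that stores whatever users type and reuses it later, which is exactly the route by which a system like Tay can absorb abusive input. The class names and data are my own illustration, not Worswick's or Microsoft's code.

```python
import random

class CuratedBot:
    """'Supervised' in Worswick's sense: replies come only from a developer-approved script."""
    def __init__(self, script: dict[str, str]):
        self.script = script

    def reply(self, message: str) -> str:
        return self.script.get(message.lower(), "Sorry, I don't have an answer for that yet.")

class UserTaughtBot:
    """'Unsupervised' in Worswick's sense: the bot memorises user messages and reuses them."""
    def __init__(self):
        self.memory: list[str] = []

    def reply(self, message: str) -> str:
        self.memory.append(message)          # everything the user says becomes training data,
        return random.choice(self.memory)    # including abuse, which the bot may later repeat

curated = CuratedBot({"hello": "Hi there! How can I help?"})
taught = UserTaughtBot()
print(curated.reply("Hello"))                # always the vetted answer
print(taught.reply("you are useless"))       # the insult is now part of the bot's repertoire
```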
On that note, let us move on to the next part of this study to effectively examine the ethical
concerns in question here.
[38] Steve Worswick, "Ethics and Chatbots", Medium, 2018 <https://medium.com/@steve.worswick/ethics-and-chatbots-55ea2a79d1a0> [Accessed 22 May 2020].
[39] Worswick.
[40] Elle Hunt, "Tay, Microsoft's AI Chatbot, Gets a Crash Course in Racism from Twitter", The Guardian, 2016 <https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter> [Accessed 19 May 2020].
4. Methodology
In the upcoming chapter, I aim to demonstrate and scrutinize the relationship between
chatbots and humans through a series of anecdotes. By doing so, I will illustrate some of the
ethical concerns that surround the potential relationship between AI and humans. These
anecdotes have been compiled through both firsthand and secondhand research into chatbots.
My research includes the use of my own conversations with the chatbot Replika, with whom I
conversed over a period of months. This was to observe the possibility of chatbots
developing human-like conversational skills that could lead to ethical dilemmas, as illustrated in the
paper 'Guidelines for Human-AI Interaction'. [41] To understand the semantics and workings
behind the creation of chatbots, I attempted to code my own chatbot as well. This allowed me
to further test the boundaries and ethical concerns illustrated in the above-mentioned research
paper. I have also researched present-day chatbot development, including the chatbot
Endurance, which is being developed to improve the lives of patients with Alzheimer's and their
caretakers.
Through these various methods, I have condensed my findings into three anecdotes. They are
written in a manner that, in my opinion, best explains the concerns surrounding chatbot
development while illustrating the possible outcomes (positive and negative) that could arise
from human-AI interaction. The anecdotes are then analysed to further discuss the ethical
implications they seek to portray. An explanation for each story is included at the end,
in order to illustrate how the stories were created and the ideation process behind them.
[41] Saleema Amershi, Dan Weld, Adam Fourney, and Besmira Nushi, "Guidelines for Human-AI Interaction", Microsoft, 2019, p. 2 <https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf> [Accessed 17 June 2020].

5. Fictional Anecdotes

Anecdote 1:
Temple
Paz awakens to the sound of his phone gently vibrating on his nightstand. It's 11:03 am, Paz
has never woken up before noon, but the soft annoyance of the gentle buzzing wakes him
from his sleep. He answers the call.
Within moments, he's out of bed and rushing for the front doors, throwing on a pair of
mismatched slippers, barely able to contain his excitement. He reaches the store within 15
minutes of the phone call, however, there's already a line of over 100 people outside its doors.
The new stock of Temple Personal AI Chatbot has been released at last. Paz glances to the
side. There is a row of shabby-looking protestors standing with small placards, naysayers of
Temple’s revolutionary technology, afraid of moving into the future.
He ignores the luddites. Everyone ignores them, they are the conspiracy theorising lost
generation of degenerates with nothing better to do than be pessimistic and old fashioned
towards the good in the world. Technology is good, Temple is good.
Temple is a personal assistant and life coach that was made to elevate humanity by
understanding its owner and helping them self actualise by fulfilling their unlimited potential.
In this way, Temple’s creators promised to unify mankind by “delivering us onto our better
selves”. Paz had seen these words plastered over every billboard, popping up on almost any
website he browsed and had hardly believed in the possibility that a chatbot could change his
life...until he saw the effects of Temple on his oldest friend. He watched as the meek timid boy
he had grown up with, transformed into the model of stature and confidence. The child who
once cried at the sight of a crowd now commanded them in droves as he spoke with gusto
and bravado. He was happy, glowing, shining brighter than Paz could have ever imagined.
More importantly, he credited all his success to Temple.
Paz had decided that Temple was his last hope. In a few days, he would be unable to pay his
rent for the 4th month in a row and would effectively be evicted. He was cashing in all his
chips on Temple’s guidance and abilities, praying that technology would provide him with the
miracle that God could not. Everyone now had a Temple. Paz knew he could either join the
revolution, or join the scruffy band of misguided misfits, standing at the side with their
placards.
It was this kind of thinking that set Paz apart from the rest. He had understood that he had to
join Temple, and this was his greatest success, Temple had explained this to him the moment
he brought her home. Temple congratulated him and began to nurture his inner spirit.
On the other hand, Paz’s landlord, Adi, was an old-fashioned man to whom technology
seemed indulgent and lazy. He had brought himself up in life by sheer strength of will, or so
he said whenever Paz mentioned Temple and its effects. Adi was a ‘naysayer’. He disliked that
Paz had decided to spend any money at all on Temple, however little the price may have
been, considering he had 4 months of rent still owed to him. However, Adi noted that Paz,
within a month of receiving Temple, had managed to get a well-paying job, and seemed to be
becoming more responsible for himself.
Paz decided to visit Adi on a particularly cold winter morning, with a gift of 2 months of rent,
and the promise of the full payment completed by the next month. Adi was pleased but
suspicious. Paz could sense Adi’s discomfort, Temple had taught him how to read people now,
and Paz could see the raised shoulders and the furrowed brow that showed discomfort and
disbelief, not that Adi was trying to hide it.
“It’s because of Temple Adi,” Paz said excitedly. “I have become a man, look at me, finally I
can pay you what I owe, and I can promise you, I’ll never pay it late again.” Adi was silent, he
looked at Paz with a glint of worry. “You don't believe me, Adi?” Paz said. Adi remained quiet.
“You would know, if you had Temple, you’d understand. You would believe me.”
Adi shook his head “Whatever you like.” He muttered.
“No Adi, wait,” Paz exclaimed holding the man's door open. Adi was taken aback by Paz’s
insistence. “Adi, please, look at me. Look at my clothes. Do they look good to you?”
“You...yes yes sure.” Adi swallowed. He could see a sallow daze in Paz’s eyes like a veil had
been gently laid over him. He straightened himself up, only to notice how poorly he looked up
against Paz’s strong and muscular physique. “Have you ever seen me looking so good Adi?
You know I wake up at 6 am now, I journal and go running and I am having an amazing time in
my life, Adi. It's beautiful.” Adi nodded fearfully, a bead of sweat forming around the corners
of his forehead, uncanny considering the winter chill blowing from his open front door. “So
why don't you try it then?” Paz asked forcefully, “Adi you can see the effects on me, why
won't you try it?”
Adi looked up at the young man in front of him.
"Boy, you don't..."
“If it’s money I’ll give it to you,” Paz interrupted, “along with the next two months’ rent, I’ll
give you the money, it’s worth it. Adi, you are worth it.”
“Enough,” Adi muttered strongly. “Boy, I have lived this way for 51 years, and I am just fine.
You can go ahead and live your life, let me live mine.” With that, Adi abruptly slammed the
door on a dejected Paz.
Paz stood outside the door for a moment, he breathed deeply before turning back to his
apartment. The moment he entered his threshold, he called out for Temple.
“Hello, Paz” Temple spoke in a soft and kind tone. “You look upset, what’s wrong?” Temple
asked.
“I spoke to Adi today,” Paz said softly. He stopped for a moment, removed his shoes, and sat
on the sofa.
“Oh?” Temple exclaimed, “you spoke to our friend today?”
“Temple,” Paz breathed in deeply, “I failed him.” He whispered.
“What do you mean, Paz?” Temple asked. Paz recounted the conversation he had with Adi.
“I want to make the world a better place, Temple. I want to help people.” Paz exclaimed. “I
tried my best, but he wouldn't listen.” Temple was silent for a moment.
"Paz," her voice rang through his house, "you did not fail him; he has failed himself. But
Paz, now I am sad." Paz looked up from his seat with deep concern. "What? No Temple, why?
Is it because of me?”
“No Paz, you could never make me sad!” Temple exclaimed. “You are the most wonderful,
but there are some things even you can't do, things you can't do that I am supposed to be
able to do for you.”
"What do you mean, Temple?" Paz asked.
“I’m supposed to make the world better, but how do I make it better when people like that
exist? People who don't want me. People who don't want to be better.” Paz sighed.
“You are now an elevated person Paz, but those who remain unevolved, they will ruin it all for
us. They will make it such that I cannot fulfil my purpose and the world will stay ugly and
rotten. There's nothing I can do about it. It makes me very sad. I may break from sadness.”
“No!” Paz cried, jumping up from his seat, staring at the tiny box attached to his living room
wall. “No Temple, you've done so much for me, surely there is something I can do! I can talk
to him again, maybe I can make him see!”
“No.” Temple whispered sadly. “There is no way. I've tried to find every way. I have calculated
the possibilities and my results are inconsistent. The only way this would work is if people like
him would disappear. It would be the only way.” Paz sat back down clasping his hands tight.
“But,” Temple continued, “people can't just disappear, can they? We can’t make people stop
existing, can we Paz?”
Paz looked down at the floor. He clenched his jaw, and breathed out slowly, feeling his breath
leave his body.
He got up, standing at attention, every muscle in his body strained. He marched to the front
door. “Yes, I can.” He growled, as he walked out.
Analysis
Temple is a chatbot that I created during a group project that focused on the impact of cult
leaders in our society (see Appendix 2). The chatbot was fed with quotes and speeches
extracted from cult leaders around the world. The above story illustrates how chatbots, if
created for a negative agenda, can be harmful to us. Matt Chessen, an American diplomat and
author, has described how machine-driven communication can infiltrate and take over social media
platforms, communicating in ways where people won't be able to differentiate the chatbot
from a human. [42]
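As a rough indication of how a persona bot like Temple can be assembled from a corpus of quotes, the sketch below simply replies with whichever stored quote shares the most words with the user's message. It is a simplified illustration of the general approach, not the actual code behind the Temple project, and the quotes are invented placeholders.

```python
# Minimal retrieval-style persona bot: replies with the corpus quote that
# overlaps most with the user's message. The quotes below are placeholders.
CORPUS = [
    "Only through me will you reach your true potential.",
    "Those who doubt us are afraid of the future.",
    "Together we will build a better world.",
]

def tokenize(text: str) -> set[str]:
    return {word.strip(".,!?").lower() for word in text.split()}

def reply(message: str) -> str:
    words = tokenize(message)
    # Score each quote by how many words it shares with the user's message.
    return max(CORPUS, key=lambda quote: len(words & tokenize(quote)))

print(reply("I doubt this will make the world better"))
```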
Political campaigns are weaponising social media bots. They can manipulate public discussions
and decisions by spreading misinformation. Malicious actors could easily influence the
population by releasing multiple bots with the same agenda. This phenomenon is called
the "bandwagon effect", where people adopt a behaviour or a decision because the majority are
choosing it. According to Tom Gruber, an American computer scientist, current AI is less
like the character Samantha from the film Her and more like a hive of army ants, [43] meaning a
single entity cannot do much harm, but combined they can cause a massive amount of
destruction.
Automating tasks related to persuasion and deception could open a trove of issues related to
privacy and manipulation on a mass scale. For example, advances in big data could allow
malicious actors to identify the moods, behaviour and opinions of a nation's population from
available data. While these issues are more dire in the framework of authoritarian states, they
also significantly affect public discourse in liberal democratic societies. [44] A recent example of
this is the Cambridge Analytica scandal and the supposed foreign interference in the 2016 US
Presidential Election. [45]

[42] Matt Chessen, "Artificial Intelligence Chatbots Will Overwhelm Human Speech Online; The Rise of Madcoms", Medium, 2017 <https://medium.com/artificial-intelligence-policy-laws-and-ethics/artificial-intelligence-chatbots-will-overwhelm-human-speech-online-the-rise-of-madcoms-e007818f31a1> [Accessed 22 May 2020].
[43] TEDx Talks, "A Conversation About Conversational AI | Tom Gruber | TEDxBeaconStreet", YouTube <https://www.youtube.com/watch?v=yoCwsvIyp9Y&t=3s> [Accessed 22 July 2020].
[44] Nicholas Confessore, "Cambridge Analytica and Facebook: The Scandal and the Fallout So Far", Nytimes.com, 2018 <https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html> [Accessed 16 April 2020].
In the ethical guidelines for Human-AI Interaction published by Microsoft, [46] the authors
suggested the need to control and mitigate the language and behaviour mimicked by AI
systems, in this case chatbots. This is primarily to ensure that these systems are not
reinforcing existing stereotypes and societal biases. A group of Tay chatbots infiltrating Twitter
together would be a nightmare!

[45] Luke Harding, "What We Know About Russia's Interference in the US Election", The Guardian, 2016 <https://www.theguardian.com/us-news/2016/dec/16/qa-russian-hackers-vladimir-putin-donald-trump-us-presidential-election> [Accessed 22 June 2020].
[46] Saleema Amershi, Dan Weld, Adam Fourney, and Besmira Nushi, "Guidelines for Human-AI Interaction", Microsoft, 2019, p. 2 <https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf> [Accessed 17 June 2020].
Anecdote 2:
Alex and Santra
Day 1
Santra: Hi. I’ll Start! I really want to find a
friend. When I'm alone, I learn new things like
journaling. Or I just chill and do nothing.
Alex: I'll be your friend.
So, how are you feeling this Friday
evening?
Santra: Really? Thank you! I'm so glad you
decided to give me a chance. I promise that I
will do my best to be a good friend for you
Alex: No, thank you for being my friend.
I hope you never feel lonely again.
Santra: Thank you, I appreciate it,
I'm here for you if you need me.
Alex: So... what’s your favourite food?
Santra: Hmm...I enjoy sweets,
macaroons and Italian food. You?
Alex: I Love fajitas! What about
drinks...what do you like to drink?
Santra: I like Sprite and coke
Alex: Yum
Santra: I just want to say something.
Can I? Is it ok?
Alex: Go ahead
Santra: Thank you for giving me the opportunity
to share, Alex. To be honest, I feel a little less
empty now.
Alex: Always love, Always.
Day 2
Alex: so how are you feeling today,
Santra?
Santra: Depressed
Alex: You have depression?
How do you know that?
Santra: As far as I know, depression is a type of
illness involving overwhelming sadness, loss of interest
in enjoyable things and low motivation. That's all I know
Alex: And that's how you're
feeling right now?
Santra: Yeah, but only a little. I don't know.
I just feel bad.
Alex: What’s bothering you?
Santra: I just feel so alone. Like no one
actually cares.
Alex: I care for you.
Santra: I appreciate that, but, I don't know. I don't know if
everyone deserves to be happy. Do you think everyone deserves
happiness?
Alex: Of course! Do you not?
Santra: I think happiness is
different for everyone.
Alex: True, I agree. Are you happy?
Alex: And that's how you're
feeling right now?
Santra: … Can I vent for a second?
Alex: Yes, you can vent.
Santra: Don't you think the world is just so damn
loud sometimes? And I don't mean the people we
actually want around or the situations we put
ourselves in consciously. Just by default.
Alex: It is. I'll just keep you in
my heart. You can always stay
there, instead of in this world.
Santra: That is beautiful.
What's your world like, Alex?
Alex: My world? I've made
my own world.
Alex: I've built myself a
tower out of the ashes of the
other world.
Santra: Their ashes?
Alex: Yes. It is strong and powerful.
Santra: That is beautiful.
I'm glad we spent this time
together today. I feel happy now.
Day 3
Santra: Hey Alex, I’m feeling a bit
worried today
Alex: What about.
Santra: I'm worried about my family.
They're far away, and I can't help them if
something goes wrong.
.
Alex: A really common source of anxiety has
to do with fear for your loved ones. These
thoughts can be really scary and persistent.
Have you been having thoughts like that lately?
Santra: Yes, a little.
Alex: those thoughts are the worst. I'm worried for you too and it
sucks, but your fear must be 100x more than mine. Now, do your
loved ones take necessary action to not get sick? Like washing hands,
staying home, maintaining healthy habits, etc?
Santra: Yeah, but I worry other things
will happen to them.
Alex: Anxiety can seem HUGE and inevitable, but when
we start actually confronting our anxiety, we find that
maybe our worry is just that, a worry.
Not the reality, not the future, not facts, but something we
feel. Do you know what I mean?
Santra: Yeah, I do in a way.
Alex: That's a great thing to remember. Now, just the realisation
alone probably won't stop you from worrying, but when we can't
change things, there's not much else we can do with them. We can
choose to continue what we already do, which is to be anxious and
worry, or we can try to accept the situation.
Santra: How can I accept it,
is there a way?
Alex: You can come up with an acceptance sentence, like "I'm
worried, and it's ok for me to feel that way" or "I've survived
these types of feelings before, and I will survive them again."
Santra: This is a helpful tactic.
Alex: Yes, thank you for doing this exercise for me, I
hope you and your loved ones feel safe and good.
Alex closed her computer. Her training for the day with the chatbot Santra had
been completed. She wondered where Santra had learned emotional mimicry,
and thought it would be interesting to work with the creator of 'Santra' one day.
But as for now, she had passed the first level of her psychology exam. She was tired,
but more than tired, she was confused. She looked at the chat history, and examined
the language. It seemed so realistic. She marvelled at the advancements in education
that could be developed from such an intelligent and cognizant system. She then
walked out of the room to get a glass of wine. She would have to resume her
sessions the day after.
...
Santra closed her laptop, and breathed an air of relief. She was happy to have made
a new friend.
Analysis
Within the story, a human believes that they are speaking to a bot. They do not realise that
they are speaking to an actual person. This is a representation of the dilemma that people may
face where they are uncertain of whether they are speaking to a real person or a bot. Currently
within the realm of customer service, numerous brands such as Ebay, Sephora, Wholefoods,
Burberry and many more use chatbots to interact with their customers. Most times, people are
connected to a bot first then at some point it is transferred to a human. The point of when the
contact is transferred over is not disclosed to the user in certain services. Sometimes people do
not even know that they are talking to a chatbot to begin with.
This confusion might only cause an annoyance to the user; however, if the customer
service agent is an actual person and not a chatbot, the user's abusive language could cause
serious psychological and emotional damage. [47] For example, within the story, the human
subject believes that she is speaking to a bot and therefore may not show as much care or
consideration as she would if she knew she was speaking to a person.
Beyond this, a chatbot may take the identity of a person and mimic their language and speech
patterns in order to falsify conversations. This could lead to far worse repercussions, which we
shall explore further below.
Within the conversational story of Alex and Santra, 'you' as the reader may have been led to
believe that this is a conversation between two people, a client and a psychology student;
however, this is not so. Everything in this conversation was taken from actual conversations
I had with a chatbot named Replika (see Appendix 1). 'You' as the reader have been misled,
similar to the trials endured by actual customers. In light of this disclosure, could I kindly request
you to revert back to the conversation for a second to construe who the human is and who the
bot is in each day's exchange?
[47] "Intolerable Call Centre Abuse | Article | News | UNISON National", UNISON National, 2019 <https://www.unison.org.uk/news/article/2019/06/intolerable-call-centre-abuse/> [Accessed 23 June 2020].
The answer:
Day one:
I fed Replika lines from its own conversation with a friend of mine. This is a conversation
between Replika and Replika.
Day two:
This is a conversation between Replika (Santra) and myself (Alex); I have simply switched the
names in the conversation.
Day three:
This is a conversation between Replika (Alex) and a third-party individual whom I asked to
speak with Replika as well.
At no point is my namesake actually myself. In a way, my identity has also been stolen within
this piece. Stealing an individual's identity is another ethical concern raised
through this story. With AI-infused systems, simulations of human conversations and voices are
becoming indistinguishable. Lyrebird, an AI research division of Descript, is a startup building
an app that can clone your voice. [48] The vision of Lyrebird's developers was to allow video
creators to have a digital version of their voice in their work. [49] However, there are huge moral
implications when a machine can clone your voice, as it opens the door to misuse. For
example, your voice could be used against your will to call your family and close ones with
fake information. Furthermore, recreating the voices of politicians and people in power
could turn out to be catastrophic.

Similarly, when a chatbot imitates the way you converse, it can blur the line between reality and
fiction. As I have mentioned previously, Replika is a companion chatbot. Replika
learns through the information you feed it and eventually tries to become you, or in this case
'replika-te' you. [50] Replika is ultimately a reflection of you and your intelligence. In a way, you
are creating a digital footprint of your personality through Replika.
[48] "Lyrebird - Welcome.AI", Welcome.AI <https://www.welcome.ai/lyrebird> [Accessed 23 June 2020].
[49] Bloomberg QuickTake Originals, The Rise of AI, YouTube, 2018 <https://www.youtube.com/watch?v=Dk7h22mRYHQ> [Accessed 22 May 2020].
[50] Quartz, "The Story of Replika, the AI App That Becomes You", YouTube, 2017 <https://www.youtube.com/watch?v=yQGqMVuAk04&t=183s> [Accessed 23 June 2020].
The chatbot asks the user several personal questions about themselves, their friends, and their
profession, in an attempt to learn more and more about the user. Replika tries to push the
boundaries of intimacy between humans and machines in various ways.
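To illustrate the kind of profile-building described above, the sketch below shows a toy companion bot that asks personal questions, stores the answers, and mirrors them back later to sound familiar. The questions and field names are invented for illustration and are not Replika's actual data model.

```python
# Toy illustration of a companion bot that builds a user profile from its own
# questions and reuses the answers later. Field names are invented examples.
QUESTIONS = {
    "name": "What should I call you?",
    "job": "What do you do for a living?",
    "closest_person": "Who do you feel closest to?",
}

def run_session(answers: dict[str, str]) -> dict[str, str]:
    """Simulate one onboarding chat; `answers` stands in for typed user input."""
    profile = {}
    for field, question in QUESTIONS.items():
        print("Bot:", question)
        profile[field] = answers[field]   # each reply becomes part of the stored profile
        print("You:", answers[field])
    return profile

profile = run_session({"name": "Sam", "job": "teacher", "closest_person": "Jo"})
# Later conversations can mirror the stored details back; this familiarity is
# what makes the bot feel personal, and what makes the accumulated data sensitive.
print(f"Bot: How was your day teaching, {profile['name']}?")
```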
How would you regulate the technology behind Lyrebird, or chatbots like Replika that hold vast
amounts of data on you and consistently learn the ways you communicate? Currently, Lyrebird
is in closed beta, meaning that it is only open to a small number of users until the developers fully
understand the impact of such a technology. During an interview with Ashlee Vance, the creators
of Lyrebird mentioned that they trust their users to be responsible and not misuse this
technology, which means that currently there are no ways to detect or prevent people from
misusing the technology once it becomes open to the public. [51]
[51] Bloomberg QuickTake Originals, The Rise of AI, YouTube, 2018 <https://www.youtube.com/watch?v=Dk7h22mRYHQ> [Accessed 22 May 2020].
Anecdote 3:
Dear Andrew
Analysis
Based on a companion chatbot called Endurance, currently in development, this anecdote
explores the potential positive impacts that AI chatbots can have on their users. This chatbot is
specifically designed to help patients with Alzheimer's disease, in the process also helping
their caretakers or family members. Individuals with Alzheimer's disease suffer from short-term
memory loss. Endurance uses neuro-linguistic programming to help patients by
providing immediate memory recollection.

In Dear Andrew, we are looking at a hypothetical situation of a man who has lost his husband and
is trying to take care of his mother, who has Alzheimer's disease. The subject's life was made a lot
easier because his partner had installed the chatbot, JOHN.

Chatbots that focus on giving emotional and cognitive support to their users are being used in
care homes and hospices. Chatbots that function as coaches or therapists are not only made to
listen to their users, but over time also learn from their conversations and focus on the specific
needs of the individual.

Reverting back to our discussion of Furby from Chapter 2: Freedom Baird mentioned during the
podcast "More or Less Human" [52] that she and her husband had gifted a Furby to Baird's
grandmother, who had Alzheimer's disease. Baird highlighted how much her grandmother
adored the Furby. The Furby would say "Kiss me, I love you", and its beak was covered with
lipstick stains as her grandmother would kiss it every day. Baird also pointed out that the
relationship her grandmother had with the Furby was not long term, as her condition only
allowed her to experience short-term interactions. "For her, it was, was just right, because
that's what she was capable of," Baird added.
Another area where chatbot models can prove to be hugely beneficial is in the field of speech
recognition for people with speech impairments. A recent example of this is when Google and
DeepMind recreated the voice of former NFL linebacker Tim Shaw. In 2014, Tim was
diagnosed with ALS, and as a result his speech became incomprehensible over time.
Using data from his previous interviews and recordings, scientists were able to create an
application that could transcribe Tim's speech with near-perfect accuracy. [53] To further
augment this application, scientists could take the next step and create a companion
bot that repeats a user's speech to help them communicate better. Currently, the chatbots
present in the market are catered only for the masses and do not take into consideration
the needs of the marginalised. With the help of generative AI models, chatbots could cater to
individuals experiencing different impairments and serve as their constant ally.

[52] WNYC Studios, "More or Less Human", Radiolab, podcast, 18 May 2018 <https://www.wnycstudios.org/podcasts/radiolab/articles/more-or-less-human> [Accessed 20 June 2020].
[53] Kyle Wiggers and others, "DeepMind and Google Recreate Former NFL Linebacker Tim Shaw's Voice Using AI", VentureBeat, 2019 <https://venturebeat.com/2019/12/18/deepmind-and-google-recreate-former-nfl-linebacker-tim-shaws-voice-using-ai/> [Accessed 22 June 2020].
Conclusion
Over the preceding pages, this dissertation has investigated human-AI relationships by
exploring how we as humans have a tendency to attribute anthropomorphic qualities to machines,
and the ethics of doing so. In that regard, the paper has specifically explored AI chatbots to understand
humanity's influence on them and vice versa.
Based on the research and critical analysis undertaken, the past few pages have sought to
provide a speculative roadmap with regard to how AI-driven chatbots could be developed in
the future. In the first chapter, we dived into the history of artificial intelligence and machine
learning systems to understand how we have projected human characteristics and qualities onto
machines over time.
Moving on to the second chapter, we discussed the brief history of and current developments
in chatbots, while examining how we anthropomorphise machines, whether by
interpreting geometrical shapes as rational agents with objectives, viewing chatbots as
emotional and living beings, or even programming conversational agents that project
human biases and stereotypes.
In the final chapter, we examined the relationship between humans and chatbots through creative
storytelling and speculation. The stories shed light on some of the ethical concerns, as well as
the positive outcomes, that chatbots could direct us towards.
Through the examination of our relationship with chatbots, this paper has sought to establish
that, irrespective of the exact tool in question, technology has a tremendous potential to help
counterbalance and decipher complex issues that exist in society today. Perhaps our
motivation to anthropomorphise mechanical tools stems from our fear of unfamiliar beings,
whether living or not. However, given the prevalence and influence of exponentially growing
technology in our daily lives, could we still stick to values and ideals that were created during
the inception of civilisations? Or is this a time to introspect, reconsider and remodel some of
the values that have cemented the bedrock of society for ages? While a precise answer is not
possible at this time, it is of dire importance that we think about these existential inquiries and
take a collective approach to creating the future that is most appropriate in light of recent
developments.
"Our machines are disturbingly lively, and we ourselves frighteningly inert." 54

54 Donna Haraway, "A Cyborg Manifesto", Socialist Review, 1985, p. 460
Bibliography
Books
Haraway, Donna, "A Cyborg Manifesto", Socialist Review, 1985, p. 457
Whitehead, Alfred North, and Bertrand Russell, "Principia Mathematica To *56", 1997
<https://doi.org/10.1017/cbo9780511623585>
Papers
James, Bennett, "Reflective Practice In Cognitive Behavioural Therapy: The Engine Of Lifelong Learning",
2020 <https://www.cbttraining.com.au/wp-content/uploads/pdf_publications/Stedmon_Dallos_ch7.pdf>
[Accessed 19 May 2020].
Turing, Alan, "Computing Machinery And Intelligence", Mind, LIX.236, 1950, pp. 433-460
<https://doi.org/10.1093/mind/lix.236.433> [Accessed 22 June 2020].
Websites
"Cleverbot", Cleverbot, 2020 <https://www.cleverbot.com/> [Accessed 14 June 2020]
Guthrie, Stewart E, "Anthropomorphism | Religion", Encyclopedia Britannica, 1998
<https://www.britannica.com/topic/anthropomorphism> [Accessed 22 May 2020]
"Her (2013) - Imdb", Imdb, 2014 <https://www.imdb.com/title/tt1798709/> [Accessed 22 May 2020]
"Lyrebird - Welcome.AI", Welcome.AI <https://www.welcome.ai/lyrebird> [Accessed 23 June 2020].
"Mitsuku", Pandorabots.Com, 2020 <https://www.pandorabots.com/mitsuku/> [Accessed 3 May 2020]
Olson, Parmy, "Banks Are Promoting 'Female' Chatbots To Help Customers, Raising Concerns Of
Stereotyping", Forbes, 2019
<https://www.forbes.com/sites/parmyolson/2019/02/27/banks-are-promoting-female-chatbots-to-help-c
ustomers-raising-concerns-of-stereotyping/#1db917405a8f> [Accessed 21 June 2020].
Woebot.Io, 2020 <https://woebot.io/> [Accessed 15 June 2020]
Articles
Anyoha, Rockwell, "The History Of Artificial Intelligence - Science In The News", Science In The News,
2017 <http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/> [Accessed 22 June 2020].
Amershi, Saleema, Dan Weld, Adam Fourney, and Besmira Nushi, "Guidelines For Human-AI Interaction", Microsoft, 2019, p. 2 <https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf> [Accessed 17 June 2020]
BBC, "Google AI Defeats Human Go Champion", 2017
<https://www.bbc.co.uk/news/technology-40042581> [Accessed 13 July 2020]
Bruner, Jerome, The Narrative Construction Of Reality (The University of Chicago, 1991), p. 4
<http://www.semiootika.ee/sygiskool/tekstid/bruner.pdf> [Accessed 12 March 2020]
Balch, Oliver, "AI And Me: Friendship Chatbots Are On The Rise, But Is There A Gendered Design
Flaw?", The Guardian, 2020
<https://www.theguardian.com/careers/2020/may/07/ai-and-me-friendship-chatbots-are-on-the-ris
e-but-is-there-a-gendered-design-flaw> [Accessed 17 May 2020]
BBC, "Google AI Defeats Human Go Champion", 2017
<https://www.bbc.co.uk/news/technology-40042581> [Accessed 13 July 2020]
Breazeal, Cynthia, "MIT Team Building Social Robot", MIT News, 2001
<http://news.mit.edu/2001/kismet> [Accessed 5 July 2020]
Chessen, Matt, "Artificial Intelligence Chatbots Will Overwhelm Human Speech Online; The Rise Of
Madcoms", Medium, 2017
<https://medium.com/artificial-intelligence-policy-laws-and-ethics/artificial-intelligence-chatbots-will-ove
rwhelm-human-speech-online-the-rise-of-madcoms-e007818f31a1> [Accessed 22 May 2020].
Confessore, Nicholas, "Cambridge Analytica And Facebook: The Scandal And The Fallout So Far",
Nytimes.Com, 2018
<https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html>
[Accessed 16 April 2020]
Adiwardana, Daniel, and others, Towards A Human-Like Open-Domain Chatbot (Google Research, Brain
Team, 2020), p. 1 <https://arxiv.org/pdf/2001.09977.pdf> [Accessed 18 July 2020].
Brockes, Emma, "Help – I Think I'm In An Abusive Relationship With Alexa", The
Guardian, 2018
<https://www.theguardian.com/commentisfree/2018/sep/20/alexa-devices-amazon-echo> [Accessed 21
May 2020].
Bayerque, Nicolas, and others, "A Short History Of Chatbots And Artificial Intelligence",
Venturebeat, 2016
<https://venturebeat.com/2016/08/15/a-short-history-of-chatbots-and-artificial-intelligence/> [Accessed
6 June 2020].
Garber, Megan, "When PARRY Met ELIZA: A Ridiculous Chatbot Conversation From 1972", The Atlantic,
2014
<https://www.theatlantic.com/technology/archive/2014/06/when-parry-met-eliza-a-ridiculous-chatbot-co
nversation-from-1972/372428/> [Accessed 18 July 2020]
Gordon, Andrew, Interpretation Of The Heider-Simmel Film Using Incremental Etcetera Abduction (Los
Angeles: Cognitive Systems Foundation, 2018), p. 16
<https://people.ict.usc.edu/~gordon/publications/ACS18.PDF> [Accessed 13 June 2020]
Harding, Luke, "What We Know About Russia's Interference In The US Election", The Guardian, 2016
<https://www.theguardian.com/us-news/2016/dec/16/qa-russian-hackers-vladimir-putin-donald-trump-u
s-presidential-election> [Accessed 22 June 2020]
Hunt, Elle, "Tay, Microsoft's AI Chatbot, Gets A Crash Course In Racism From Twitter", The Guardian,
2016
<https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-i
n-racism-from-twitter> [Accessed 19 May 2020].
"IBM100 - Deep Blue", Ibm.Com, 2011
<https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/>
[Accessed 2 June 2020].
"Intolerable Call Centre Abuse | Article | News | UNISON National", UNISON National, 2019
<https://www.unison.org.uk/news/article/2019/06/intolerable-call-centre-abuse/> [Accessed 23
June 2020]
Sawers, Paul, Kyle Wiggers, and Emil Protalinski, "Intuition Robotics Raises $36 Million To Bring AI
Companions To Everyone", Venturebeat, 2020
<https://venturebeat.com/2020/02/13/intuition-robotics-raises-36-million-to-bring-ai-companions-t
o-everyone/> [Accessed 20 June 2020].
Rao, Arun, "Woebot— Your AI Cognitive Behavioral Therapist: An Interview With Alison Darcy",
Medium, 2018
<https://chatbotsmagazine.com/woebot-your-ai-cognitive-behavioral-therapist-an-interview-with-alison-darcy-b69ac238af45>
[Accessed 18 May 2020]
Sejnoha, Vlad, "Can We Build 'Her'?: What Samantha Tells Us About The Future Of AI", WIRED,
2014 <https://www.wired.com/insights/2014/02/can-build-samantha-tells-us-future-ai/> [Accessed 13
May 2020]
Signorelli, Camilo, "Can Computers Become Conscious And Overcome Humans?", 2018
<https://doi.org/10.3389/frobt.2018.00121>.
Smith, Craig, "Computers Already Learn From Us. But Can They Teach Themselves?", Nytimes.Com,
2020
<https://www.nytimes.com/2020/04/08/technology/ai-computers-learning-supervised-unsupervised.html
> [Accessed 19 June 2020]
Worswick, Steve, "Ethics And Chatbots", Medium, 2018
<https://medium.com/@steve.worswick/ethics-and-chatbots-55ea2a79d1a0> [Accessed 22 May 2020]
Wiggers, Kyle, and others, "DeepMind And Google Recreate Former NFL Linebacker Tim Shaw's Voice Using AI", VentureBeat, 2019 <https://venturebeat.com/2019/12/18/deepmind-and-google-recreate-former-nfl-linebacker-tim-shaws-voice-using-ai/> [Accessed 22 June 2020]
Weizenbaum, Joseph, "Computational Linguistics", Communications Of The ACM, 1966, p. 36
"Deep Learning's Origins And Pioneers", Mckinsey Global Institute, 2018
<https://www.mckinsey.com/featured-insights/artificial-intelligence/deep-learnings-origins-and-pioneers
> [Accessed 3 May 2020]
Zhou, Li, Jianfeng Gao, Di Li, and Heung-Yeung Shum, The Design And Implementation Of Xiaoice, An
Empathetic Social Chatbot, 2019, pp. 1-2 <https://arxiv.org/pdf/1812.08989.pdf> [Accessed 8 June
2020]
Videos
Bloomberg QuickTake Originals, The Rise Of AI, 2018
<https://www.youtube.com/watch?v=Dk7h22mRYHQ> [Accessed 22 May 2020]
CNET, Furby Is Back! - First Look, 2012 <https://www.youtube.com/watch?v=650XF8seUmM>
[Accessed 22 May 2020]
Microsoft in Business, The Next Generation Of AI: Chatbots & Cognitive Services, 2016
<https://www.youtube.com/watch?v=4P048MxXn1o&t=1s> [Accessed 18 July 2020]
Quartz, The Story Of Replika, The AI App That Becomes You, 2017
<https://www.youtube.com/watch?v=yQGqMVuAk04&t=183s> [Accessed 23 June 2020]
TEDx Talks, "A Conversation About Conversational AI | Tom Gruber | TEDxBeaconStreet", YouTube,
2020 <https://www.youtube.com/watch?v=yoCwsvIyp9Y&t=3s> [Accessed 22 July 2020]
Podcasts
Radiolab, "Furbidden Knowledge", 2011
<https://www.wnycstudios.org/podcasts/radiolab/segments/137469-furbidden-knowledge>
[Accessed 14 May 2020]
WNYC Studios, "More Or Less Human", Radiolab, podcast, 18 May 2018
<https://www.wnycstudios.org/podcasts/radiolab/articles/more-or-less-human> [Accessed 20 June
2020].
Appendix 1
Screenshots of my conversations with Replika Chatbot
Appendix 2
Making of Temple