Roger Clarke's 'Asimov's Laws of Robotics'

My purpose here is to determine whether or not Asimov's fiction vindicates the laws he expounded. Does he successfully demonstrate that robotic technology can be applied in a responsible manner to potentially powerful, semi-autonomous and, in some sense, intelligent machines? To reach a conclusion, we must examine many issues emerging from Asimov's fiction.

History

The robot notion derives from two strands of thought, humanoids and automata. The notion of a humanoid (or human-like nonhuman) dates back to Pandora in The Iliad, 2,500 years ago and even further. Egyptian, Babylonian, and ultimately Sumerian legends fully 5,000 years old reflect the widespread image of the creation, with god-men breathing life into clay models. One variation on the theme is the idea of the golem, associated with the Prague ghetto of the sixteenth century. This clay model, when breathed into life, became a useful but destructive ally.

The golem was an important precursor to Mary Shelley's Frankenstein: The Modern Prometheus (1818). This story combined the notion of the humanoid with the dangers of science (as suggested by the myth of Prometheus, who stole fire from the gods to give it to mortals). In addition to establishing a literary tradition and the genre of horror stories, Frankenstein also imbued humanoids with an aura of ill fate.

Automata, the second strand of thought, are literally "self-moving things" and have long interested mankind. Early models depended on levers and wheels, or on hydraulics. Clockwork technology enabled significant advances after the thirteenth century, and later steam and electro-mechanics were also applied. The primary purpose of automata was entertainment rather than employment as useful artifacts. Although many patterns were used, the human form always excited the greatest fascination. During the twentieth century, several new technologies moved automata into the utilitarian realm. Geduld and Gottesman 8 and Frude 2 review the chronology of clay model, water clock, golem, homunculus, android, and cyborg that culminated in the contemporary concept of the robot.

The term robot derives from the Czech word robota, meaning forced work or compulsory service, or robotnik, meaning serf. It was first used by the Czech playwright Karel Čapek in 1918 in a short story and again in his 1921 play R.U.R., which stood for Rossum's Universal Robots. Rossum, a fictional Englishman, used biological methods to invent and mass-produce "men" to serve humans. Eventually they rebelled, became the dominant race, and wiped out humanity. The play was soon well known in English-speaking countries.

Definition

Undeterred by its somewhat chilling origins (or perhaps ignorant of them), technologists of the 1950s appropriated the term robot to refer to machines controlled by programs. A robot is "a reprogrammable multifunctional device designed to manipulate and/or transport material through variable programmed motions for the performance of a variety of tasks" 9. The term robotics, which Asimov claims he coined in 1942 10, refers to "a science or art involving both artificial intelligence (to reason) and mechanical engineering (to perform physical acts suggested by reason)" 11.

As currently defined, robots exhibit three key elements:


death. It is not for him to decide. He may not harm a human - variety skunk or variety angel." 7 On the other hand they might not, as when a robot tells a human, "In conflict between your safety and that of another, I must guard yours." 22 In another short story, robots agree that they "must obey a human being who is fit by mind, character, and knowledge to give me that order." Ultimately, this leads the robot to "disregard shape and form in judging between human beings" and to recognize his companion robot not merely as human but as a human "more fit than the others." 18 Many subtle problems can be constructed. For example, a person might try forcing a robot to comply with an instruction to harm a human (and thereby violate the first law) by threatening to kill himself unless the robot obeys.

How is a robot to judge the trade-off between a high probability of lesser harm to one person versus a low probability of more serious harm to another? Asimov's stories refer to this issue but are somewhat inconsistent with each other and with the strict wording of the first law.

More serious difficulties arise in relation to the valuation of multiple humans. The first law does not even contemplate the simple case of a single terrorist threatening many lives. In a variety of stories, however, Asimov interprets the law to recognize circumstances in which a robot may have to injure or even kill one or more humans to protect one or more others: "The Machine cannot harm a human being more than minimally, and that only to save a greater number" 23 (emphasis added). And again: "The First Law is not absolute. What if harming a human being saves the lives of two others, or three others, or even three billion others? The robot may have thought that saving the Federation took precedence over the saving of one life." 24

These passages value humans exclusively on the basis of numbers. A later story includes this justification: "To expect robots to make judgments of fine points such as talent, intelligence, the general usefulness to society, has always seemed impractical. That would delay decision to the point where the robot is effectively immobilized. So we go by numbers." 18

A robot's cognitive powers might be sufficient for distinguishing between attacker and attackee, but the first law alone does not provide a robot with the means to distinguish between a "good" person and a "bad" one. Hence, a robot may have to constrain a "good" attackee's self-defense to protect the "bad" attacker from harm. Similarly, disciplining children and prisoners may be difficult under the laws, which would limit robots' usefulness for supervision within nurseries and penal institutions. 22 Only after many generations of self-development does a humanoid robot learn to reason that "what seemed like cruelty [to a human] might, in the long run, be kindness." 12

The more subtle life-and-death cases, such as assistance in the voluntary euthanasia of a fatally ill or injured person to gain immediate access to organs that would save several other lives, might fall well outside a robot's appreciation. Thus, the first law would require a robot to protect the threatened human, unless it was able to judge the steps taken to be the least harmful strategy. The practical solution to such difficult moral questions would be to keep robots out of the operating theater. 22

The problem underlying all of these issues is that most probabilities used as input to normative decision models are not objective; rather, they are estimates of probability based on human (or robot) judgment. The extent to which judgment is central to robotic behavior is summed up in the cynical rephrasing of the first law by the major (human) character in the four novels: "A robot must not hurt a human being, unless he can think of a way to prove it is for the human being's ultimate good after all." 19
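The numbers-only criterion, and the dependence on subjective probability estimates that the passage above identifies, can be made concrete. The following is a minimal hypothetical sketch, not anything proposed by Clarke or Asimov; the Outcome and Action classes and the example figures are invented for illustration.

```python
# Hypothetical sketch: comparing candidate actions by expected harm,
# weighting humans purely "by numbers" as the stories describe.

from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    probability: float      # the robot's (subjective) estimate that the outcome occurs
    humans_harmed: int      # number of humans harmed if it does

@dataclass
class Action:
    name: str
    outcomes: List[Outcome]

def expected_harm(action: Action) -> float:
    """Expected number of humans harmed, using only head counts."""
    return sum(o.probability * o.humans_harmed for o in action.outcomes)

def first_law_choice(actions: List[Action]) -> Action:
    """Choose the action minimizing expected harm to humans."""
    return min(actions, key=expected_harm)

# The trade-off from the text: high probability of lesser harm to one person
# versus low probability of more serious harm to several others.
restrain = Action("restrain one person", [Outcome(0.9, 1)])
wait     = Action("do nothing",          [Outcome(0.1, 3)])
print(first_law_choice([restrain, wait]).name)   # "do nothing" (0.3 < 0.9)
```

The point of the sketch is that the probabilities driving the choice are the robot's own estimates, which is precisely the judgmental element the article questions.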


* The sheer complexity

To cope with the judgmental element in robot decision making, Asimov's later novels introduced a further complication: "On ... [worlds other than Earth] ... the Third Law is distinctly stronger in comparison to the Second Law. ... An order for self-destruction would be questioned and there would have to be a truly legitimate reason for it to be carried through - a clear and present danger." 16 And again, "Harm through an active deed outweighs, in general, harm through passivity - all things being reasonably equal. ... [A robot is] always to choose truth over nontruth, if the harm is roughly equal in both directions. In general, that is." 16

The laws are not absolutes, and their force varies with the individual machine's programming, the circumstances, the robot's previous instructions, and its experience. To cope with the inevitable logical complexities, a human would require not only a predisposition to rigorous reasoning and a considerable education, but also a great deal of concentration and composure. (Alternatively, of course, the human may find it easier to defer to a robot suitably equipped for fuzzy-reasoning-based judgment.)

The strategies as well as the environmental variables involve complexity. "You must not think ... that robotic response is a simple yes or no, up or down, in or out. ... There is the matter of speed of response." 16 In some cases (for example, when a human must be physically restrained), the degree of strength to be applied must also be chosen.

* The scope for dilemma and deadlock

A deadlock problem was the key feature of the short story in which Asimov first introduced the laws. He constructed the type of stand-off commonly referred to as the "Buridan's ass" problem. It involved a balance between a strong third-law self-protection tendency, causing the robot to try to avoid a source of danger, and a weak second-law order to approach that danger. "The conflict between the various rules is [meant to be] ironed out by the different positronic potentials in the brain," but in this case the robot "follows a circle around [the source of danger], staying on the locus of all points of ... equilibrium." 5

Deadlock is also possible within a single law. An example under the first law would be two humans threatened with equal danger and the robot unable to contrive a strategy to protect one without sacrificing the other. Under the second law, two humans might give contradictory orders of equivalent force. The later novels address this question with greater sophistication:

What was troubling the robot was what roboticists called an equipotential of contradiction on the second level. Obedience was the Second Law and [the robot] was suffering from two roughly equal and contradictory orders. Robot-block was what the general population called it or, more frequently, roblock for short ... [or] `mental freeze-out.' No matter how subtle and intricate a brain might be, there is always some way of setting up a contradiction. This is a fundamental truth of mathematics. 16

Clearly, robots subject to such laws need to be programmed to recognize deadlock and either choose arbitrarily among the alternative strategies or arbitrarily modify an arbitrarily chosen strategy variable (say, move a short distance in any direction) and reevaluate the situation: "If A and not-A are precisely equal misery-producers according to his judgment, he chooses one or the other in a completely unpredictable way and then follows that unquestioningly. He does not go into mental freeze-out." 16
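The behavior described in the passage above, recognizing a deadlock and breaking it arbitrarily rather than freezing, can be sketched as follows. This is a hypothetical illustration, not Asimov's positronic mechanism; the misery function and tolerance parameter are invented.

```python
# Hypothetical sketch: resolving a "roblock" by breaking near-exact ties with
# an arbitrary but committed choice, instead of going into mental freeze-out.

import random
from typing import Callable, List, TypeVar

Strategy = TypeVar("Strategy")

def choose_strategy(strategies: List[Strategy],
                    misery: Callable[[Strategy], float],
                    tolerance: float = 1e-9) -> Strategy:
    """Pick the least-misery strategy; on an (approximate) tie, pick one
    of the tied strategies at random and commit to it."""
    best_value = min(misery(s) for s in strategies)
    tied = [s for s in strategies if misery(s) - best_value <= tolerance]
    return random.choice(tied)   # arbitrary choice rather than deadlock

# Two equally threatened humans (equal "misery" either way): the robot still acts.
print(choose_strategy(["protect A", "protect B"], misery=lambda s: 1.0))
```

A tolerance is used because "precisely equal" outcomes would rarely compare exactly equal once estimated numerically.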


had the laws imposed in precisely the manner intended; and

was at all times subject to them - that is, they could not be overridden or modified.

It is important to know how malprogramming and modification of the laws' implementation in a robot (whether intentional or unintentional) can be prevented, detected, and dealt with.

In an early short story, robots were "rescuing" humans whose work required short periods of relatively harmless exposure to gamma radiation. Officials obtained robots with the first law modified so that they were incapable of injuring a human but under no compulsion to prevent one from coming to harm. This clearly undermined the remaining part of the first law, since, for example, a robot could drop a heavy weight toward a human, knowing that it would be fast enough and strong enough to catch it before it harmed the person. However, once gravity had taken over, the robot would be free to ignore the danger. 25 Thus, a partial implementation was shown to be risky, and the importance of robot audit underlined. Other risks include trapdoors, Trojan horses, and similar devices in the robot's programming.

A further imponderable is the effect of hostile environments and stress on the reliability and robustness of robots' performance in accordance with the laws. In one short story, it transpires that "The Machine That Won the War" had been receiving only limited and poor-quality data as a result of enemy action against its receptors and had been processing it unreliably because of a shortage of experienced maintenance staff. Each of the responsible managers had, in the interests of national morale, suppressed that information, even from one another, and had separately and independently "introduced a number of necessary biases" and "adjusted" the processing parameters in accordance with intuition. The executive director, even though unaware of the adjustments, had placed little reliance on the machine's output, preferring to carry out his responsibility to mankind by exercising his own judgment. 27

A major issue in military applications generally 28 is the impossibility of contriving effective compliance tests for complex systems subject to hostile and competitive environments. Asimov points out that the difficulties of assuring compliance will be compounded by the design and manufacture of robots by other robots. 22

* Robot autonomy

Sometimes humans may delegate control to a robot and find themselves unable to regain it, at least in a particular context. One reason is that to avoid deadlock, a robot must be capable of making arbitrary decisions. Another is that the laws embody an explicit ability for a robot to disobey an instruction, by virtue of the overriding first law.

In an early Asimov short story, a robot "knows he can keep [the energy beam] more stable than we [humans] can, since he insists he's the superior being, so he must keep us out of the control room [in accordance with the first law]." 29 The same scenario forms the basis of one of the most vivid episodes in science fiction, HAL's attempt to wrest control of the spacecraft from Bowman in 2001: A Space Odyssey. Robot autonomy is also reflected in a lighter moment in one of Asimov's later novels, when a character says to his companion, "For now I must leave you. The ship is coasting in for a landing, and I must stare intelligently at the computer that controls it, or no one will believe I am the captain." 14

In extreme cases, robot behavior will involve subterfuge, as the machine determines that the human, for his or


With their fictional "positronic" brains imprinted with the mandate to (in order of priority) prevent harm to humans, obey their human masters, and protect themselves, Asimov's robots had to deal with great complexity. In a given situation, a robot might be unable to satisfy the demands of two equally powerful mandates and go into "mental freeze-out." Semantics is also a problem. As demonstrated in Part 1 of this article (Computer, December 1993, pp. 53-61), language is much more than a set of literal meanings, and Asimov showed us that a machine trying to distinguish, for example, who or what is human may encounter many difficulties that humans themselves handle easily and intuitively. Thus, robots must have sufficient capabilities for judgment - capabilities that can cause them to frustrate the intentions of their masters when, in a robot's judgment, a higher order law applies.

As information technology evolves and machines begin to design and build other machines, the issue of human control gains greater significance. In time, human values tend to change; the rules reflecting these values, and embedded in existing robotic devices, may need to be modified. But if they are implicit rather than explicit, with their effects scattered widely across a system, they may not be easily replaceable. Asimov himself discovered many contradictions and eventually revised the Laws of Robotics.

Asimov's 1985 revised Laws of Robotics

The Zeroth law

After introducing the original three laws, Asimov detected, as early as 1950, a need to extend the first law, which protected individual humans, so that it would protect humanity as a whole. Thus, his calculating machines "have the good of humanity at heart through the overwhelming force of the First Law of Robotics" 1 (emphasis added). In 1985 he developed this idea further by postulating a "zeroth" law that placed humanity's interests above those of any individual while retaining a high value on individual human life. 2 The revised set of laws is shown in the sidebar.

Asimov pointed out that under a strict interpretation of the first law, a robot would protect a person even if the survival of humanity as a whole was placed at risk. Possible threats include annihilation by an alien or mutant human race, or by a deadly virus. Even when a robot's own powers of reasoning led it to conclude that mankind as a whole was doomed if it refused to act, it was nevertheless constrained: "I sense the oncoming of catastrophe ... [but] I can only follow the Laws." 2

In Asimov's fiction the robots are tested by circumstances and must seriously consider whether they can harm a human to save humanity. The turning point comes when the robots appreciate that the laws are indirectly modifiable by roboticists through the definitions programmed into each robot: "If the Laws of Robotics, even the First Law, are not absolutes, and if human beings can modify them, might it not be that perhaps, under proper conditions, we ourselves might mod - " 2 Although the robots are prevented by imminent "roblock" (robot block, or deadlock) from even completing the sentence, the groundwork has been laid.

Later, when a robot perceives a clear and urgent threat to mankind, it concludes, "Humanity as a whole is more important than a single human being. There is a law that is greater than the First Law: `A robot may not injure humanity, or through inaction, allow humanity to come to harm.'" 2

Defining "humanity"
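As a hypothetical aside to the revised-laws discussion above (not from the article or the fiction), the strict priority ordering it describes, zeroth law over first, first over second, second over third, can be sketched as a lexicographic comparison. The action names, scoring functions, and figures below are invented for illustration.

```python
# Hypothetical sketch: evaluating candidate actions against a strictly
# prioritized rule set (Zeroth > First > Second > Third).

from typing import Callable, List, Tuple

# Each law maps a candidate action to a "violation score"; 0.0 means no conflict.
PrioritizedLaws = List[Tuple[str, Callable[[str], float]]]

def choose_action(actions: List[str], laws: PrioritizedLaws) -> str:
    """Compare actions lexicographically: a higher-priority law always
    dominates any number of lower-priority considerations."""
    def profile(action: str) -> Tuple[float, ...]:
        return tuple(score(action) for _, score in laws)
    return min(actions, key=profile)

laws: PrioritizedLaws = [
    ("Zeroth: no harm to humanity",  lambda a: 1.0 if a == "obey order that dooms humanity" else 0.0),
    ("First: no harm to a human",    lambda a: 1.0 if "harm one person" in a else 0.0),
    ("Second: obey human orders",    lambda a: 0.0 if a.startswith("obey") else 1.0),
    ("Third: protect own existence", lambda a: 0.0),
]

print(choose_action(["obey order that dooms humanity", "disobey and harm one person"], laws))
# -> "disobey and harm one person": the zeroth law outranks both the first and second.
```

A lexicographic comparison captures the idea that no amount of advantage under a lower-priority law can outweigh a violation of a higher-priority one.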


orders given it by superordinate robots except where such orders would conflict with a higher order law." Such a law would fall between the second and third laws.

Furthermore, subordinate robots should protect their superordinate robot. This could be implemented as an extension or corollary to the third law; that is, to protect itself, a robot would have to protect another robot on which it depends. Indeed, a subordinate robot may need to be capable of sacrificing itself to protect its robot overseer. Thus, an additional law superior to the third law but inferior to orders from either a human or a robot overseer seems appropriate: "A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher order law."

The wording of such laws should allow for nesting, since robot overseers may report to higher level robots. It would also be necessary to determine the form of the superordinate relationships (see the sketch after the sidebar extract below):

a tree, in which each robot has precisely one immediate overseer, whether robot or human;

a constrained network, in which each robot may have several overseers but restrictions determine who may act as an overseer; or

an unconstrained network, in which each robot may have any number of other robots or persons as overseers.

This issue of a command structure is far from trivial, since it is central to democratic processes that no single entity shall have ultimate authority. Rather, the most senior entity in any decision-making hierarchy must be subject to review and override by some other entity, exemplified by the balance of power in the three branches of government and the authority of the ballot box. Successful, long-lived systems involve checks and balances in a lattice rather than a mere tree structure. Of course, the structures and processes of human organizations may prove inappropriate for robotic organization. In any case, additional laws of some kind would be essential to regulate relationships among robots.

The sidebar shows an extended set of laws, one that incorporates the additional laws postulated in this section. Even this set would not always ensure appropriate robotic behavior. However, it does reflect the implicit laws that emerge in Asimov's fiction while demonstrating that any realistic set of design principles would have to be considerably more complex than Asimov's 1940 or 1985 laws. This additional complexity would inevitably exacerbate the problems identified earlier in this article and create new ones.

An Extended Set of the Laws of Robotics

The Meta-Law
A robot may not act unless its actions are subject to the Laws of Robotics

Law Zero
A robot may not injure humanity, or, through inaction, allow humanity to come to harm

Law One
A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless
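Returning to the command-structure alternatives listed earlier in this section, the three forms of superordinate relationship can be represented and checked directly. This is a hypothetical sketch; the mapping type, the entity names, and the constraint checks are invented for illustration.

```python
# Hypothetical sketch: representing overseer relationships and checking which
# of the command-structure forms (tree, constrained network) they satisfy.

from typing import Dict, Set

# Map each robot to the set of overseers (robots or humans) it reports to.
Overseers = Dict[str, Set[str]]

def is_tree(rel: Overseers) -> bool:
    """Tree: every robot has precisely one immediate overseer."""
    return all(len(bosses) == 1 for bosses in rel.values())

def is_constrained_network(rel: Overseers, allowed: Set[str]) -> bool:
    """Constrained network: multiple overseers allowed, but only entities
    from an approved set may act as an overseer."""
    return all(bosses <= allowed for bosses in rel.values())

relationships: Overseers = {
    "factory-robot-1": {"floor-overseer-robot"},
    "factory-robot-2": {"floor-overseer-robot", "site-manager"},
    "floor-overseer-robot": {"site-manager"},
}

print(is_tree(relationships))                                             # False
print(is_constrained_network(relationships,
                             {"floor-overseer-robot", "site-manager"}))   # True
```

An unconstrained network is simply the absence of both checks; the article's point is that choosing among these forms is a governance question, not merely a data-structure one.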


The Laws of Robotics designate no particular class of humans (not even a robot's owner) as more deserving of protection or obedience than another. A human might establish such a relationship by command, but the laws give such a command no special status: another human could therefore countermand it. In short, the laws reflect the humanistic and egalitarian principles that theoretically underlie most democratic nations.

The laws therefore stand in stark contrast to our conventional notions about an information technology artifact, whose owner is implicitly assumed to be its primary beneficiary. An organization shapes an application's design and use for its own benefit. Admittedly, during the last decade users have been given greater consideration in terms of both the human-machine interface and participation in system development. But that trend has been justified by the better returns the organization can get from its information technology investment rather than by any recognition that users are stakeholders with a legitimate voice in decision making. The interests of other affected parties are even less likely to be reflected.

In this era of powerful information technology, professional bodies of information technologists need to consider:

identification of stakeholders and how they are affected;

prior consultation with stakeholders;

quality assurance standards for design, manufacture, use, and maintenance;

liability for harm resulting from either malfunction or use in conformance with the designer's intentions; and

complaint-handling and dispute-resolution procedures.

Once any resulting standards reach a degree of maturity, legislatures in the many hundreds of legal jurisdictions throughout the world would probably have to devise enforcement procedures.

The interests of people affected by modern information technology applications have been gaining recognition. For example, consumer representatives are now being involved in the statement of user requirements and the establishment of the regulatory environment for consumer electronic-funds-transfer systems. This participation may extend to the logical design of such systems. Other examples are trade-union negotiations with employers regarding technology-enforced change, and the publication of software quality assurance standards.

For large-scale applications of information technology, governments have been called upon to apply procedures like those commonly used in major industrial and social projects. Thus, commitment might have to be deferred pending dissemination and public discussion of independent environmental or social impact statements. Although organizations that use information technology might see this as interventionism, decision making and approval for major information technology applications may nevertheless become more widely representative.

Closed-system versus open-system thinking

Computer-based systems no longer comprise independent machines each serving a single location. The marriage of computing with telecommunications has produced multicomponent systems designed to support all elements of a widely dispersed organization. Integration hasn't been simply geographic, however. The practice of information systems has matured since the early years when existing manual systems were automated largely without procedural change. Developers now seek payback via the rationalization of


existing systems and varying degrees of integration among previously separate functions. With the advent of strategic and interorganizational systems, economies are being sought at the level of industry sectors, and functional integration increasingly occurs across corporate boundaries.

Although programmers can no longer regard the machine as an almost entirely closed system with tightly circumscribed sensory and motor capabilities, many habits of closed-system thinking remain. When systems have multiple components, linkages to other systems, and sophisticated sensory and motor capabilities, the scope needed for understanding and resolving problems is much broader than for a mere hardware/software machine. Human activities in particular must be perceived as part of the system. This applies to manual procedures within systems (such as reading dials on control panels), human activities on the fringes of systems (such as decision making based on computer-collated and -displayed information), and the security of the user's environment (automated teller machines, for example). The focus must broaden from mere technology to technology in use.

General systems thinking leads information technologists to recognize that relativity and change must be accommodated. Today, an artifact may be applied in multiple cultures where language, religion, laws, and customs differ. Over time, the original context may change. For example, models for a criminal justice system - one based on punishment and another based on redemption - may alternately dominate social thinking. Therefore, complex systems must be capable of adaptation.

Blind acceptance of technological and other imperatives

Contemporary utilitarian society seldom challenges the presumption that what can be done should be done. Although this technological imperative is less pervasive than people generally think, societies nevertheless tend to follow where their technological capabilities lead. Related tendencies include the economic imperative (what can be done more efficiently should be) and the marketing imperative (any effective demand should be met). An additional tendency might be called the "information imperative," the dominance of administrative efficiency, information richness, and rational decision making. However, the collection of personal data has become so pervasive that citizens and employees have begun to object.

The greater a technology's potential to promote change, the more carefully a society should consider the desirability of each application. Complementary measures that may be needed to ameliorate its negative effects should also be considered. This is a major theme of Asimov's stories, as he explores the hidden effects of technology. The potential impact of information technology is so great that it would be inexcusable for professionals to succumb blindly to the economic, marketing, information, technological, and other imperatives. Application software professionals can no longer treat the implications of information technology as someone else's problem but must consider them as part of the project. 15

Human acceptance of robots

In Asimov's stories, humans develop affection for robots, particularly humaniform robots. In his very first short story, a little girl is too closely attached to Robbie the Robot for her parents' liking. 12 In another early story, a woman starved for affection from her husband, and sensitively assisted by a humanoid robot to increase her self-confidence, entertains thoughts approaching love toward it/him. 16


Nonhumaniforms, such as conventional industrial robots and large, highly dispersed robotic systems (such as warehouse managers, ATMs, and EFT/POS systems), seem less likely to elicit such warmth. Yet several studies have found a surprising degree of identification by humans with computers. 17,18 Thus, some hitherto exclusively human characteristics are being associated with computer systems that don't even exhibit typical robotic capabilities.

Users must be continually reminded that the capabilities of hardware/software components are limited:

they contain many inherent assumptions;

they are not flexible enough to cope with all of the manifold exceptions that inevitably arise;

they do not adapt to changes in their environment; and

authority is not vested in hardware/software components but rather in the individuals who use them.

Educational institutions and staff training programs must identify these limitations; yet even this is not sufficient: The human-machine interface must reflect them. Systems must be designed so that users are required to continually exercise their own expertise, and system output should not be phrased in a way that implies unwarranted authority. These objectives challenge the conventional outlook of system designers.

Human opposition to robots

Robots are agents of change and therefore potentially upsetting to those with vested interests. Of all the machines so far invented or conceived of, robots represent the most direct challenge to humans. Vociferous and even violent campaigns against robotics should not be surprising. Beyond concerns of self-interest is the possibility that some humans could be repulsed by robots, particularly those with humanoid characteristics. Some opponents may be mollified as robotic behavior becomes more tactful. Another tenable argument is that by creating and deploying artifacts that are in some ways superior, humans degrade themselves.

System designers must anticipate a variety of negative reactions against their creations from different groups of stakeholders. Much will depend on the number and power of the people who feel threatened - and on the scope of the change they anticipate. If, as Asimov speculates, 9 a robot-based economy develops without equitable adjustments, the backlash could be considerable.

Such a rejection could involve powerful institutions as well as individuals. In one Asimov story, the US Department of Defense suppresses a project intended to produce the perfect robot-soldier. It reasons that the degree of discretion and autonomy needed for battlefield performance would tend to make robots rebellious in other circumstances (particularly during peacetime) and unprepared to suffer their commanders' foolish decisions. 19 At a more basic level, product lines and markets might be threatened, and hence the profits and even the survival of corporations. Although even very powerful cartels might not be able to impede robotics for very long, its development could nevertheless be delayed or altered. Information technologists need to recognize the negative perceptions of various stakeholders and manage both system design and project politics accordingly.

The structuredness of decision making

For five decades there has been little doubt that computers hold significant computational advantages over


humans. However, the merits of machine decision making remain in dispute. Some decision processes are highly structured and can be resolved using known algorithms operating on defined data items with defined interrelationships. Most structured decisions are candidates for automation, subject, of course, to economic constraints. The advantages of machines must also be balanced against risks. The choice to automate must be made carefully because the automated decision process (algorithm, problem description, problem-domain description, or analysis of empirical data) may later prove to be inappropriate for a particular type of decision. Also, humans involved as data providers, data communicators, or decision implementers may not perform rationally because of poor training, poor performance under pressure, or willfulness.

Unstructured decision making remains the preserve of humans for one or more of the following reasons:

humans have not yet worked out a suitable way to program (or teach) a machine how to make that class of decision;

some relevant data cannot be communicated to the machine;

"fuzzy" or "open-textured" concepts or constructs are involved; or

such decisions involve judgments that system participants feel should not be made by machines on behalf of humans.

One important type of unstructured decision is problem diagnosis. As Asimov described the problem, "How ... can we send a robot to find a flaw in a mechanism when we cannot possibly give precise orders, since we know nothing about the flaw ourselves? `Find out what's wrong' is not an order you can give to a robot; only to a man." 20 Knowledge-based technology has since been applied to problem diagnosis, but Asimov's insight retains its validity: A problem may be linguistic rather than technical, requiring common sense, not domain knowledge. Elsewhere, Asimov calls robots "logical but not reasonable" and tells of household robots removing important evidence from a murder scene because a human did not think to order them to preserve it. 9

The literature of decision support systems recognizes an intermediate case, semistructured decision making. Humans are assigned the decision task, and systems are designed to provide support for gathering and structuring potentially relevant data and for modeling and experimenting with alternative strategies. Through continual progress in science and technology, previously unstructured decisions are reduced to semistructured or structured decisions. The choice of which decisions to automate is therefore provisional, pending further advances in the relevant area of knowledge. Conversely, because of environmental or cultural change, structured decisions may not remain so. For example, a family of viruses might mutate so rapidly that the reference data within diagnostic support systems is outstripped and even the logic becomes dangerously inadequate.

Delegating to a machine any kind of decision that is less than fully structured invites errors and mishaps. Of course, human decision-makers routinely make mistakes too. One reason for humans' retaining responsibility for unstructured decision making is rational: Appropriately educated and trained humans may make more right decisions and/or fewer seriously wrong decisions than a machine. Using common sense, humans can recognize when conventional approaches and criteria do not apply, and they can introduce conscious value judgments. Perhaps a more important reason is the arational preference of humans to submit to the judgments of their peers rather than of machines: If someone is going to make a mistake costly to me, better for it to be an understandably incompetent human like myself than a mysteriously incompetent machine. 8

Because robot and human capabilities differ, for the foreseeable future at least, each will have specific


comparative advantages. Information technologists must delineate the relationship between robots and people by applying the concept of decision structuredness to blend computer-based and human elements advantageously. The goal should be to achieve complementary intelligence rather than to continue pursuing the chimera of unneeded artificial intelligence. As Wyndham put it in 1932: "Surely man and machine are natural complements: They assist one another." 21

Risk management

Whether or not subjected to intrinsic laws or design guidelines, robotics embodies risks to property as well as to humans. These risks must be managed; appropriate forms of risk avoidance and diminution need to be applied, and regimes for fallback, recovery, and retribution must be established.

Controls are needed to ensure that intrinsic laws, if any, are operational at all times and that guidelines for design, development, testing, use, and maintenance are applied. Second-order control mechanisms are needed to audit first-order control mechanisms. Furthermore, those bearing legal responsibility for harm arising from the use of robotics must be clearly identified. Courtroom litigation may determine the actual amount of liability, but assigning legal responsibilities in advance will ensure that participants take due care.

In most of Asimov's robot stories, robots are owned by the manufacturer even while in the possession of individual humans or corporations. Hence legal responsibility for harm arising from robot noncompliance with the laws can be assigned with relative ease. In most real-world jurisdictions, however, there are enormous uncertainties, substantial gaps in protective coverage, high costs, and long delays.

Each jurisdiction, consistent with its own product liability philosophy, needs to determine who should bear the various risks. The law must be sufficiently clear so that debilitating legal battles do not leave injured parties without recourse or sap the industry of its energy. Information technologists need to communicate to legislators the importance of revising and extending the laws that assign liability for harm arising from the use of information technology.

Enhancements to codes of ethics

Associations of information technology professionals, such as the IEEE Computer Society, the Association for Computing Machinery, the British Computer Society, and the Australian Computer Society, are concerned with professional standards, and these standards almost always include a code of ethics. Such codes aren't intended so much to establish standards as to express standards that already exist informally. Nonetheless, they provide guidance concerning how professionals should perform their work, and there is significant literature in the area.

The issues raised in this article suggest that existing codes of ethics need to be reexamined in the light of developing technology. Codes generally fail to reflect the potential effects of computer-enhanced machines and the inadequacy of existing managerial, institutional, and legal processes for coping with inherent risks. Information technology professionals need to stimulate and inform debate on the issues. Along with robotics, many other technologies deserve consideration. Such an endeavor would mean reassessing professionalism in the light of fundamental works on ethical aspects of technology.


Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disproves the contention that he began with: It is not possible to reliably constrain the behavior of robots by devising and applying a set of rules.

The freedom of fiction enabled Asimov to project the laws into many future scenarios; in so doing, he uncovered issues that will probably arise someday in real-world situations. Many aspects of the laws discussed in this article are likely to be weaknesses in any robotic code of conduct. Contemporary applications of information technology such as CAD/CAM, EFT/POS, warehousing systems, and traffic control are already exhibiting robotic characteristics. The difficulties identified are therefore directly and immediately relevant to information technology professionals.

Increased complexity means new sources of risk, since each activity depends directly on the effective interaction of many artifacts. Complex systems are prone to component failures and malfunctions, and to intermodule inconsistencies and misunderstandings. Thus, new forms of backup, problem diagnosis, interim operation, and recovery are needed. Tolerance and flexibility in design must replace the primacy of short-term objectives such as programming productivity. If information technologists do not respond to the challenges posed by robotic systems, as investigated in Asimov's stories, information technology artifacts will be poorly suited for real-world applications. They may be used in ways not intended by their designers, or simply be rejected as incompatible with the individuals and organizations they were meant to serve.

Isaac Asimov, 1920-1992

Born near Smolensk in Russia, Isaac Asimov came to the United States with his parents three years later. He grew up in Brooklyn, becoming a US citizen at the age of eight. He earned bachelor's, master's, and doctoral degrees in chemistry from Columbia University and qualified as an instructor in biochemistry at Boston University School of Medicine, where he taught for many years and performed research on nucleic acids.

As a child, Asimov had begun reading the science fiction stories on the racks in his family's candy store, and those early years of vicarious visits to strange worlds had filled him with an undying desire to write his own adventure tales. He sold his first short story in 1938 and, after wartime service as a chemist and a short hitch in the Army, he focused increasingly on his writing.

Asimov was among the most prolific of authors, publishing hundreds of books on various subjects and dozens of short stories. His Laws of Robotics underlie four of his full-length novels as well as many of his short stories. The World Science Fiction Convention bestowed Hugo Awards on Asimov in nearly every category of science fiction, and his short story "Nightfall" is often referred to as the best science fiction story ever written. The scientific authority behind his writing gave his stories a feeling of authenticity, and his work undoubtedly did much to popularize science for the reading public.

References to Part 1

1. I. Asimov, The Rest of the Robots (a collection of short stories originally published between 1941 and 1957), Grafton Books, London, 1968.

2. N. Frude, The Robot Heritage, Century Publishing, London, 1984.


3. I. Asimov, I, Robot (a collection of short stories originally published between 1940 and 1950), Grafton Books, London, 1968.

4. I. Asimov, P.S. Warrick, and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart and Winston, London, 1983.

5. I. Asimov, "Runaround" (originally published in 1942), reprinted in Reference 3, pp. 33-51.

6. L. Del Rey, "Though Dreamers Die" (originally published in 1944), reprinted in Reference 4, pp. 153-174.

7. I. Asimov, "Evidence" (originally published in 1946), reprinted in Reference 3, pp. 159-182.

8. H.M. Geduld and R. Gottesman, eds., Robots, Robots, Robots, New York Graphic Soc., Boston, 1978.

9. P.B. Scott, The Robotics Revolution: The Complete Guide, Blackwell, Oxford, 1984.

10. I. Asimov, Robot Dreams (a collection of short stories originally published between 1947 and 1986), Victor Gollancz, London, 1989.

11. A. Chandor, ed., The Penguin Dictionary of Computers, 3rd ed., Penguin, London, 1985.

12. I. Asimov, "The Bicentennial Man" (originally published in 1976), reprinted in Reference 4, pp. 519-561. Expanded into I. Asimov and R. Silverberg, The Positronic Man, Victor Gollancz, London, 1992.

13. A.C. Clarke and S. Kubrick, 2001: A Space Odyssey, Grafton Books, London, 1968.

14. I. Asimov, Robots and Empire, Grafton Books, London, 1985.

15. I. Asimov, "Risk" (originally published in 1955), reprinted in Reference 1, pp. 122-155.

16. I. Asimov, The Robots of Dawn, Grafton Books, London, 1983.

17. I. Asimov, "Liar!" (originally published in 1941), reprinted in Reference 3, pp. 92-109.

18. I. Asimov, "That Thou Art Mindful of Him" (originally published in 1974), reprinted in The Bicentennial Man, Panther Books, London, 1978, pp. 79-107.

19. I. Asimov, The Caves of Steel (originally published in 1954), Grafton Books, London, 1958.

20. T. Winograd and F. Flores, Understanding Computers and Cognition, Ablex, Norwood, N.J., 1986.

21. I. Asimov, "Robbie" (originally published as "Strange Playfellow" in 1940), reprinted in Reference 3, pp. 13-32.

22. I. Asimov, The Naked Sun (originally published in 1957), Grafton Books, London, 1960.

23. I. Asimov, "The Evitable Conflict" (originally published in 1950), reprinted in Reference 3, pp. 183-206.

24. I. Asimov, "The Tercentenary Incident" (originally published in 1976), reprinted in The Bicentennial


Roger Clarke's <strong>'Asimov's</strong> <strong>Laws</strong> <strong>of</strong> <strong><strong>Robotics</strong>'</strong><strong>2013</strong>-09-03 10:33 AMMan, Pan<strong>the</strong>r Books, London, 1978, pp. 229- 247.25. I. Asimov, "Little Lost Robot" (originally published in 1947). reprinted in Reference 3, pp. 110- 136.26. I. Asimov, "Robot Dreams," first published in Reference 10, pp. 51- 58.27. I. Asimov, "The Machine That Won <strong>the</strong> War" (originally published in 1961), reprinted in Reference 10.pp. 191- 197.28. D. Bellin and G. Chapman. eds., Computers in Battle: Will They Work? Harcourt Brace Jovanovich,Boston, 1987.29. I. Asimov, "Reason" (originally published in 1941), reprinted in Reference 3, pp. 52- 70.References to Part 21. I. Asimov, "The Evitable Conflict" (originally' published in 1950), reprinted in I. Asimov, I Robot, GraftonBooks. London. 1968. pp. l83- 206.2. I. Asimov, Robots and Empire, Grafton Books. London. 1985.3. I. Asimov, "The Bicentennial Man" (originally published in 1976). reprinted in I. Asimov, P.S. Warrick,and M.H. Greenberg, eds., Machines That Think. Holt. Rinehart, and Wilson, 1983, pp 519- 561.4. I. Asimov, The Robots <strong>of</strong> Dawn, Grafton Books. London, 1983.5. I. Asimov, "Jokester" (originally' published in 1956), reprinted in 1. Asimov, Robot Dreams, VictorGollancz, London. 1989 pp 278- ~94.6. D. Adams. The Hitchhikers Guide to <strong>the</strong> Galaxy, Harmony Books. New York. 1979.7. A.C. Clarke. Rendezvous with Rama, Victor Gollancz, London. 1973.8. J. Weizenbaum. Computer Power and Human Reason, W.H. Freeman. San Francisco, 1976.9. I. Asimov, The Naked Sun, (originally' published in 1957). Grafton Books. London. 1960.10. I. Asimov, "Lenny" (originally published in 1958), reprinted in I. Asimov. The Rest <strong>of</strong> <strong>the</strong> Robots.Grafton Books, London. 1968, pp. 158- 177.11. H. Harrison. "War With <strong>the</strong> Robots" (originally published in 1962), reprinted in I, Asimov, P.S. Warrick,and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart, and Wilson, 1983, pp.357- 379.12. I. Asimov, "Robbie" (originally published as "Strange Playfellow" in 1940). reprinted in I. Asimov, I,Robot. Grafton Books. London, 1968, pp. 13- 32.13. A.E. Van Vogt, "Fulfillment" (originally published in 1951). reprinted in I. Asimov, P.S. Warrick, andM.H. Greenberg. eds., Machines That Think, Holt, Rinehart, and Wilson, 1983, pp.175- 205.http://www.rogerclarke.com/SOS/Asimov.htmlPage 28 <strong>of</strong> 30


14. I. Asimov, "Feminine Intuition" (originally published in 1969), reprinted in I. Asimov, The Bicentennial Man, Panther Books, London, 1978, pp. 15-41.
15. R.A. Clarke, "Economic, Legal, and Social Implications of Information Technology," MIS Quarterly, Vol. 17, No. 4, Dec. 1988, pp. 517-519.
16. I. Asimov, "Satisfaction Guaranteed" (originally published in 1951), reprinted in I. Asimov, The Rest of the Robots, Grafton Books, London, 1968, pp. 102-120.
17. J. Weizenbaum, "Eliza," Comm. ACM, Vol. 9, No. 1, Jan. 1966, pp. 36-45.
18. S. Turkle, The Second Self: Computers and the Human Spirit, Simon & Schuster, New York, 1984.
19. A. Budrys, "First to Serve" (originally published in 1954), reprinted in I. Asimov, M.H. Greenberg, and C.G. Waugh, eds., Robots, Signet, New York, 1989, pp. 227-244.
20. I. Asimov, "Risk" (originally published in 1955), reprinted in I. Asimov, The Rest of the Robots, Grafton Books, London, 1968, pp. 122-155.
21. J. Wyndham, "The Lost Machine" (originally published in 1932), reprinted in A. Wells, ed., The Best of John Wyndham, Sphere Books, London, 1973, pp. 13-36, and in I. Asimov, P.S. Warrick, and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart and Winston, 1983, pp. 29-49.

Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., a Visiting Professor in the E-Commerce Programme at the University of Hong Kong, and a Visiting Professor in the Department of Computer Science at the Australian National University.


Created: 16 May 1997 - Last Amended: 16 May 1997 by Roger Clarke - Site Last Verified: 15 February 2009
This document is at www.rogerclarke.com/SOS/Asimov.html
© Xamax Consultancy Pty Ltd, 1995-2013
