
Lot's Wife, Edition 6, 2015


SCIENCE & ENGINEERING

fully autonomous. They could easily have autonomous navigation and mobility. With advances in AI, targeting software could even be dynamic: systems could use a set of training data, learn efficiently while deployed, and then make their own decisions. Other targeting technologies include facial and image recognition software. But what of the triggering system? When, if ever, do we allow the machine to make the call?
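To make the idea of learning while deployed concrete, here is a minimal sketch of online learning: a perceptron that updates its weights one observation at a time. The feature values and labels are invented for illustration; this is a generic textbook technique, not code from any actual weapons system.

```python
# Minimal online-learning sketch: a perceptron that refines its
# weights from a stream of observations, one at a time.
# All numbers below are made up for illustration.

def perceptron_update(weights, bias, features, label, lr=0.1):
    """One online step: predict, then correct the weights on a mistake."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 if score > 0 else -1
    if prediction != label:  # learn only from errors
        weights = [w + lr * label * x for w, x in zip(weights, features)]
        bias += lr * label
    return weights, bias

weights, bias = [0.0, 0.0], 0.0
stream = [([1.0, 0.2], 1), ([0.1, 0.9], -1), ([0.8, 0.4], 1)]
for features, label in stream:  # observations keep arriving while "deployed"
    weights, bias = perceptron_update(weights, bias, features, label)
print(weights, bias)
```

The point is simply that such a system keeps changing after it leaves the lab, which is exactly what makes its decisions hard to certify in advance.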

Take facial recognition software, for instance. Facebook's experimental DeepFace algorithm can identify human faces with 97.35% accuracy regardless of lighting or position, almost matching human performance, which scores around 97.5% on the same benchmark. While that accuracy may seem high, we have to wonder whether we can ever entrust a machine with deciding whether someone should live or die when there is a 2.65% chance it will make the wrong call.
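The 2.65% figure is simply the complement of DeepFace's reported accuracy, and a back-of-the-envelope calculation shows how quickly it compounds at scale. The volume of identifications below is an invented number, used purely for illustration.

```python
# Error-rate arithmetic behind the 2.65% figure.
# The number of identifications is invented for illustration.
accuracy = 0.9735
error_rate = 1 - accuracy                  # 0.0265, i.e. 2.65%
identifications = 10_000
expected_errors = error_rate * identifications
print(f"Error rate: {error_rate:.2%}")     # -> 2.65%
print(f"Expected wrong calls in {identifications:,} IDs: {expected_errors:.0f}")
```

At that rate, roughly 265 of every 10,000 identifications would be wrong: a "small" error rate becomes a count of lives.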

Similar questions are currently being raised about the U.S. drone program. Created under Bush in 2001 in the name of eliminating terrorism, the program was dramatically expanded by Obama. As forerunners of fully autonomous weapons, unmanned aerial vehicles (UAVs) have the advantage of replacing soldiers, risking fewer U.S. lives on the battlefield. However, the program is widely criticised not only for its lack of transparency but also for its high number of civilian casualties. Since 2004, drones have killed approximately 4,000 people in Pakistan alone, around 1,000 of them civilians, including children.

Drones reveal several other flaws in our use of autonomous weapons. Nobody knows how accurate the intelligence, the data, or the machines themselves are. Nobody knows whether the machine can then make the right call. Nobody knows whom the machine might kill. And if and when that happens, nobody will know whom to blame unless meaningful levels of human intervention and control are defined. It might be the programmers who select the targeting criteria, the operators responsible for the machine, or the one who signs off on the order. Whether it's the judge, the jury, or the executioner, we have to ask ourselves: whom will we hold personally accountable for the taking of a human life?

The use of UAVs and other autonomous weapons leads us to question how they change the reality of warfare. While their use removes humans from the battlefield, it also removes us from the atrocities of war, allowing us to distance, excuse and desensitise ourselves from what should be recognised as a human tragedy. Additionally, some worry that machine-initiated attacks will lower our threshold for going to war.

"Personally, I believe warfare needs to stay horrific and<br />

brutal. We need it to be so to ensure we only fight wars<br />

as a last resort," says Toby Walsh, Professor of Artificial<br />

Intelligence at UNSW and NICTA, and fellow signatory to the<br />

letter. "Politicians have to see body bags coming home and<br />

be prepared to justify why they risk the lives of our sons and<br />

daughters."<br />

Without regulating these weapons, we face dire consequences, perhaps even the 'end of the human race' that Stephen Hawking warns of. We must maintain meaningful control. Experts are exploring the possibility that the weapons might be taught to differentiate between combatants and civilians. Lawmakers question whether they can abide by international humanitarian and human rights law, and whether they can function ethically.

Consequently, the possibility that this technology might be upon us within the coming decades necessitates such a letter and the consideration of pre-emptive policies. The Campaign to Stop Killer Robots is one such group calling for prohibition. These issues were also discussed during a Human Rights Council session in April 2013, which considered questions such as: "in what situations are distinctively human traits, such as fear, hate, sense of honour and dignity, compassion and love desirable in combat?" and "in what situations do machines lacking emotions offer distinct advantages over human combatants?"

While many countries raised concerns, a ban was opposed by the UK. "Is a ban on technology which has yet to be fully developed to maturity an appropriate course of action? I suggest not," said Dr William Boothby in a separate interview. A retired RAF air commodore and lawyer, Boothby was responsible for ensuring that newly acquired weapons conformed to the UK's international humanitarian law obligations.

Aside from weaponry, however, AI has many other applications and, moreover, benefits. While fully autonomous weapons rely on AI technologies, the two are not one and the same. In the simplest terms, an autonomous weapon is able to make a decision on its own. AI, on the other hand, is far more dynamic, able to emulate more complex human cognitive processes such as reasoning, problem solving and planning. And while a mathematical model from the University of Wisconsin-Madison suggests that computers will be incapable of human consciousness and emotion, Hawking warns that "humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

As of July, a Nao robot created by French company Aldebaran Robotics passed a self-awareness test. Given a riddle, the robot had to show understanding of the question, recognise its own voice as distinct from those of the other robots being tested, and then link this back to the original question to demonstrate self-awareness. Additionally, in June 2014, the computer programme "Eugene Goostman" passed the Turing test, albeit somewhat unconvincingly. The Turing test assesses a machine's capacity to mimic human thought and speech in conversation. It should be noted, however, that each of these tests is inherently flawed: a robot may be specifically programmed to emulate both of these behaviours.
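To see why, consider the deliberately trivial sketch below (entirely hypothetical, with no relation to the actual Nao or Goostman software): an agent that "passes" a narrow conversational probe with nothing more than canned answers.

```python
# A scripted agent that can fool a narrow conversational test.
# Entirely hypothetical; no relation to the Nao or Goostman systems.
CANNED = {
    "are you self-aware": "Of course; I can recognise my own voice.",
    "what is your name": "I'm Eugene.",
}

def scripted_agent(question: str) -> str:
    # No understanding involved: a lookup table plus a stock deflection.
    key = question.lower().strip().rstrip("?!. ")
    return CANNED.get(key, "Interesting question! What do you think?")

print(scripted_agent("Are you self-aware?"))
print(scripted_agent("What did you have for breakfast?"))
```

A judge who only asks questions the lookup table anticipates will be fooled every time, which is precisely the objection to both tests.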

Robots may also be used to automate industrial processes, increase productivity and perform tasks too dangerous or impossible for people. AI technology from Honda's humanoid robot ASIMO, for example, may be used in disaster response. Yet despite the increasing intelligence and application of robotics and AI, any responsible programmer will ensure their machine has safeguards. ASIMO is programmed to shut down should it attempt to build itself. Asimov's zeroth law of robotics stipulates that "a robot may not harm humanity, or, by inaction, allow humanity to come to harm." So while robots may replace us and take our jobs, there's no need to worry about them taking our lives.
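A safeguard of the kind ASIMO is said to carry can be as simple as a guard clause between the decision layer and the actuators. The sketch below is a toy illustration of that fail-closed pattern; the action names are invented, and it is in no way Honda's actual control code.

```python
# Toy "fail-closed" safeguard: refuse and shut down on any action
# from a forbidden class. Action names are invented for illustration.
import sys

FORBIDDEN = {"self_replicate", "modify_own_code", "harm_human"}

def execute(action: str) -> None:
    if action in FORBIDDEN:
        print(f"Safeguard triggered by '{action}': shutting down.")
        sys.exit(1)                     # halt rather than proceed
    print(f"Executing: {action}")

execute("walk_forward")
execute("self_replicate")               # trips the safeguard
```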

That is, unless they learn how to reprogram and control themselves.
