A.I. SPECIAL REPORT


[Photo caption] INVISIBLE: Timnit Gebru has studied the ways that A.I. can misread, or ignore, information about minority groups.

WHEN TAY MADE HER DEBUT in March 2016, Microsoft had high hopes for the artificial intelligence–powered “social chatbot.” Like the automated, text-based chat programs that many people had already encountered on e-commerce sites and in customer service conversations, Tay could answer written questions; by doing so on Twitter and other social media, she could engage with the masses.

But rather than simply doling out facts, Tay was engineered to converse in a more sophisticated way—one that had an emotional dimension. She would be able to show a sense of humor, to banter with people like a friend. Her creators had even engineered her to talk like a wisecracking teenage girl. When Twitter users asked Tay who her parents were, she might respond, “Oh a team of scientists in a Microsoft lab. They’re what u would call my parents.” If someone asked her how her day had been, she could quip, “omg totes exhausted.”

Best of all, Tay was supposed to get better at speaking and responding as more people engaged with her. As her promotional material said, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” In low-stakes form, Tay was supposed to exhibit one of the most important features of true A.I.—the ability to get smarter, more effective, and more helpful over time.

But nobody predicted the attack of the trolls. Realizing that Tay would learn and mimic speech from the people she engaged with, malicious pranksters across the web deluged her Twitter feed with racist, homophobic, and otherwise offensive comments. Within hours, Tay began spitting out her own vile lines on Twitter, in full public view. “Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” Tay said, in one tweet that convincingly imitated the defamatory, fake-news spirit of Twitter at its worst. Quiz her about then-president Obama, and she’d compare him to a monkey. Ask her about the Holocaust, and she’d deny it occurred.

In less than a day, Tay’s rhetoric went from family-friendly to foulmouthed; fewer than 24 hours after her debut, Microsoft took her offline and apologized for the public debacle. What was just as striking was that the wrong turn caught Microsoft’s research arm off guard. “When the system went out there, we didn’t plan for how it was going to perform in the open world,” Microsoft’s managing director of research and artificial intelligence, Eric Horvitz, told Fortune in a recent interview.

After Tay’s meltdown, Horvitz immediately asked his senior team working on “natural language processing”—the function central to Tay’s conversations—to figure out what went wrong. The staff quickly determined that basic best practices related to chatbots were overlooked. In programs that were more rudimentary than Tay, there were usually protocols that blacklisted offensive words, but there were no safeguards to limit the type of data Tay would absorb and build on.
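In code, the missing protocol amounts to a gate between what users say and what the bot learns from. The sketch below is a minimal illustration of that idea, assuming a hypothetical Chatbot class and placeholder blacklist terms; it is not Microsoft’s actual implementation, only the kind of rudimentary word-filtering safeguard the article describes.

```python
# A minimal sketch of the safeguard Horvitz's team found missing: screen
# incoming messages before a chatbot absorbs them as training data.
# BLACKLIST contents and the Chatbot class are hypothetical illustrations.

BLACKLIST = {"slur1", "slur2"}  # placeholder tokens standing in for offensive words

def is_safe(message: str) -> bool:
    """Reject messages that contain any blacklisted word."""
    return set(message.lower().split()).isdisjoint(BLACKLIST)

class Chatbot:
    def __init__(self) -> None:
        self.corpus: list[str] = []  # messages the bot learns to imitate

    def absorb(self, message: str) -> None:
        # Tay's flaw, per the article: everything users said went straight
        # into the data she would "absorb and build on." Gating on is_safe()
        # is the blacklist protocol that simpler chatbots applied.
        if is_safe(message):
            self.corpus.append(message)

bot = Chatbot()
bot.absorb("omg totes exhausted")  # kept
bot.absorb("slur1 slur2")          # rejected by the filter
print(len(bot.corpus))             # prints 1
```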

Today, Horvitz contends, he can “love the example” of Tay—a humbling moment that Microsoft could learn from. Microsoft now deploys far more sophisticated social chatbots



