A.I. SPECIAL REPORT
Posner: "You are going to mitigate risks of bias if you have more diversity."
older people, or overweight people, you could add weight to photos of such individuals to make up for the shortage in your data set.
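The reweighting idea described above can be sketched in a few lines. This is a hypothetical illustration, not any company's actual method: each photo gets a training weight inversely proportional to how common its group is in the data set, so under-represented groups count as much as common ones. The group names and counts below are invented.

```python
from collections import Counter

def sample_weights(group_labels):
    """Weight each example inversely to its group's frequency,
    so rare groups carry as much total weight as common ones."""
    counts = Counter(group_labels)
    total = len(group_labels)
    n_groups = len(counts)
    # weight = total / (n_groups * count): a perfectly balanced
    # data set would give every example a weight of 1.0
    return [total / (n_groups * counts[g]) for g in group_labels]

# A skewed data set: 8 photos of younger people, 2 of older people
labels = ["young"] * 8 + ["older"] * 2
weights = sample_weights(labels)
# Each "older" photo now counts 4x as much as each "young" photo
# during training, compensating for the shortage.
```

These weights would then be passed to a training loop or loss function that supports per-sample weighting, which most deep-learning frameworks do.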
Other engineers are focusing further "upstream"—making sure that the underlying data used to train algorithms is inclusive and free of bias, before it's even deployed. In image recognition, for example, the millions of images used to train deep-learning systems need to be examined and labeled before they are fed to computers. Radha Basu, the CEO of data-training startup iMerit, whose clients include Getty Images and eBay, explains that the company's staff of over 1,400 worldwide is trained to label photos on behalf of its customers in ways that can mitigate bias.
Basu declined to discuss how that might play out when labeling people, but she offered other analogies. iMerit staff in India may consider a curry dish to be "mild," while the company's staff in New Orleans may describe the same meal as "spicy." iMerit would make sure both terms appear in the label for a photo of that dish, because to label it as only one or the other would be to build an inaccuracy into the data. Assembling a data set about weddings, iMerit would include traditional Western white-dress-and-layer-cake images—but also shots from elaborate, more colorful weddings in India or Africa.
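The labeling policy Basu describes—keep every annotator's term rather than picking a winner—can be sketched as a simple union over annotations. This is an illustrative sketch only; the annotator names and labels are invented, and iMerit's actual tooling is not public.

```python
def merge_annotations(annotations):
    """Union the labels from all annotators, so that no single
    viewpoint overrides the others (e.g. both 'mild' and 'spicy'
    survive for the same curry photo)."""
    merged = set()
    for labels in annotations.values():
        merged.update(labels)
    return sorted(merged)

# Two annotators from different regions label the same photo
curry_photo = {
    "annotator_india": ["curry", "mild"],
    "annotator_new_orleans": ["curry", "spicy"],
}
merge_annotations(curry_photo)  # ['curry', 'mild', 'spicy']
```

The alternative—majority voting or forcing consensus—is exactly what would bake one region's viewpoint into the data as "the" truth.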
iMerit's staff stands out in a different way, Basu notes: It includes people with Ph.D.s, but also less-educated people who struggled with poverty, and 53% of the staff are women. The mix ensures that as many viewpoints as possible are involved in the data-labeling process. "Good ethics does not just involve privacy and security," Basu says. "It's about bias, it's about, Are we missing a viewpoint?"
Tracking down that viewpoint is becoming part of more tech companies' strategic agendas. Google, for example, announced in June that it would open an A.I. research center later this year in Accra, Ghana. "A.I. has great potential to positively impact the world, and more so if the world is well represented in the development of new A.I. technologies," two Google engineers wrote in a blog post.
A.I. insiders also believe they can fight bias by making their workforces in the U.S. more diverse—always a hurdle for Big Tech. Fei-Fei Li, the Google executive, recently cofounded the nonprofit AI4ALL to promote A.I. technologies and education among girls and women and in minority communities. The group's activities include a summer program in which campers visit top university A.I. departments to develop relationships with mentors and role models. The bottom line, says AI4ALL executive director Tess Posner: "You are going to mitigate risks of bias if you have more diversity."
Ralf Herbrich, director of A.I., Amazon: "Age, gender, race, nationality—they are all dimensions that you have to test the sampling biases for over time."
YEARS BEFORE this more diverse generation of A.I. researchers reaches the job market, however, big tech companies will have further imbued their products with deep-learning capabilities. And even as top researchers increasingly recognize the technology's flaws—and acknowledge that they can't predict how those flaws will play out—they argue that the potential benefits, social and financial, justify moving forward.
"I think there's a natural optimism about what technology can do," says Candela, the Facebook executive. Almost any digital tech can be abused, he says, but adds, "I wouldn't want to go back to the technology state we had in the 1950s and say, 'No, let's not deploy these things because they can be used wrong.'"
Horvitz, the Microsoft research chief, says he's confident that groups like his Aether team will help companies solve potential bias problems before they cause trouble in public. "I don't think anybody's rushing to ship things that aren't ready to be used," he says. If anything, he adds, he's more concerned about "the ethical implications of not doing something."

He invokes the possibility that A.I. could reduce preventable medical error in hospitals. "You're telling me you'd be worried that my system [showed] a little bit of bias once in a while?" Horvitz asks. "What are the ethics of not doing X when you could've solved a problem with X and saved many, many lives?"
The watchdogs' response boils down to: Show us your work. More transparency and openness about the data that goes into A.I.'s black-box systems will help researchers spot bias faster and solve problems more quickly. When an opaque algorithm could determine whether a person can get insurance, or whether that person goes to prison, says Buolamwini, the MIT researcher, "it's really important that we are testing these systems rigorously, that there are some levels of transparency."
Indeed, it's a sign of progress that few people still buy the idea that A.I. will be infallible. In the web's early days, notes Tim Hwang, a former Google public policy executive for A.I. who now directs the Harvard-MIT Ethics and Governance of Artificial Intelligence initiative, technology companies could say they are "just a platform that represents the data." Today, "society is no longer willing to accept that."
FORTUNE.COM // JULY.1.18