MBR ISSUE 43
SPECIAL FEATURE: POLITICO CONNECTED
Malta Business Review
One example of that trade-off is clickbait. It may tap into a reader's immediate desire to read the story behind a juicy headline, but in the longer term it can skew the type of information people receive.
This requires governments and businesses to assign accountability, participants noted. Businesses need to keep track of every development step, because decisions made from an AI's conception will dictate its evolution. Governments, in turn, need to make clear that developers will be liable if anything goes wrong.
"Then transparency will follow," said one. "Yes, machine learning is something that can do things you don't expect. But systems aren't just like one person trying to be Godlike, systems are lots of pieces. … It is our responsibility, if we're going to deploy these technologies, to be willing to make the liabilities that we come to."
2. Don't reinvent the wheel; adapt it
AI may be a constantly moving target, but it doesn't require an entirely new policy framework. Governments can start with established national and international standards, such as human rights law and impact assessments, participants said.
"It's not about actually regulating a technology," one participant said, pointing to the U.K.'s strong existing regulations. "There are lots of values already imbued in our law, and we just need to make sure that the AI is compatible with that."
"Internationally, human rights law already provides an ethics framework that allows for trade-offs in difficult situations," a second speaker said. "The question needs to be … how do we translate all the human rights legislation that's been built up over decades into stuff that we can actually, practically use when it comes to machine learning?"
In boardrooms, governments should push executives to think about responsible AI in the same way they're increasingly thinking about climate change and sustainability, another said. Environmental impact assessments, for example, could provide blueprints for algorithmic or technology impact assessments, someone else noted.
3. Power and jobs to the people
AI is consolidating power and wealth into the hands of those with the necessary skills, while automation reduces the workforce needed to run a business, speakers warned.
"We have companies making tens of billions of dollars of profit with very few [employees] — that's never happened before, ever … which actually is another problem," one participant said.
Policymakers should work to rebalance the power in this increasingly fragmented, AI-powered world, the way consumer groups and standards bodies help safeguard market competition, another said.
"Even when you have got good consumer choice, it just doesn't work — it doesn't drive a good product for people, it doesn't mean you get a good deal," the person said. "We have to think, what's the ethical infrastructure we might need that means individuals affected have a bit more grit in the system?"
With fewer blue-collar jobs available, workers will have to get used to the idea of "lifelong learning," instead of relying on one skillset for their entire careers, participants agreed. And the education system will have to be overhauled to support that.
4. Different values, different rules
The definition of responsible AI will depend on a government's or society's values — and those vary around the world, speakers said. That will inevitably lead to some regionalization.
"Human values are deeply, deeply political … when we look at it on the international level, we see just how divided the world is. China, obviously," one participant said.
The U.K. and EU are already "highly aligned" on many of those values, the person added. But as AI spreads into the real world, governments and companies will have to think about what the technology they develop in their countries and sell globally says about their brand, and what that brand should say, others added.
5. From research to the real world
Now the hard work starts: jumping from theoretical discussions to deploying AI responsibly.
It's up to governments and developers to identify those most affected by the changes and prepare sectors that are lagging, such as energy and manufacturing, speakers said. The energy sector, for example, will need 10 to 15 years of planning and data collection to gain public trust, one person said.
Asked what they would like to see over the next 18 months, another participant suggested the creation of international standards that provide a "mark of approval" for companies without being too heavy-handed.
The U.K. is ahead of many others because the conversation around AI ethics is already mainstream, speakers agreed. Now it needs to keep building political and public awareness, understanding of how AI can be used, and consensus on the ethical trade-offs.
"We're all drinking the same Kool-Aid," said one. "The problem is we need others to drink the Kool-Aid. We need the public to come with us." MBR
All rights reserved - Copyright 2018
www.maltabusinessreview.net