Smart Industry 1/2020 – The IoT Business Magazine – powered by Avnet Silica

Says Who?

As AI turns to solving problems further from human experience, the utility of explanations will surely be called into question. Concerns about the explainability of these decisions are bound to grow.


…need to know what features in the data it used to reach that decision. Of course, not all decisions will be correct, and that holds whether it's a human or a machine making the decision. If AI gets 80% of calls on machine maintenance right, compared to 60% for human judgement, then it's likely a benefit worth having, even if the decision-making isn't perfect, or fully understood.

On the other hand, there are many situations where we do need to know how the decision was made. There may be legal or business requirements to explain why a decision was taken, such as why a loan was rejected. Banks need to be able to see which specific features in their data, or which combination of features, led to the final decision, for instance to grant or refuse a loan.
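By way of illustration only (this is a minimal sketch, not any bank's actual system), per-applicant feature contributions can be read off a simple linear scoring model; the feature names and figures below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan data; the columns are illustrative, not from the article.
feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]
X = np.array([
    [55_000, 0.25, 6, 0],
    [32_000, 0.60, 1, 3],
    [78_000, 0.10, 10, 0],
    [41_000, 0.45, 2, 2],
    [29_000, 0.70, 1, 4],
    [64_000, 0.30, 8, 1],
])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = loan granted, 0 = rejected

# Standardize so the coefficient magnitudes are roughly comparable.
mean, std = X.mean(axis=0), X.std(axis=0)
model = LogisticRegression().fit((X - mean) / std, y)

# Contribution of each feature to one applicant's decision score.
applicant = (np.array([30_000, 0.65, 1, 3]) - mean) / std
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:15s} {c:+.2f}")
# The most negative contributions point to the features that pushed
# this particular application towards rejection.
```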

How Do We Know When AI Decisions Are Right?

In other cases, it is important to know why the decision is the right one; we wouldn't want a cancer diagnosis tool to have the same flawed reasoning as the husky AI. Medicine in particular presents ethical gray areas. Let's imagine an AI model is shown to recommend the right life-saving medical treatment more often than doctors do. Should we go with the AI even if we don't understand how it reached the decision? Right now, completely automating decisions like this is considered a step too far.

And explainability is not just about how AI reaches the right answer. There may be times when we know an AI model is wrong, for example if it develops a bias against women, without knowing why. Explaining how the AI system has exploited inherent biases in the data could give us the understanding we need to improve the model and remove the bias, rather than throwing the whole thing out.
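As a minimal sketch of a first step towards spotting such a bias (not a method described in the article), one can simply compare a model's favourable-decision rate across groups; the data and the 80% rule of thumb below are assumptions for illustration.

```python
import numpy as np

# Hypothetical model outputs: 1 = favourable decision, 0 = unfavourable.
decisions = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["m", "m", "m", "m", "m", "m",
                   "f", "f", "f", "f", "f", "f"])

# Favourable-decision rate per group.
rates = {g: decisions[groups == g].mean() for g in ("f", "m")}
print(rates)  # e.g. roughly {'f': 0.33, 'm': 0.67}

# An assumed rule of thumb: flag for review if one group's rate falls
# below 80% of another's (the so-called four-fifths rule).
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: decision rates differ markedly across groups;"
          " investigate which features drive the gap.")
```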

As with anything in AI, there are few easy answers, but asking how explainable you need your AI to be is a good starting point.

If complete model transparency is vital, then a white-box (as opposed to a black-box) approach is important. Transparent models which follow simple sets of rules allow us to explain which factors were used to make any decision, and how they were used.
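For illustration, a minimal sketch of what such a white-box model can look like: a shallow decision tree whose learned rules can be printed and audited line by line. The maintenance-style features and data are hypothetical.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical machine data: [vibration, temperature, hours_since_service]
X = [
    [0.2, 60, 100],
    [0.8, 85, 900],
    [0.3, 65, 200],
    [0.9, 90, 1200],
    [0.4, 70, 300],
    [0.7, 88, 1000],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = needs maintenance

# A shallow tree keeps the rule set small enough to read end to end.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["vibration", "temperature",
                                       "hours_since_service"]))
```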

But there are trade-offs: limiting AI to simple rules limits the complexity it can capture, and with it the ability to solve hard problems, such as beating world champions at complex games. Where complexity brings greater accuracy, there is a balance to be struck between getting the best possible result and understanding that result.

A compromise may be the ability to get some understanding of particular decisions, without needing to understand how the AI model functions in its entirety. For example, users of an AI model which classifies animals in a zoo may want to drill down into how a tiger is classified. This can tell them the information that it uses to say what is a tiger (perhaps the stripes, face, etc.), but not how it classifies other animals, or how it works generally. This allows you to use a complex AI model, but focus down into local models that drive specific outputs where needed.
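By way of illustration, a minimal sketch of that local-explanation idea (my assumption of one common approach, not the article's prescription): fit a simple, distance-weighted linear surrogate around one prediction of a complex model, so that only the behaviour near that instance needs explaining. The black-box model and data here are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical "complex" model trained on made-up data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(instance, n_samples=1000, scale=0.3):
    """Fit a weighted linear surrogate around one instance (a LIME-style idea)."""
    # Perturb the instance and ask the black box for its predictions.
    samples = instance + rng.normal(scale=scale, size=(n_samples, len(instance)))
    preds = black_box.predict_proba(samples)[:, 1]
    # Weight samples by how close they are to the instance being explained.
    weights = np.exp(-np.linalg.norm(samples - instance, axis=1) ** 2)
    surrogate = LinearRegression().fit(samples, preds, sample_weight=weights)
    return surrogate.coef_  # local importance of each feature

print(explain_locally(X[0]))
# Large coefficients show which features drive this particular prediction,
# without claiming to explain the model's behaviour everywhere.
```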

As AI turns to increasingly challenging problems further from human experience, there will still have to be human experts who can help qualify the explanations.

Who Should AI Be Explainable To?

There is also the question of "explainable to whom?" Explanations about an animal classifier can be understood by anyone: most people could appreciate that if a husky is being classified as a husky because there is snow in the background, the AI is right for the wrong reasons. But an AI which classifies, say, cancerous tissue would need to be assessed by an expert pathologist. For many AI challenges, such as automating human processes, there will have to be human experts who can help qualify the explanations.

However, as AI turns to increasingly challenging problems further from human experience, the utility of explanations will surely come into question. In the early days of mainstream AI, many were satisfied with a black box which gave answers. As AI is used more and more for applications where decisions need to be explainable, the ability to look under the hood of the AI model and understand how those decisions are reached will become more important.

There is no single definition of explainability: it can be provided at many different levels depending on need and problem complexity. Organizations need to consider issues such as ethics, regulations, and customer demand alongside the need for optimization – in relation to the business problem they are trying to solve – before deciding whether and how their AI decisions should be explainable. Only then can they make informed decisions about the role of explainability when developing their AI systems.

