Smart Industry 1/2020 - The IoT Business Magazine - powered by Avnet Silica
Smart Business | Title Story: Smart Companies

In AI We Trust

When and How Should AI Explain Its Decisions?

As artificial intelligence (AI) increasingly makes decisions, there are growing concerns around AI decision-making and how it reaches its answers.

By Sam Genway

AI can be complex. Unlike traditional algorithms, AI does not follow a set of predefined rules. Instead, it learns to recognize patterns – such as when a component of a machine will fail or whether a transaction is fraudulent – by building its own rules from training data. Once an AI model is shown to give the right answers, it is set loose in the real world.
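To make that concrete, here is a minimal sketch of the train-then-validate workflow in Python; the dataset, model choice, and split are illustrative assumptions rather than anything specific from the article:

```python
# A minimal sketch of the train-then-validate workflow described above.
# The data, model, and split sizes are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled sensor readings: y = 1 means "component will fail".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The model builds its own decision rules from the training examples...
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# ...and is only "set loose" once it gives the right answers on data it has never seen.
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```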

However, getting the right answer does not necessarily mean it was reached in the right way. An AI model could be successfully trained to recognize, for instance, the difference between wolves and huskies. However, it might later transpire that the AI model really learned to tell the difference based on whether there was snow in the background. This approach will work most of the time, but as soon as it needs to spot a husky anywhere outside of its natural habitat, it will presumably fail. If we rely on AI (or indeed humans) being right for the wrong reasons, it limits where they can work effectively.

"Right now, completely automating decisions is considered a step too far." – Sam Genway, Tessella
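The husky problem can be reproduced in miniature. In the hypothetical sketch below, a synthetic "snow" feature tracks the label more cleanly than the genuine animal features do, and a standard importance check exposes what the model is really using:

```python
# A hedged illustration of the husky-vs-wolf failure mode: the model scores well
# while leaning on a spurious "snow in the background" feature. Everything here
# (feature construction, noise scales) is synthetic and assumed for the sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
label = rng.integers(0, 2, size=n)                            # 0 = husky, 1 = wolf
animal = label[:, None] + rng.normal(scale=3.0, size=(n, 5))  # weak genuine signal
snow = label + rng.normal(scale=0.2, size=n)                  # strong spurious signal
X = np.column_stack([animal, snow])

model = RandomForestClassifier(random_state=0).fit(X, label)

# Permutation importance reveals what the model actually relies on: shuffling the
# last column ("snow") hurts accuracy far more than shuffling any animal feature.
result = permutation_importance(model, X, label, n_repeats=10, random_state=0)
print(result.importances_mean)  # the final (snow) column dominates
```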

We may instinctively feel that any machine decision must be understandable, but that's not necessarily the case. We must distinguish between trust (whether we are confident that our AI gets the right answer) and explainability (how it reached that answer). We always need to have a level of trust demonstrated when using an AI system, but only sometimes do we need to understand how it got there.
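One way to see the two concepts side by side: the toy sketch below, built on assumed synthetic data, demonstrates trust as performance on unseen data and explainability as a readable account of a single decision:

```python
# Sketching the article's distinction under assumed toy data: "trust" is
# demonstrated performance on unseen data; "explainability" is a per-decision
# account. A linear model is used only because its reasoning is directly readable.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Trust: are we confident the model gets the right answer?
print("Held-out accuracy:", model.score(X_te, y_te))

# Explainability: how did it reach this particular answer?
# For a linear model, coefficient * feature value is each feature's contribution.
print("Per-feature contributions:", model.coef_[0] * X_te[0])
```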

Take an AI model that decides whether a machine needs maintenance to avoid a failure. If we can show that the AI is consistently right, we don't even
