DM OPINION: AI<br />
Safety warnings highlight the<br />
importance of AI ethics<br />
While we should undoubtedly proceed with care and caution, underpinning AI<br />
deployment with good data allows organisations to balance regulatory and moral<br />
risks, argues Yohan Lobo, Industry Solutions Manager, Financial Services at M-Files<br />
AI safety and security have been<br />
hotly discussed topics in recent<br />
times: numerous high-profile<br />
figures expressed concern at the rate of<br />
global AI development at the UK's AI<br />
Safety Summit, held at Bletchley Park.<br />
Even King Charles weighed in on the<br />
subject when virtually addressing the<br />
summit's attendees, stating: "There is a<br />
clear imperative to ensure that this<br />
rapidly evolving technology remains<br />
safe and secure." Additionally, in his first<br />
King's Speech, he set out the UK<br />
government's legislative agenda for the<br />
coming session of parliament. King<br />
Charles explained the government's<br />
intention to establish "new legal<br />
frameworks to support the safe<br />
commercial development" of<br />
revolutionary technologies such as AI.<br />
At M-Files we believe that avoiding the<br />
pitfalls brought to our attention at the<br />
summit and in the King's Speech hinges<br />
on organisations leveraging AI solutions<br />
that are built on a foundation of<br />
high-quality data.<br />
Mass adoption of AI presents one of<br />
the most significant opportunities in<br />
corporate history, and businesses will<br />
do their utmost to capitalise on it: the<br />
technology can deliver dramatic gains<br />
in efficiency and allow organisations<br />
to scale at speed.<br />
However, concerns rightfully raised at<br />
the UK's Global AI Safety Summit and<br />
reinforced in the King's Speech<br />
demonstrate the importance of<br />
developing AI ethically and ensuring<br />
that organisations looking to take<br />
advantage of AI solutions consider how<br />
they can best protect their customers.<br />
Data quality lies at the heart of the<br />
global AI conundrum: if organisations<br />
intend to start deploying Generative AI<br />
(GenAI) on a wider scale, it's vital that<br />
they understand how Large Language<br />
Models (LLMs) operate and whether<br />
the solution they implement is reliable<br />
and accurate.<br />
The key to this understanding is<br />
having control over the location of the<br />
data the LLM gains its knowledge from.<br />
For example, if a GenAI solution is given<br />
free rein to scour the internet for<br />
information, then the suggestions it<br />
provides will be untrustworthy, as you<br />
can't be sure whether the information<br />
comes from a reliable source. Bad data<br />
in always means bad language out.<br />
In contrast, if you only allow a model<br />
to draw from internal company data,<br />
the degree of certainty that any answers<br />
provided can be relied upon is<br />
significantly higher. LLMs grounded<br />
in trusted information can be incredibly<br />
powerful tools and a dependable way of<br />
boosting the efficiency of an<br />
organisation.<br />
The level of human involvement in AI<br />
integration will also play a crucial role<br />
in its safe use. We must continually<br />
treat AI like an intern, even if a solution<br />
has been operating dependably for an<br />
extended period of time. This means<br />
regular audits and considering the<br />
findings of AI as recommendations<br />
rather than instructions.<br />
Ultimately, companies can contribute<br />
to the safe and responsible development<br />
of AI by only deploying GenAI solutions<br />
that they can trust and that they fully<br />
understand. This begins by controlling<br />
the data the technology is based on and<br />
ensuring that a human is involved at<br />
every stage of deployment.<br />
More info: www.m-files.com<br />
18 @<strong>DM</strong>MagAndAwards <strong>Nov</strong>ember/<strong>Dec</strong>ember 2024 www.document-manager.com