CS Mar-Apr 2024
artificial intelligence
"In an intriguing plot twist, much like many other contemporary writings, this article was created with assistance from AI. And, in alignment with its insights, one must ponder the potential enhancements AI brings to human capabilities. Are we on the cusp of a new era of coexistence or are we approaching a risky precedent?

"The 'black box' nature of AI makes its decision-making process often inscrutable, even to its creators. As AI systems grow in complexity, the chances of unintended biases and errors increase. This uncertainty raises crucial questions about responsibility and control in AI-driven decisions. Harnessing AI's power responsibly is imperative. We must not let AI run without oversight. The future of AI should be a collaborative journey, with humanity at the helm, guiding it with wisdom and foresight. Only time will tell if we are now already, in a sense, AI-enhanced.

"As we reflect on AI's role in our lives and its creation of this article, we are reminded of the need for thoughtful, proactive measures in AI governance. Implementing robust, ethical frameworks akin to the 'Three Laws' envisioned in 'I, Robot' is no longer a futuristic concept, but a present-day necessity."
That's some collaboration, certainly. But does it leave you more reassured about the technology, or simply more queasy?
HIGHLY TRAINED ASSAILANTS
As of January 2024, the UK National Cyber Security Centre (NCSC) has warned that AI tools will increase the volume and impact of cyberattacks, including ransomware, in the next two years, allowing unskilled threat actors to conduct more sophisticated attacks.
Jovana Macakanja, intelligence analyst with Cyjax, points out that threat actors are already using AI tools based on ChatGPT, which itself has had a profound influence on modern society and is entering common parlance. "In mid-July 2023, the generative AI cybercrime tool WormGPT was advertised on underground forums as a tool for launching phishing and business email compromise (BEC) attacks," she says. "Allegedly trained on several undisclosed data sources concentrating on malware-related data, it can produce phishing emails which are persuasive and sophisticated."
People have always been sceptical of AI technology and its effect on humanity, she continues. "These fears often play out in popular fiction as evil robots taking over the world. While that eventuality is far off at present, AI's continued maturation is resulting in people losing jobs, which could gravely impact the economy, and is making it difficult to discern between AI-generated and human-created content. Students use ChatGPT to write assignments, medical tools identify various disorders or cancers, with diagnostic capabilities rivalling those of specialists, and a popular publishing house has used AI to replace a range of editorial roles. AI also poses significant ethical implications, as it lacks real, logical human thinking, and is susceptible to inaccuracies and biases from the data sources it has been fed."
While the technology is still developing and may not yet be out of control itself, Macakanja accepts, the use of it by humans for nefarious ends is already uncontrollable. "Its future technological applications could easily spiral and get out of hand, as machine learning advances. Due to the rapid growth in AI capabilities, legislation surrounding the technology will quickly become outdated and need to be freshly examined."
UNPREDICTABLE AI
The baseline danger around AI springs from the fact that we cannot predict what it will do, says Aleksi Helakari, head of technical office, EMEA, at Spirent. "Traditional tools and software were clearly defined and we could accurately predict outcomes. AI, however, learns and changes autonomously, and a great deal of speculation around the future of
Keiron Holyome, BlackBerry Cybersecurity: naïve to deny that malicious actors are employing AI in increasing efforts to broaden their scope.

John Smith, LiveAction: perhaps a more pressing concern lies not in AI itself, but in the hands wielding it.