CS Jan-Feb 2024
artificial intelligence

WHY AI IS ON ALL OUR MINDS
WE OFFER MORE REFLECTIONS ON ARTIFICIAL INTELLIGENCE, FOLLOWING ON FROM OUR MAIN FEATURE ON PAGE 18
Tom McVey, senior solutions architect EMEA at Menlo Security, believes that AI can be used in a multitude of ways to detect and mitigate threats, including "some that we haven't even conceived yet, as it's still early days. If we use the example of detecting malicious websites, a product that verifies whether any page was created by a human or by AI will be very powerful. Without this, the internet may become a bit like the Wild West - similar to its early days. Using AI to homologate and structure it again will help us to defend against the types of threats that leverage language models".
He also points to how Security Information and Event Management (SIEM) software is used by security analysts to determine how a breach took place, by collecting logs, messages and events from every piece of technology within an organisation. "That's a huge wealth of data, and an incident response team member only has traditional filtering tools to sort through it - ie, by user name or category. Once they have filtered down to the events that they think are relevant, they've ultimately got to start making human judgement calls on how an attacker got in. It's a case of slowly drilling down like a detective.
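The traditional filtering workflow described above can be sketched in a few lines. This is purely illustrative - the event fields (`user`, `category`, `message`) and sample values are assumptions, not the schema of any real SIEM product:

```python
# Hypothetical sketch of the "traditional filtering" an analyst uses:
# narrow a large pool of SIEM events by user name and/or category,
# then inspect the remainder by hand. Field names are illustrative.

def filter_events(events, user=None, category=None):
    """Return only the events matching the given user and/or category."""
    result = []
    for event in events:
        if user is not None and event.get("user") != user:
            continue
        if category is not None and event.get("category") != category:
            continue
        result.append(event)
    return result

events = [
    {"user": "alice", "category": "login", "message": "Successful login"},
    {"user": "bob", "category": "login", "message": "20 failed logins"},
    {"user": "bob", "category": "file_access", "message": "Read /etc/shadow"},
]

# The analyst filters down to one user's activity and drills in manually.
suspicious = filter_events(events, user="bob")
```

Everything after this point - deciding which of the remaining events actually explain the breach - is the human judgement call McVey refers to.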
"Whilst I don't think that there is any way that AI in its current state could replace this function entirely," states McVey, "it can certainly be used to augment it in a certain way. In theory, the incident response team could give the huge amount of log data to an AI language model and, as long as it was trained with incident response in mind, it should be able to correlate that data and draw out the things that are noteworthy. At the very least, the incident response team member could compare this to what their filtering came up with. It must be said, however, that AI is not always more correct than a human, but it's a cheap and quick way to get a second opinion, which may correlate with what the team member believes is correct."
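The hand-off McVey theorises could look something like the following sketch, which only assembles the model's input from raw events. No particular AI provider or API is assumed, the model call itself is deliberately omitted, and the event data is invented for illustration:

```python
import json

def build_triage_prompt(events, question):
    """Package raw SIEM events into a single prompt for a language model
    instructed with incident response in mind. Any provider's API could
    be substituted for the (omitted) model call."""
    lines = [json.dumps(e, sort_keys=True) for e in events]
    return (
        "You are assisting an incident response team.\n"
        "Correlate the log events below and list anything noteworthy.\n\n"
        + "\n".join(lines)
        + "\n\nQuestion: " + question
    )

events = [
    {"user": "bob", "category": "login", "message": "20 failed logins"},
    {"user": "bob", "category": "file_access", "message": "Read /etc/shadow"},
]
prompt = build_triage_prompt(events, "How did the attacker get in?")
```

The model's answer would then be compared against what the analyst's own filtering turned up - the "second opinion" McVey describes, not a replacement for it.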
THREATS AT LARGE

According to Brad Freeman, director of technology at SenseOn, the biggest risk from AI in 2024 is how LLMs (Large Language Models) will allow highly specific and tailored phishing messages. "These messages will be sent via traditional channels - email, instant messaging and social networks - but increasingly via real-time communications such as voice and potentially via video. This will bring a new edge to social engineering and is likely to convince even the most vigilant targets. The stakes are high, as many whaling attacks generate millions of dollars from stolen corporate funds.
"It makes sense that criminal groups will invest their time to ensure the next generation of attacks will be as convincing as possible," he points out. "Many of us would be persuaded if we got a phone call that sounded like somebody we knew, even if they were making an unusual request. These types of communications will be received by many accounts departments in 2024, requesting or amending payments."
Tom McVey, Menlo Security: AI can be used in a multitude of ways to detect and mitigate threats.

Brad Freeman, SenseOn: the biggest risk from AI is how Large Language Models will allow highly specific and tailored phishing messages.