ANALYSIS: CYBERSECURITY
MODERN WARFARE
DANIEL HOFMANN, CEO OF HORNETSECURITY, EXAMINES THE 'DIGITAL BATTLEFIELD' OF CYBERSECURITY VERSUS MALICIOUS AI
In the rapidly evolving world of cybersecurity, there's a relentless race in the use of AI between cybersecurity experts and malicious threat actors. This race is driven by the widespread adoption of Generative AI, which presents a double-edged sword for cybersecurity specialists. Security experts use AI within their next-gen solutions to identify and address vulnerabilities in a system's security, detect abnormal activity, alert people to potential threats, and more. On the opposite side, malicious actors are exploiting similar technology to orchestrate increasingly sophisticated cyber-attacks.
THE RISE OF DARK WEB AI VARIANTS
The emergence of Dark Web variants of large language models (LLMs), such as DarkBERT and WormGPT, enables threat actors to use AI for nefarious purposes. These tools give even novice attackers technology that can easily create and automate cyber threats with alarming authenticity. One by-product of this is the ability to reach new global markets, including regions less accustomed to traditional cyber threats, as LLMs can instantly translate and automate large-scale phishing scams.
The misuse of AI extends beyond the manipulation of LLMs. The escalating sophistication of deep-fakes poses a significant concern, particularly within the realm of biometric-based Multi-Factor Authentication (MFA). This authentication method is gaining traction as businesses intensify their efforts to protect their systems against unauthorised access. Threat actors create spoofing attacks that replicate or imitate an individual's biometric details, whether a fingerprint, a 3D facial mask, or a voice - a technique recently used in a deep-fake attack to scam an undisclosed Hong Kong-based business out of $25 million.
PUBLIC AWARENESS
It's not just businesses that are facing these attacks; consumers are at risk too. Cybercriminals are increasingly using "MFA bypass kits" to exploit the growing adoption of MFA. These kits, such as EvilProxy and the W3LL panel (a private phishing kit), create deceptive log-in pages that capture a user's credentials and MFA prompts. Unsuspecting users are then directed to the real login page and sign in to the legitimate service, while the bypass kit steals the user's session token for the threat actor to use at their leisure. Protecting against these attacks can be challenging, as the kits are adept at bypassing MFA and are connected to authentic websites.
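To make concrete why these kits are so effective, the sketch below simulates in plain Python how a session token issued after a successful MFA login can later be replayed on its own. The `SessionStore` class and its methods are invented for illustration; no real service works exactly like this, but the underlying principle - that credentials and MFA are checked once, and the token alone authorises everything afterwards - is what a bypass kit exploits.

```python
# Illustrative sketch (not any real kit or service): once a server has
# issued a session token, requests bearing that token are trusted
# without re-checking the password or the MFA prompt.
import secrets

class SessionStore:
    """Server-side map of session tokens to authenticated users."""
    def __init__(self):
        self._sessions = {}

    def login(self, user, password_ok, mfa_ok):
        # Credentials and MFA are verified exactly once, at login time.
        if not (password_ok and mfa_ok):
            raise PermissionError("authentication failed")
        token = secrets.token_hex(16)
        self._sessions[token] = user
        return token

    def fetch_inbox(self, token):
        # Later requests are authorised by the token alone.
        user = self._sessions.get(token)
        if user is None:
            raise PermissionError("invalid session")
        return f"inbox of {user}"

store = SessionStore()
token = store.login("alice", password_ok=True, mfa_ok=True)

# An attacker proxying the login (as a bypass kit does) sees this token
# in transit and can replay it - no password or MFA prompt needed.
print(store.fetch_inbox(token))  # prints "inbox of alice"
```

This is why phish-resistant methods that bind authentication to the legitimate origin (such as hardware security keys) are harder for these kits to proxy.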
Threat actors often use these bypass kits to impersonate trusted connections and brands. Our research found that some of the most impersonated brands were leading e-commerce and delivery sites such as DHL, Amazon, and FedEx, with DHL (26.1%) and Amazon (7.7%) ranking among our top ten recorded brand impersonations. So, consumers likewise need to be aware of these tactics.
BEACON OF HOPE
Nevertheless, AI is also a formidable ally in the ongoing battle against cyber threats. Security experts and technology vendors integrate AI and machine learning into their defensive toolkits against such attacks, helping keep their solutions one step ahead in the 'cat and mouse' dance with attackers. Additionally, leading AI organisations such as OpenAI have launched initiatives aimed at empowering cybersecurity entities to "AI-enable" their defences. This strategic move is predicted to improve various facets of cybersecurity, ranging from outlier detection, improved log analysis, and threat modelling to AI-simulated attacks, as underscored in Hornetsecurity's Cyber Security Report 2024.
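As a minimal illustration of the outlier-detection idea, the sketch below flags hours whose failed-login counts deviate sharply from the norm using a simple z-score. The data and threshold are invented for illustration; production systems build on far richer models, but the core intuition - unusual activity stands out statistically - is the same.

```python
# Hedged sketch of statistical outlier detection on log data:
# flag time buckets whose failed-login count deviates strongly
# from the historical mean (z-score test). Data is invented.
from statistics import mean, pstdev

def flag_outliers(counts, threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu = mean(counts)
    sigma = pstdev(counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hourly failed-login counts; hour 5 hides a credential-stuffing burst.
failed_logins = [3, 5, 4, 6, 2, 120, 4, 5, 3, 4, 6, 5]
print(flag_outliers(failed_logins))  # -> [5]
```

A security analyst would then triage the flagged hour rather than sift through every log line by hand.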
All businesses with a digital presence may be at risk, so it's imperative to invest in robust security solutions, as well as ongoing employee education and awareness around cyber risks.
A majority of breached businesses were infiltrated due to the absence of robust authentication (preferably MFA with phish-resistant hardware), the allowance of simple passwords, and the lack of employee training to instil caution when clicking on links in emails. Coupled with next-gen cybersecurity solutions, taking action to avoid these missteps can go a long way in protecting against AI-enabled threats.
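As one concrete example of closing the "simple passwords" gap, the sketch below enforces a minimal password policy. The rules and deny-list are illustrative assumptions, not any specific vendor's implementation, and such checks complement rather than replace MFA.

```python
# Illustrative minimal password policy: length, character variety,
# and a tiny deny-list of common passwords. Real deployments would
# use a full breach-corpus check and pair this with MFA.
import re

COMMON = {"password", "123456", "qwerty", "letmein", "admin"}

def is_acceptable(password: str) -> bool:
    """Reject short, common, or low-variety passwords."""
    if len(password) < 12:
        return False
    if password.lower() in COMMON:
        return False
    # Require at least three of: lowercase, uppercase, digit, symbol.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return sum(bool(re.search(p, password)) for p in classes) >= 3

print(is_acceptable("letmein"))             # False: short and common
print(is_acceptable("Tr1cky-Horse-Cable"))  # True
```

Enforcing such a check at account creation removes one of the missteps cited above before it can be exploited.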
More info: www.hornetsecurity.com
STORAGE Magazine, Mar/Apr 2024 - www.storagemagazine.co.uk