The Cyber Defense eMagazine March Edition for 2024


It should be noted that transforming breach data into a format compatible with LLMs demands considerable expertise and resources. This isn't a casual pursuit; it's a multibillion-dollar industry often backed by various governments. Although currently not widespread, elite threat actors, armed with ample resources, could potentially utilize LLMs to analyze data in unprecedented ways. As malicious AI applications become more advanced, even those who aren't well versed in network security mechanics will be able to enter the game. This includes state-sponsored threat actors who don't care about government regulations on what AI can and cannot do, and hacktivists who will utilize AI to generate new exploits to make their point.

Imagine if threat actors could input data from multiple breaches into an LLM, making it readable. They could instruct an AI platform to discern patterns and details about individuals in the database, surpassing what has been done before. This could involve extracting extensive personal information about a person, their family, and associates, leading to various malicious activities.

For instance, threat actors could cross-reference this information with social media profiles, conference attendance records, or employment details from platforms like LinkedIn. This knowledge could be exploited for activities ranging from utility service disruptions to identity-based extortion.

While this may seem like a scenario from a Hollywood movie, the capability exists today, although not without challenges. Mainstream AI platforms like those from OpenAI and Google have built-in ethical protections. However, if hackers gain access to an open software platform with advanced capabilities, they could potentially modify it to strip out those protections.

Contrasting this with traditional methods, early data breaches primarily involved credit card and Social Security number theft. Credit card details were sold on the dark web, enabling subsequent illicit transactions. In the current landscape, breaches involve diverse data, such as medical records. Attackers now need to formulate specific queries, or understand how to build queries, for the stolen data, a process made more efficient with AI. Specifically, AI enhances the ability to cross-reference data, identify patterns, and track individuals and their associations, marking a significant departure from conventional cyberattack methods.

Recently, we have seen several companies pop up overnight utilizing this new type of aggregated breach database to search for people and surface what exposed data may be out there for specific individuals. Out of curiosity, I went to Malwarebytes the other day and entered my work email to see what might be lurking on the dark web about me. In less than a few minutes, the site correctly revealed my work address, where I live, and 14 breaches in which a password of mine had been exposed.
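
For readers who want to run a similar check without handing a password to a third-party site, the sketch below uses the free Pwned Passwords range API from the Have I Been Pwned service (a stand-in for the Malwarebytes lookup described above, not the same tool). It relies on k-anonymity: only the first five characters of the password's SHA-1 hash are ever sent, and the matching happens locally. Treat it as a minimal illustration rather than a production tool.

```python
import hashlib
import urllib.request


def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora,
    using the Pwned Passwords k-anonymity range API. Only the first five
    hex characters of the SHA-1 hash ever leave this machine."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<remaining 35 hash characters>:<breach count>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = password_breach_count("correct horse battery staple")
    print(f"Exposed in known breaches {hits} times" if hits else "Not found in known breach data")
```

Any result greater than zero means the password has already circulated in breach dumps and should be retired, no matter how strong it looks.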

It can't be emphasized enough: the rising threat of AI-enhanced cyberattacks is a cause for concern. Currently, the technology to detect such advancements is being developed in tandem with the evolving threat actor tactics. While companies have primarily focused on utilizing AI for threat hunting through tools like security information and event management (SIEM), threat actors across various levels are now employing AI to craft sophisticated phishing attacks. For example, in the context of AI-enhanced phishing emails, conventional methods of identifying language and grammar errors are becoming less effective. AI can now generate phishing emails that appear professionally written.
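
Since polished writing no longer separates phishing from legitimate mail, defenders are shifting toward signals the text itself cannot fake, such as who the message actually came from. The sketch below is one hypothetical, grammar-independent check: it flags sender domains that closely resemble, but do not exactly match, a trusted domain. The trusted-domain list and similarity threshold are illustrative assumptions, not a feature of any particular product.

```python
from difflib import SequenceMatcher

# Illustrative allow-list; a real deployment would use the organization's own domains.
TRUSTED_DOMAINS = {"example.com", "examplebank.com"}


def looks_like_spoof(sender_domain: str, similarity_floor: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not match, a trusted domain.
    A near-miss such as 'examp1ebank.com' is a classic lookalike-domain signal
    that does not depend on the email's grammar or spelling."""
    domain = sender_domain.lower()
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= similarity_floor
        for trusted in TRUSTED_DOMAINS
    )


if __name__ == "__main__":
    print(looks_like_spoof("examp1ebank.com"))  # True: one character off examplebank.com
    print(looks_like_spoof("unrelated.org"))    # False: no trusted domain is similar
```

In practice a heuristic like this would sit alongside standard sender-authentication checks such as SPF, DKIM, and DMARC rather than replace them.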

