
Artificial Intelligence or Mechanical Turk?

Walking the show floor at Black Hat, most vendors were pitching some sort of AI that would "revolutionize" defense. I found some of these messages deceptive themselves, making promises the industry has heard for years only to disappear into vaporware and disappointment. The advances in machine learning (ML) and large language models (LLMs) have been very promising over the past few years, although still a bit overhyped as "AI" when, in reality, these technologies require reliable data inputs along with ongoing human tuning and supervision.

Machine learning models are only as good as the data they are fed. As any data scientist will tell you, the majority of their job is data prep and cleansing, which also makes these models susceptible to deception through data poisoning and model manipulation. The application of LLMs through tools such as ChatGPT has been a fantastic breakthrough in data science, with the promise of increasing productivity across many different industries. An LLM, however, is a machine learning model that uses natural language processing (NLP) and is trained on massive amounts of text. Some companies have been deceptive about how this technology works, confusing the industry.
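
To make the data-poisoning point concrete, here is a minimal sketch assuming scikit-learn and a synthetic dataset (both stand-ins for illustration, not any vendor's actual pipeline). It shows one simple poisoning technique, label flipping: corrupting a fraction of the training labels measurably degrades a classifier even though the features are untouched.

```python
# Minimal label-flipping data-poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data standing in for real training telemetry.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given labels, score on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Attacker flips 30% of the training labels.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"accuracy with clean labels:    {train_and_score(y_train):.2f}")
print(f"accuracy with poisoned labels: {train_and_score(poisoned):.2f}")
```

Running this shows the poisoned model losing accuracy against the same clean test set, which is exactly why the provenance and integrity of training data matter as much as the model itself.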

Although LLM technology can seemingly create content from a prompt out of thin air, there is more to it than meets the eye. LLMs rely on data inputs like any other model, so they leverage existing works, whether articles, blog posts, art, or even code. It should be no surprise, then, that content creators have filed mass lawsuits claiming copyright infringement against companies like OpenAI, the maker of ChatGPT; source code is no different, not to mention the privacy implications.

LLMs also have another negative side effect: "hallucinations," where they spit out nonsense or untrue content that can trick or confuse a person who believes it, which shows why even some of the most advanced uses of "AI" require a human in the loop to verify content. Interestingly, while this technology can deceive us by accident, the same technology can be, and is being, used offensively to manipulate data models and people and, in many respects, is ahead of the defense.
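
The human-in-the-loop pattern is simple to express in code. Below is a minimal sketch of a review gate; `generate_draft` is a hypothetical stand-in for any LLM call, and the console prompt stands in for a real reviewer queue or UI.

```python
# Minimal human-in-the-loop gate for LLM output (illustrative sketch).

def generate_draft(prompt: str) -> str:
    # Hypothetical placeholder for an actual LLM API call.
    return f"Draft answer for: {prompt}"

def human_review(draft: str) -> bool:
    # A real workflow would route this to a reviewer queue;
    # here we simply ask on the console.
    answer = input(f"Approve this draft? (y/n)\n{draft}\n> ")
    return answer.strip().lower() == "y"

def publish(text: str) -> None:
    print(f"PUBLISHED: {text}")

draft = generate_draft("Summarize our Q3 incident report.")
if human_review(draft):
    publish(draft)
else:
    print("Rejected: draft returned for revision or discarded.")
```

The key design point is that nothing generated reaches a consumer until a person has explicitly approved it, which blunts the impact of hallucinated content.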

Generative Deception

At Def Con, I saw the other side of "AI": the offense. Both the Social Engineering and Misinformation Villages have grown over the years. The Social Engineering CTF was amazing to watch as teams targeted employees at various companies to see how much valuable information they could gather for reconnaissance. This can now be taken a step further to manipulate people using voice synthesis to mimic the voice of an authority figure, family member, or celebrity, for example, to gain the target's trust.

The increasingly widespread use of this technology will pose a significant threat to organizations and individuals, especially as many non-tech-savvy folks are unaware of it and the models become increasingly convincing. In addition, generative AI is being used to create progressively more realistic videos and images, which are already finding their way into propaganda, fraud, and social engineering at a horrifying rate, while most security awareness training programs and other defenses against these types of attacks are slow to catch up.

