
in terms of reputational, operational, and financial damage. The damage inflicted by such a breach would not stop at the company boundaries, but would create a ripple effect across the AI ecosystem as organizations that had relied on the model(s) would need to immediately go into damage-control mode. Abruptly ceasing to use the model(s) would affect applications that require them, and security teams would have to investigate, reassess, and possibly recreate or replace elements of the organizational security infrastructure. Explaining their accountability to their own shareholders and customers would be a painful exercise for executives, and would come with its own set of consequences.

• An enterprise embracing GenAI is going to have a permissioning breach due to multiple models at play and a lack of access controls. As a company layers in external base models, such as ChatGPT, as well as models embedded in SaaS applications and retrieval-augmented generation (RAG) models, the organizational attack surface expands, the security team's ability to know what's going on (observability) decreases, and the intense, perhaps even giddy, focus on increased productivity overshadows security concerns. Until, that is, a disgruntled project manager is given the access to the new proprietary accounting model that the payroll manager with a similar name requested. Depending on the level of disgruntlement and the personality involved, company payroll information could be shared in the next sotto voce rant at the coffee machine, in an ill-considered all-hands email, or as breaking news on a business news website. Or nothing will be shared and no one will notice the error until the payroll manager makes a second request for access. Whatever the channel or audience, or lack thereof, the company has experienced a serious breach of private, confidential, and highly personal data, and must address it rapidly and thoroughly. The AI security team's days or weeks will be spent reviewing and likely overhauling the organization's AI security infrastructure, at the very least, and the term "trust layer" will become a feature of their vocabulary.
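To make the permissioning point concrete, here is a minimal, hypothetical sketch of the kind of role-based gate that would have prevented the mix-up described above. The names (MODEL_ACL, request_model_access) are illustrative, not drawn from any particular product.

```python
# Hypothetical sketch: a minimal role-based gate in front of internal
# model endpoints. All names here are illustrative assumptions.

MODEL_ACL = {
    # model name -> roles allowed to query it
    "proprietary-accounting-model": {"payroll", "finance-admin"},
    "general-chat-model": {"payroll", "finance-admin", "project-management"},
}

def request_model_access(user: str, role: str, model: str) -> bool:
    """Grant access only if the requester's role is on the model's ACL."""
    allowed_roles = MODEL_ACL.get(model, set())
    granted = role in allowed_roles
    # Log every decision so the security team retains observability
    # as the number of models in play grows.
    print(f"access {'GRANTED' if granted else 'DENIED'}: "
          f"user={user} role={role} model={model}")
    return granted

# The scenario from the text: a project manager requests the accounting
# model that a payroll manager with a similar name asked for.
request_model_access("j.smith", "project-management",
                     "proprietary-accounting-model")  # DENIED
request_model_access("j.smyth", "payroll",
                     "proprietary-accounting-model")  # GRANTED
```

The design choice that blocks the scenario is matching on an authorized role rather than on a similar-looking username; the logging line is what preserves the observability the bullet says erodes as models multiply.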

• Data science will become increasingly democratized thanks to foundation models (LLM usage). The speed and power of LLMs to analyze and extract important insights from huge amounts of data, to simplify complex, time-consuming processes, and to develop scenarios and predict future trends have already begun to bring big-data analytics into the workflow of teams and departments in all business functions. That will continue to scale up dramatically. Across an organization, teams will increasingly be able to rapidly generate data streams tailored to their specific needs, which will streamline productivity and expand the institutional knowledge base. Humans will not be out of the loop, however, as I do not foresee models' propensity to make stuff up being resolved any time soon, although fine-tuning is showing some benefits in that area.
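As a sketch of what that democratization can look like in practice, here is a minimal, hypothetical example of a non-specialist team asking a foundation model to analyze its own data. It assumes the OpenAI Python SDK with an API key in the environment; the model name and dataset are illustrative.

```python
# Hypothetical sketch of "democratized" analytics: a non-specialist team
# summarizes its own data with a foundation model. Assumes the OpenAI
# Python SDK and an OPENAI_API_KEY environment variable; the model name
# and CSV contents are illustrative.
from openai import OpenAI

quarterly_sales = """region,quarter,revenue
EMEA,Q3,1.2M
EMEA,Q4,1.5M
APAC,Q3,0.9M
APAC,Q4,0.7M
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a data analyst. Summarize trends and flag anomalies."},
        {"role": "user",
         "content": f"Analyze this quarterly sales CSV:\n{quarterly_sales}"},
    ],
)

# Keep a human in the loop: models can still make stuff up, so the
# output is a draft for review, not a final report.
print(response.choices[0].message.content)
```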

• Increasingly novel cyberattacks created by offensively fine-tuned LLMs like WormGPT and FraudGPT will occur. The ability to fine-tune specialized models quickly and with relative ease has been a boon to developers, including the criminal variety. Just as models can be trained on a specific collection of financial data, for instance, models can also be trained on a corpus of malware-focused data and be built with no guardrails, ethical boundaries, or limitations on criminal activity or intent. As natural language processing (NLP) models, these tools function as ChatGPT's evil cousins, possessing the same capabilities for generating malicious
