01.11.2023 Views

The Cyber Defense eMagazine November Edition for 2023

Cyber Defense eMagazine November Edition for 2023 #CDM #CYBERDEFENSEMAG @CyberDefenseMag by @Miliefsky a world-renowned cyber security expert and the Publisher of Cyber Defense Magazine as part of the Cyber Defense Media Group as well as Yan Ross, Editor-in-Chief and many more writers, partners and supporters who make this an awesome publication! 196 page November Edition fully packed with some of our best content. Thank you all and to our readers! OSINT ROCKS! #CDM #CDMG #OSINT #CYBERSECURITY #INFOSEC #BEST #PRACTICES #TIPS #TECHNIQUES

Cyber Defense eMagazine November Edition for 2023 #CDM #CYBERDEFENSEMAG @CyberDefenseMag by @Miliefsky a world-renowned cyber security expert and the Publisher of Cyber Defense Magazine as part of the Cyber Defense Media Group as well as Yan Ross, Editor-in-Chief and many more writers, partners and supporters who make this an awesome publication! 196 page November Edition fully packed with some of our best content. Thank you all and to our readers! OSINT ROCKS! #CDM #CDMG #OSINT #CYBERSECURITY #INFOSEC #BEST #PRACTICES #TIPS #TECHNIQUES

SHOW MORE
SHOW LESS

You also want an ePaper? Increase the reach of your titles

YUMPU automatically turns print PDFs into web optimized ePapers that Google loves.

Charting a Trustworthy AI Journey<br />

Sound cybersecurity principles <strong>for</strong> responsible generative AI innovation<br />

By Lisa O’Connor, Managing Director—Accenture Security, <strong>Cyber</strong>security R&D, Accenture Labs<br />

As companies turn to generative AI to trans<strong>for</strong>m business operations, traditional notions of trust are being<br />

reshaped. How will the enterprise navigate a journey toward Responsible AI—designing, developing and<br />

deploying AI in ways that engender confidence and trust?<br />

AI brings unprecedented opportunities to businesses, but also incredible responsibility. Its direct impact<br />

on people’s lives has raised considerable questions around AI ethics, data governance, trust and legality.<br />

In fact, Accenture’s 2022 Tech Vision research found that only 35% of global consumers trust how AI is<br />

being implemented by organizations. And 77% think organizations must be held accountable <strong>for</strong> their<br />

misuse of AI.<br />

That skepticism in understandable. Humans will have a more difficult time determining if sources of<br />

in<strong>for</strong>mation are reliable. <strong>The</strong>re are risks that large language models will be used to manipulate data in<br />

ways that will make us question the veracity of all sorts of in<strong>for</strong>mation.<br />

Today, threat actors are using Gen AI to write more and more effective phishing campaigns. <strong>The</strong>y are<br />

getting better at leveraging collected profiles and the troves of in<strong>for</strong>mation we share on-line on social<br />

sites to craft precision spearphishing. Attacks methods against AI and Generative AI are all over the dark<br />

web. <strong>The</strong>se methods take advantage of how these models work. For example “prompt injection” attacks<br />

can cause the large language model (LLM) to unknowingly execute the malicious user’s instructions,<br />

<strong>Cyber</strong> <strong>Defense</strong> <strong>eMagazine</strong> – <strong>November</strong> <strong>2023</strong> <strong>Edition</strong> 28<br />

Copyright © <strong>2023</strong>, <strong>Cyber</strong> <strong>Defense</strong> Magazine. All rights reserved worldwide.

Hooray! Your file is uploaded and ready to be published.

Saved successfully!

Ooh no, something went wrong!