
The Cyber Defense eMagazine August Edition for 2023

Cyber Defense eMagazine August Edition for 2023 #CDM #CYBERDEFENSEMAG @CyberDefenseMag, by @Miliefsky, a world-renowned cybersecurity expert and the Publisher of Cyber Defense Magazine, part of the Cyber Defense Media Group, together with Yan Ross, Editor-in-Chief, and the many writers, partners and supporters who make this an awesome publication. Thank you all, and to our readers: OSINT ROCKS! #CDM #CDMG #OSINT #CYBERSECURITY #INFOSEC #BEST #PRACTICES #TIPS #TECHNIQUES


Why compliance standards are important for AI

When people discuss their concerns about artificial intelligence, most cite the loss of jobs or the spread of false information. However, more people should be concerned about their cybersecurity and privacy being endangered by the use of AI. After all, AI models can rapidly process, store, and (perhaps more frighteningly) learn from massive amounts of information. This means that if a hacker gains access, they have enormous amounts of data available to exploit for their own gain.

Compliance standards in the AI industry ensure that AI developers put the right protections in place to minimize or eliminate the risk to the data an algorithm processes and stores. Measures that should be standard include a legal obligation not to sell, rent, or share data with third parties, as well as ensuring that all regulatory requirements for data protection are met or exceeded.

What compliance standards are needed for AI

One of the most important considerations in the use of AI is user consent. From the user's end, it is important to read the terms of use and understand what is being consented to. From the operator's end, enabling users to understand their consent clearly, for example by giving them intuitive tools to track that consent and to completely delete their data, is necessary not only for accountability but also to ensure that users are informed of potential risks. This is especially vital for financial companies, whose user data is particularly sensitive.
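As a rough illustration of what "trackable, deletable consent" can mean in practice, here is a minimal, hypothetical sketch (the class and method names are this sketch's own, not any standard API): a ledger that records what each user consented to and when, lets the user inspect it, and supports full erasure of both the consent history and the stored data.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    """Hypothetical minimal consent ledger for an AI service."""
    records: dict = field(default_factory=dict)     # user_id -> list of consent entries
    data_store: dict = field(default_factory=dict)  # user_id -> stored user data

    def grant(self, user_id: str, purpose: str) -> None:
        # Record what was consented to and when, so the user can audit it later.
        self.records.setdefault(user_id, []).append(
            {"purpose": purpose, "granted_at": datetime.now(timezone.utc)}
        )

    def view_consents(self, user_id: str) -> list:
        # The user can always inspect exactly what they agreed to.
        return list(self.records.get(user_id, []))

    def erase(self, user_id: str) -> None:
        # Complete deletion: both the consent history and the stored data go.
        self.records.pop(user_id, None)
        self.data_store.pop(user_id, None)
```

A real deployment would also need durable storage, audit logging, and propagation of the erasure to backups and downstream systems; the sketch only shows the user-facing contract.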

Companies that implement AI while handling financial data should also adopt stringent cybersecurity standards. Bank-level security controls can ensure that systems and data are fully encrypted and protected, and any sensitive data stored in the system should have restricted access: it should be granted only to authorized, verified users with a legitimate reason to view or use it.
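The access rule described above, verified identity plus an approved role plus a recorded business reason, can be sketched as a single check. This is a hypothetical illustration (the role names and field names are invented for the example), not a substitute for a real authorization framework:

```python
# Hypothetical policy: only verified users in an approved role who supply
# a non-empty, auditable reason may access sensitive records.
AUTHORIZED_ROLES = {"fraud-analyst", "compliance-officer"}

def can_access(user: dict, reason: str) -> bool:
    """Return True only when all three conditions of the policy hold."""
    return (
        user.get("verified", False)          # identity has been verified
        and user.get("role") in AUTHORIZED_ROLES  # role is explicitly allowed
        and bool(reason.strip())             # a legitimate reason is on record
    )
```

In production this check would sit behind every read of the sensitive store, and the supplied reason would be written to an audit log so access can be reviewed after the fact.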

Additionally, it is important to remember that cybersecurity is about being proactive. Entities employing AI that want to be proactive about protecting their data should pursue penetration and vulnerability testing from a professional service. Penetration testing exposes the weaknesses of a program and its security measures before wrongdoers can exploit them, so fixes can be implemented to protect the data.

Still, there are certain types of data that users should avoid inputting into AI programs, and that the entities behind those programs should avoid collecting and storing, regardless of how strong the system might seem. If an AI program contains data that is typically valuable to wrongdoers, such as card payment information or usernames and passwords for banking accounts, it is more likely to be targeted and therefore far more susceptible to data breaches. After all, the best defense against an attack is to prevent it from ever happening in the first place.
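One simple way to keep such high-value data out of an AI system is to redact it before the text is ever stored or sent to a model. The sketch below is a deliberately crude, hypothetical example using a regular expression for likely card numbers; real deployments would use a proper data-loss-prevention tool covering many more identifier types:

```python
import re

# Hypothetical redaction pass: strip likely payment-card numbers (13-16
# digits, optionally separated by spaces or dashes) before the text
# reaches an AI system, so the data never enters the model at all.
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def redact(text: str) -> str:
    """Replace likely card numbers with a placeholder token."""
    return CARD_PATTERN.sub("[REDACTED]", text)
```

The point of placing this step at ingestion, rather than relying on encryption at rest, is that data which was never collected cannot be breached.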

The truth is that, like any other tool they use, businesses will be held accountable for the risks created by their use of artificial intelligence. That isn't to say that businesses should not implement AI, which is a powerful tool with numerous exciting implications, but it is vital that companies use this technology responsibly.

Cyber Defense eMagazine – August 2023 Edition

Copyright © 2023, Cyber Defense Magazine. All rights reserved worldwide.
