
Cyber Defense Magazine Special Annual Edition for RSA Conference 2021

Cyber Defense Magazine Special Annual Edition for RSA Conference 2021 - the INFOSEC community's largest, most popular cybersecurity event in the world. Hosted every year in beautiful and sunny San Francisco, California, USA. This year, post COVID-19, virtually with #RESILIENCE! In addition, we're in our 9th year of the prestigious Global InfoSec Awards. This is a must-read source for all things infosec.


legitimate activity may be blocked, disrupting business operations, or malicious activity may be mistakenly classed as benign with no alert generated, leaving the organization exposed to risk.

Hurdles AI/ML detection tools must overcome

Despite their potential, AI/ML detection tools are not a panacea. They can miss threats, and they can flag activities as malicious when they're not. For these tools to be trusted, the alerts they raise and the decisions they make need to be continually validated for accuracy.

If we train our AI and machine learning algorithms the wrong way, they risk creating even more noise, alerting on things that are not real threats ("false positives"). This wastes analysts' time chasing phantoms that turn out to be harmless. Alternatively, the tools may completely miss threats that should have been alerted on ("false negatives").
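The balance between these two failure modes can be tracked with standard detection metrics. A minimal sketch, where the event counts are invented purely for illustration:

```python
def detection_metrics(tp, fp, fn, tn):
    """Compute standard rates from confusion-matrix counts."""
    precision = tp / (tp + fp)            # of raised alerts, how many were real
    recall = tp / (tp + fn)               # of real threats, how many were caught
    false_positive_rate = fp / (fp + tn)  # how often benign activity triggers alerts
    return precision, recall, false_positive_rate

# Hypothetical week of alerts: 40 true hits, 60 false alarms,
# 10 missed threats, 890 benign events correctly ignored.
p, r, fpr = detection_metrics(tp=40, fp=60, fn=10, tn=890)
print(f"precision={p:.2f} recall={r:.2f} fpr={fpr:.3f}")
# → precision=0.40 recall=0.80 fpr=0.063
```

Low precision means analysts are chasing phantoms; low recall means real threats are slipping through.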

How do we guard against the pitfalls?

In the case of false positives, we need to validate that a detected event is not malicious and train the tool to ignore such situations in future, while at the same time ensuring this doesn't cause the tool to stop alerting on similar issues that are in fact malicious. The key to doing this effectively is having access to evidence that enables accurate and timely investigation of detected threats. Recorded packet history is an indispensable resource in this process, allowing analysts to determine precisely what happened and to accurately validate whether an identified threat is real.
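As a sketch of how recorded packet history supports this validation step, the filter below pulls the packets relevant to an alert out of a capture index. The record fields and alert format are illustrative assumptions, not any specific product's API:

```python
from dataclasses import dataclass

@dataclass
class PacketRecord:
    ts: float        # capture timestamp (epoch seconds)
    src: str         # source IP
    dst: str         # destination IP
    dst_port: int

def evidence_for_alert(records, alert, window=60.0):
    """Return recorded packets around an alert so an analyst
    can confirm or dismiss the detection."""
    return [r for r in records
            if r.src == alert["src"]
            and r.dst == alert["dst"]
            and abs(r.ts - alert["ts"]) <= window]

# Hypothetical capture history and alert.
history = [
    PacketRecord(ts=1000.0, src="10.0.0.5", dst="203.0.113.9", dst_port=443),
    PacketRecord(ts=1010.0, src="10.0.0.5", dst="203.0.113.9", dst_port=4444),
    PacketRecord(ts=5000.0, src="10.0.0.7", dst="198.51.100.2", dst_port=80),
]
alert = {"src": "10.0.0.5", "dst": "203.0.113.9", "ts": 1005.0}
print(len(evidence_for_alert(history, alert)))  # → 2 matching packets
```

In practice the analyst would inspect the matching packets' payloads and ports (note the 4444 connection above) to decide whether the alert was genuine.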

Dealing with false negatives is more difficult. How can we determine whether a threat that should have been detected was missed? There are two main approaches. The first is to implement regular, proactive threat hunting to identify whether there are real threats that your detection tools, including AI/ML tools, are not catching. Threat hunting is a good habit to get into anyway, and if a hunt finds something your AI/ML tool missed the first time around, it provides an opportunity to train the tool to correctly identify similar threats the next time.
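One simple way to close that loop is to feed an indicator from the hunt's finding straight back into the detector. A toy signature-style sketch, where the indicator set and event format are assumptions for illustration (a real AI/ML tool would retrain on labeled examples rather than store literal indicators):

```python
class Detector:
    def __init__(self, indicators):
        self.indicators = set(indicators)

    def is_threat(self, event):
        return event["indicator"] in self.indicators

    def learn_missed_threat(self, event):
        """Called when threat hunting surfaces a false negative:
        add its indicator so similar activity alerts next time."""
        self.indicators.add(event["indicator"])

det = Detector({"evil.example", "badhash123"})
missed = {"indicator": "new-c2.example"}
assert not det.is_threat(missed)   # originally a false negative
det.learn_missed_threat(missed)
assert det.is_threat(missed)       # detected next time
```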

The second approach is simulation testing, of which penetration testing is one example. By creating simulated threats, companies can clearly see whether their AI/ML threat detection tools identify them correctly. If they don't, it's once again an opportunity to train the tool to recognize similar activity as a threat in future.
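Simulation testing lends itself to automation: replay known-bad events through the detector and report any that fail to alert. A minimal sketch, with a hypothetical detector and event format:

```python
def run_simulation(detector, simulated_threats):
    """Replay simulated malicious events; return detection
    coverage and the list of events the detector missed."""
    missed = [t for t in simulated_threats if not detector(t)]
    coverage = 1 - len(missed) / len(simulated_threats)
    return coverage, missed

# Hypothetical detector that only recognizes one of two simulated techniques.
detector = lambda event: event["technique"] in {"credential-dumping"}
threats = [{"technique": "credential-dumping"},
           {"technique": "lateral-movement"}]
coverage, missed = run_simulation(detector, threats)
print(coverage, missed)
# → 0.5 [{'technique': 'lateral-movement'}]
```

Each item in `missed` is a concrete training opportunity: the tool failed the simulation for that activity, so it can be tuned before a real attacker tries the same technique.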

