CS Jan-Feb 2024

2024 predictions

Aaron Kiemele, Jamf: smaller businesses in the supply chain or partner ecosystem will increasingly be attacked as vectors to the true targets.

Simon Hodgkinson, Semperis: Businesses are finally starting to understand that cyber is an enterprise risk.

features in Active Directory. It is good to see that there is a bigger focus placed on identity protection.

Zscaler

AI and machine learning (ML) will resurface the data privacy debate. We are starting to see customers asking how best to protect their own data when working with third-party providers. There is a growing concern that, because cloud providers and other vendors have access to an organisation's data, they are increasingly likely to be targeted by bad actors seeking to acquire a company's data using AI and ML tools.

There is also a legislative discussion to be had, as GDPR currently puts AI models in jeopardy. Because models are trained on datasets, organisations need a stable, consistent set of data in order to be as accurate as possible. GDPR currently says that companies should only keep data for as long as is necessary to process it, which could have serious implications for AI models going forward. In 2024, we expect companies to revisit their data privacy policies and deploy more bespoke data loss prevention (DLP) tools to secure their datasets and ensure data privacy is at the top of the cybersecurity agenda.
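The retention tension described above can be made concrete with a small sketch. The one-year window, the record schema and the function name below are all illustrative assumptions, not anything mandated by GDPR: under a storage-limitation policy, training records simply age out of the dataset, which is what destabilises it.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention window; real limits depend on the lawful basis
# for processing and are a legal question, not a fixed constant.
RETENTION = timedelta(days=365)

def purge_expired(records, now=None):
    """Drop records collected longer ago than the retention window.

    Each record is assumed to be a dict with a 'collected_at' datetime;
    this schema is a hypothetical example for illustration only.
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]
```

Run periodically, a purge like this shrinks and reshapes the training set over time, which is exactly the stability problem the prediction points at.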

Organisations will need to learn to hide their attack surface at the data level. The influx of generative AI, such as ChatGPT, has forced businesses to realise that, if their data is available on the internet, it can be used by generative AI, and therefore by competitors, regardless of whether it is owned IP. So, if organisations want to keep their IP from being used by AI tools, they will need to hide their attack surface at the data level, rather than just at the application level. Based on this trend, we predict initiatives to classify data into risk categories and implement security measures accordingly to prevent leakage of IP.
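The classify-then-enforce idea above can be sketched minimally. The category names, the patterns and the policy function here are illustrative assumptions, not taken from any specific DLP product: data is mapped to a risk category, and the category drives what is allowed to leave the organisation.

```python
import re

# Hypothetical risk categories and the patterns that map data to them.
PATTERNS = {
    "restricted": [
        re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like numbers
        re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    ],
    "confidential": [
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    ],
}

def classify(text: str) -> str:
    """Return the highest-risk category whose pattern matches, else 'public'."""
    for category in ("restricted", "confidential"):
        if any(p.search(text) for p in PATTERNS[category]):
            return category
    return "public"

def allow_upload(text: str) -> bool:
    """A minimal policy: block anything above 'public' from leaving the org."""
    return classify(text) == "public"
```

Real DLP tooling layers exact-match fingerprints and ML classifiers on top of patterns like these, but the shape of the control is the same: classification first, measures keyed to the category.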

Amit Sinha, CEO, DigiCert

AI may be a coup for defenders, but in 2024 attackers are going to use it to develop new tactics and launch ever-more sophisticated attacks. At the most basic level, they'll be able to use generative AIs like ChatGPT, or malicious versions like FraudGPT, to educate themselves on how to plan and perpetrate cyberattacks with little pre-existing technical knowledge or coding experience. Even fledgling attackers will be able to use AI capabilities to scrape key information about potential victims, harvesting crucial data from around the internet to enable social engineering attacks and perpetrate identity fraud.

Generative AIs will increasingly be used to create sophisticated malware that can avoid detection by using advanced techniques like steganography. Indeed, examples of this have already emerged. These 'intelligent' malware strains will be harder to anticipate, and many legacy detection systems will struggle to keep up with these new threats.
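As a rough illustration of why steganographic payloads are hard for signature-based scanners to spot, here is a minimal least-significant-bit embedding sketch over a raw byte buffer (a stand-in for image pixel data; the function names are illustrative, not from any real toolkit). Each carrier byte changes by at most one, so the carrier looks statistically almost identical to the original.

```python
def embed_lsb(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of each carrier byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_lsb(carrier: bytes, payload_len: int) -> bytes:
    """Recover payload_len bytes from the carrier's least-significant bits."""
    out = bytearray()
    for j in range(payload_len):
        byte = 0
        for i in range(8):
            byte |= (carrier[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

Because the payload never appears as a contiguous byte sequence in the file, detection has to rely on statistical or behavioural analysis rather than pattern matching, which is the gap legacy systems struggle with.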

Just as AI technologies will let legitimate users create websites quickly, they will allow attackers to create fake websites, watering holes and phishing sites like never before, because AI can write, build and render a page as fast as a search result can be delivered.

Generative AIs are also capable of impersonating others by learning their writing style and tone of voice. This sets the stage for advanced phishing attacks that can impersonate a victim's colleagues, friends or family better than a real human can. This will give spear-phishing and highly targeted phishing attacks a much greater degree of authenticity, especially because they'll appear to come from trusted accounts that the intended victim supposedly knows well. Better and more realistic deepfakes will also emerge, which will fuel social engineering and

computing security Jan/Feb 2024 @CSMagAndAwards www.computingsecurity.co.uk
