CS Jul-Aug 2023
these apps exist and always be sure to read the fine print whenever hitting 'Subscribe'. Users can also report apps to Apple and Google, if they think the developers are using unethical means to profit."
KEEPING CONTROL

People are often suspicious of new technologies, with Sci-Fi movies, such as 'The Terminator', playing on this fear, says David Trossell, CEO and CTO of Bridgeworks. "Sure, cyber-criminals could use AI against us, but equally we can use AI to protect ourselves, or to do more with fewer resources. Machine learning doesn't mean that autonomous machines will eventually control us. We can use AI and ML to maintain control. For example, new technologies incorporate AI to ensure that voluminous amounts of encrypted data can travel securely at unrivalled speeds over a Wide Area Network."
He points to an innovation called WAN Acceleration, which uses AI, ML and data parallelisation to mitigate latency and packet loss. "It permits the secure transport and ingress of encrypted data. Organisations can increase their bandwidth utilisation without investing in new pipes. Without AI and ML, data would be neither as secure nor as fast over large distances. Rather than making IT redundant, it enables CIOs to focus on strategic tasks," advises Trossell.
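The article does not describe Bridgeworks' actual implementation, but the general idea of data parallelisation over a WAN (splitting a transfer across several concurrent streams, so that round-trip latency and loss affect each stream individually rather than the whole payload) can be sketched in Python. All names here (`split_chunks`, `transfer`, `parallel_send`) are illustrative, and `transfer` is a stand-in for a real per-stream socket send rather than any vendor's method.

```python
from concurrent.futures import ThreadPoolExecutor

def split_chunks(data: bytes, n: int) -> list[bytes]:
    """Split a payload into roughly equal chunks for parallel transfer."""
    size = -(-len(data) // n)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def transfer(chunk: bytes) -> bytes:
    """Stand-in for sending one chunk over its own WAN stream.
    A real accelerator would push each chunk down a separate TCP
    connection, so latency stalls one stream, not the whole payload."""
    return chunk  # echo back to simulate a loss-free round trip

def parallel_send(data: bytes, streams: int = 4) -> bytes:
    """Send chunks concurrently and reassemble them in order."""
    chunks = split_chunks(data, streams)
    with ThreadPoolExecutor(max_workers=streams) as pool:
        received = list(pool.map(transfer, chunks))  # map preserves order
    return b"".join(received)

# Opaque (e.g. already-encrypted) payload round-trips intact.
payload = bytes(range(256)) * 64  # 16 KiB
assert parallel_send(payload) == payload
```

The sketch shows only the parallelisation aspect; the AI/ML element Trossell describes would sit above this, tuning the number of streams and chunk sizes to observed latency and loss.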
"Given the benefits of AI and ML, the<br />
question is: 'Why are they trying to stop the<br />
inevitable?' The genie is out of the bottle.<br />
Rodney Brooks [the Australian roboticist]<br />
argues that you have to be aware that, with<br />
any new technology, 50% of the answers are<br />
incorrect. Don't confuse performance with<br />
competence. It's a bit like the cloud. Everyone<br />
rushed to the cloud to avoid missing out,<br />
only to regret it. In 2017, The Global and<br />
Mail wrote: 'The public cloud provider<br />
Nirvanix, in San Diego, California, went<br />
under in 2013, forcing clients to scramble to<br />
retrieve their data before it was forever lost.'<br />
People should reflect, plan, and re-evaluate AI and ML, Trossell says. "The big VCs are piling in with money to get on the bandwagon, and AI isn't new. As for ethics, they've never prevented the making of a dollar. You can see this with Meta and Twitter. Generative AI such as ChatGPT is going to be the same - just another tool. Will it cause mass unemployment? Possibly, but growing global trade and investment impact these changes as well. Organisations and consumers adapt, so AI should continue to be embraced by enterprises and startups developing technologies in the future."
LEGAL IMPLICATIONS

Ultimately, there is no denying the importance AI will have in the future, says Boris Cipot, senior security engineer at the Synopsys Software Integrity Group. "AI will change the way information is generated, processed and used. However, the primary question at this point is: who will control the usage of this AI? It is understandable that some companies have established policies prohibiting their employees from using AI-based technology for work-related tasks, as there are still many unanswered questions from a legal or security standpoint.
"For instance, legally, who is the owner or<br />
author of what is provided by AI and what<br />
are the flaws AI may have generated? Here<br />
the flaws can be interpreted as vulnerabilities<br />
in source code created by AI or misinformation<br />
that it is susceptible to, based on<br />
materials used to train the AI. As learned<br />
from the past, technology can be used for<br />
good, but also for bad. AI, still in its infancy,<br />
is no different.<br />
"We cannot predict every possible decision<br />
for every scenario around which AI may be<br />
used," points out Cipot. "But AI systems need<br />
to be trained with reliable and accurate<br />
information, tested to ensure they're not<br />
spreading vulnerable or inaccurate output,<br />
and maintained to ensure they're leveraged<br />
in a productive and constructive way."<br />