
NETWORKcomputing

INFORMATION AND COMMUNICATIONS – NETWORKED www.networkcomputing.co.uk

7th HEAVEN?

Making the case for upgrading

to Wi-Fi 7 in 2025

BACKUP AND

RECOVERY STRATEGIES

Why businesses must be

better prepared

NO ‘YOU’ IN USB

Avoiding a sticky security

situation with USB storage

A GAP IN THE CLOUD

Overcoming the skills

gap in cloud computing

MAY/JUNE 2025 VOL 34 NO 02


Driving the Future of

Public Sector Technology

Discover the latest technology and strategies to enhance citizen

experience and deliver seamless, user-centric public services.

Complimentary Passes Available for the Public Sector

REGISTER TODAY

Strategic Partners:

Discover more at www.digital-government.co.uk


COMMENT

BRINGING HOME THE TROPHIES IN 2025

May has rapidly become the month of trophy wins in London - first Crystal

Palace, then Spurs and Arsenal WFC, and now of course the 2025 Network

Computing Awards! The 19th annual Network Computing Awards took place

at an evening Awards ceremony at Hilton London Tower Bridge on 22 May just as this

issue of the magazine was going to print (timing is everything!), so we only have time

and space to include news of some of our winners here ahead of a full round-up in the

next issue.

They include Zyxel Networks, winners of our Network Management Product of the

Year award for Nebula, and EfficientIP, who won the Observability Product of the Year

category for their DDI Observability Center, while Endace took home the trophy for

the One to Watch Product for the 100GbE EndaceProbe 9400 G5. The 2025 Bench

Tested Product of the Year award was won by NetAlly for their LinkRunner AT4000

which, as David Mitchell said in his review, "raises the bar for cable testing and network

analysis, as it delivers a remarkably powerful set of diagnostics and troubleshooting

features in a ruggedised handheld device."

REVIEWS:

Dave Mitchell

DEPUTY EDITOR: Mark Lyward

(netcomputing@btc.co.uk)

PRODUCTION: Abby Penn

(abby.penn@btc.co.uk)

DESIGN: Ian Collis

(ian.collis@btc.co.uk)

SALES:

David Bonner

(david.bonner@btc.co.uk)

SUBSCRIPTIONS: Christina Willis

(christina.willis@btc.co.uk)

PUBLISHER: John Jageurs

(john.jageurs@btc.co.uk)

Published by Barrow & Thompkins

Connexion Ltd (BTC)

Suite 2, 157 Station Road East,

Oxted,

RH8 0QE

Tel: +44 (0)1689 616 000

Fax: +44 (0)1689 82 66 22

SUBSCRIPTIONS:

UK: £35/year, £60/two years, £80/three

years;

Europe: £48/year, £85/two years, £127/three

years;

ROW:

£62/year, £115/two years, £168/three years

© 2025 Barrow & Thompkins Connexion Ltd.

All rights reserved. No part of the magazine

may be reproduced without prior consent,

in writing, from the publisher.

101 Data Solutions were crowned Distributor of the Year while the Customer

Service Award went to Prism DCS, and BlueCat Networks won the Testing/Monitoring

Product of the Year for the LiveAction LiveNX. Our Network Project of the Year -

Connectivity winners were Broad Horizons Education Trust and Acmatic Networks, while Evolvement Networks and CyberSmart won the Network Project of the Year - Security category, and Oliver Reynolds of Prism DCS won our Inspiration Award.

Congratulations once again to all of our winners and runners-up on the night, and a

big 'thank you' to our sponsors, guests on the night, and everyone who took the time

to vote online. You'll find a full list of our winners on the Awards website.

GET FUTURE COPIES FREE

BY REGISTERING ONLINE AT

WWW.NETWORKCOMPUTING.CO.UK/REGISTER



MAY/JUNE 2025 VOL 34 NO 02

CONTENTS


BACKUP AND RECOVERY.....12

Are organisations doing enough to backup

and protect their data? Frank DeBenedetto

at Kaseya talks us through the findings of

Kaseya's State of Backup and Recovery

Report 2025

A GAP IN THE CLOUD............20

Sam Woodcock at 11:11 Systems outlines

the advantages of partnering to overcome

the technology skills gap in cloud computing

7TH HEAVEN?.........................10

Upgrading to Wi-Fi 7 in 2025 could prove

to be attractive for a lot of organisations,

according to Hugh Simpson at Zyxel

Networks

AVOIDING OUTAGES WITH

ADVANCED MONITORING...16

Greg Collins at Progress on why enterprises

need advanced internet connection monitoring

for each of their remote sites

NO ‘YOU’ IN USB.................24

Jon Fielding at Apricorn explains why

businesses can't afford to let USB storage

devices create a sticky security situation

COMMENT.....................................3

Bringing home the trophies in 2025

INDUSTRY NEWS.............................6

The latest networking news

ARTICLES

PIVOTING DATA CENTRE DESIGN

AROUND STORAGE........................14

By Wendell Wenjen at Supermicro

WELCOMING AI..............................18

By Tal Barmeir at BlinqIO

BRIDGING THE AI SKILLS GAP.........19

By Yohan Lobo at M-Files

BEYOND DEEPSEEK.........................22

By Julius Cerniauskas at Oxylabs

DATA LOSS PREVENTION................26

By Iwona Zalewska at Kingston Technology

DATA ENGINEERING IN THE AI ERA...27

By Mari Nilsson Bjorkman at SAS

THE DATA-DRIVEN PATH TO

AUTONOMOUS NETWORKS..........28

By Phil Kippen at Snowflake

GOING BEYOND THE BUILD

FOR AI.............................................30

By John Abbott at Vertiv

IDENTIFYING NIS2 CHALLENGES....32

By Anders Askasen at Omada

UNIFYING AUTOMATED SECURITY...34

By Mike Fry at Logicalis UK&I

REVIEW

EXAGRID S3 STORAGE FOR VEEAM....8

LIVEACTION LIVENX 25.1.....................15




INDUSTRY NEWS


Colt enhances its On Demand NaaS platform

Colt has announced an expansion to its On Demand

Network as a Service (NaaS) platform with the addition of

three new features. Available now, the three new features are

On Demand Diversity, Dedicated Cloud Ports and new Multi-

Vendor 'Offnet'. On Demand Diversity allows businesses to select

different network routes for their Ethernet, Cloud and Internet

services using a self-serve ordering system in the Colt NaaS

platform. The ability to request access-level diversity 'on demand'

improves network performance; bolsters resilience and disaster

recovery; and supports deeper levels of compliance.

The Dedicated Cloud Ports offer greater scalability for

businesses' cloud solutions, complementing the hosted cloud

capability already live in the platform. They provide direct,

private connections to the cloud and high capacity, reliability

and security. The new Multi-Vendor Offnet feature enhances

flexibility and efficiency for Colt's customers, by enabling them

to choose from hundreds of carrier partners when they connect

a new 'offnet' location which is not directly part of Colt's own

network infrastructure.

New leadership and SME focus for NETGEAR

NETGEAR is evolving its mission to serve small to medium

enterprises by sharpening its focus on delivering next-generation

networking solutions that provide simplicity, reliability

and cost-effectiveness. "In this next era of NETGEAR we're

stripping away complexity, removing friction, and reimagining

what business networking can be so that our customers don't just

keep up with change, they lead it," according to NETGEAR CEO

CJ Prober. Helming this evolution is NETGEAR for Business

President and GM Pramod Badjate, who has been driving the

strategic repositioning to make NETGEAR the go-to partner for

SMEs facing increasing digital demands since joining the

company seven months ago.

During this period NETGEAR has experienced rapid growth in

this segment, with over 15% revenue expansion last quarter and

double-digit growth forecasted for 2025. Over the next three

years, the company plans to accelerate its investment in R&D to

enhance its current business offerings. The first key milestone is the

acquisition of VAAG Systems, a creator of cutting-edge

embedded and cloud software solutions based in Chennai, India.

This investment allows NETGEAR to accelerate the in-sourcing of

a software development capability and this team will form the

foundation of NETGEAR's new Chennai-based Software

Development Center. This new team brings a wealth of industry

expertise with experience and will focus on leveraging AI to

simplify networking for SMEs.

Hans Nipshagen and Arno van Gennip

Two new appointments to nLighten's leadership team

European edge data centre market specialist nLighten has

expanded its leadership team with the appointment of two senior

executives: Hans Nipshagen as Vice President Channel Sales, AI

and Platform Sales, and Arno van Gennip as Vice President

Operations Enablement. These strategic additions are part of

nLighten's ongoing growth across Europe and reinforce nLighten's

commitment to delivering high-performance, sustainable digital

infrastructure close to its customers.

"Welcoming Hans and Arno to the leadership team is a significant

milestone for nLighten, as we scale across Europe," said Harro

Beusker, CEO and Co-Founder of nLighten. "Hans brings

invaluable expertise in building thriving partner ecosystems across

complex markets, while Arno's operational leadership and

sustainability focus will be crucial as we grow responsibly. Both are

strategic hires that will strengthen our capabilities and help position

nLighten as a leading edge data centre platform in Europe."

As nLighten continues to expand its footprint with sustainable, low-latency

data centres across core business hubs in Europe, the

appointments of Hans Nipshagen and Arno van Gennip reflect the

company's commitment to combining deep local expertise with

strong pan-European leadership.




Geopolitics top cyber agenda at Infosecurity Europe

Rory Stewart OBE and Paul Chichester, Director of Operations at

the UK's National Cyber Security Centre (NCSC), will headline

day two of this year's Infosecurity Europe, with keynote sessions

focused on the growing connection between geopolitics and

cybersecurity. Infosecurity Europe, which runs from 3-5 June 2025

at ExCeL London, will explore how power shifts, alliances, and

economic pressures are intensifying and increasing the complexity of

digital defence strategies for organisations worldwide. Rory Stewart's

keynote, Shifting Sands: Geopolitics, Threat and the Future, will take

place on Wednesday, 4th June at 10:05 and will explore how the

geopolitical world order is evolving at an unprecedented pace. With

global instability at its highest in decades and as nations adapt to

new alliances and economic pressures, digital warfare tactics are

becoming increasingly sophisticated.

Building on this theme, Paul Chichester, will follow on the Keynote

Stage at 12:20 for The Cyber Cold War? Geopolitics Driving Cyber

Threat, offering insights from the frontline of UK national defence.

Chichester will share intelligence on the tactics evolving across

regions, the emergence of new international actors, and the

strategic targeting of UK infrastructure. The NCSC has seen

"nationally significant" cyber incidents double from September 2024

to May 2025 compared with the same period in the previous year.

Additionally, the NCSC received almost 2000 reports of cyberattacks

in 2024, of which 89 were considered nationally significant,

including 12 critical incidents.

Vertiv delivers AI-ready solutions for Polar data centre

Vertiv has been chosen by Polar as the primary supplier for its first

modular AI-ready data centre in Norway. Powered entirely by

hydroelectric energy, the Polar facility minimises its carbon footprint

whilst accommodating high-density, liquid-cooled environments of

up to 120kW per rack. The Vertiv solution is designed with N+1

redundancy across electrical and thermal systems, delivering the

resilience and reliability to support AI and accelerated computing

operations. Polar's mission is to create an industry-leading,

environmentally responsible infrastructure platform for their

customers to develop the future of AI.

Viktor Petik, senior vp, infrastructure solutions at Vertiv,

commented: "This collaboration showcases the strength of Vertiv's

modular approach, providing Polar with a high-density, AI-ready

infrastructure that combines rapid deployment with outstanding

energy efficiency. By leveraging factory-assembled infrastructure, we

overcome traditional on-site challenges and deliver a solution

tailored to Polar's evolving requirements."

EfficiencyIT Awarded Prestigious Royal Warrant

EfficiencyIT, a UK-founded specialist in data centres, IT, and critical

communications environments, has been awarded a Royal

Warrant of Appointment into the place and quality of Supplier of IT

Infrastructure and Services by His Majesty King Charles III -

recognising the company's exceptional service and commitment to

delivering sustainable IT infrastructure solutions to the Royal

Household. As a supplier of IT Infrastructure, maintenance and

modernisation services, EfficiencyIT joins a prestigious group of

companies recognised by His Majesty, King Charles III for their

excellence in service and craftsmanship, and their continued

dedication to infrastructure security, resiliency and sustainability.

Nick Ewing, MD of EfficiencyIT, said: "We are incredibly proud to be

recognised with a Royal Warrant of Appointment by His Majesty, The

King. This award is a testament to our team's dedication to providing

highly secure, resilient, sustainable IT solutions that support mission-critical

operations."

Unified networking and surveillance solutions for the UK

TP-Link is partnering with Mast Digital (UK) to deliver fully

integrated, easy-to-manage networking and security solutions to

businesses across the UK. Mast Digital (UK) has built a trusted

portfolio spanning CCTV & security systems, intruder alarms, access

control, intercoms, professional AV equipment, and aerial and

satellite solutions - now strengthened by the addition of TP-Link's

networking and security products, bringing greater performance,

integration, and value to its customers.

Mast Digital (UK) will now offer TP-Link's complete professional

portfolio, including VIGI surveillance - featuring the comprehensive

Insight Series of Bullet, Dome, Fisheye, and Turret cameras, along

with NVRs - and Omada networking solutions. All are underpinned

by Omada Central, TP-Link's powerful cloud-based platform that

enables remote, multi-site network and surveillance management,

significantly reducing operational complexity and cost.




PRODUCT REVIEW

ExaGrid S3 Storage

for Veeam


ExaGrid is a top choice for enterprise data

backup storage and disaster recovery as

its Tiered Backup Storage family of EX

appliances offer a unique set of data

protection capabilities. Furthermore,

integration with all the leading backup

applications allows enterprises to retain their

existing backup infrastructure.

This makes ExaGrid incredibly appealing for

organisations using Veeam's Data Platform, as

ExaGrid supports an integrated Veeam Data

Mover for ingest performance and security,

Veeam Fast Clone for synthetic full backups

and Scale-Out Backup Repository (SOBR) for

scalability. Even better, ExaGrid now supports

Veeam writing to ExaGrid Tiered Backup

Storage as an S3 object store target, and it

brings Veeam's backup for Microsoft 365

(MS365) solution into play as you back up

M365 data directly to ExaGrid's immutable

on-premises storage.

Deploying ExaGrid in a Veeam data

protection strategy is a breeze as

administrators simply direct their backup jobs

to the ExaGrid shared storage repository. Any

of ExaGrid's EX models can be mixed together

in a single scale-out system comprising up to

32 appliances and adding new ones increases

compute power and network bandwidth in line

with capacity.

ExaGrid's unique Landing Zone disk cache receives data directly from the backup application in an undeduplicated form, thus ensuring high-performance backups as well as support for Veeam's advanced features such as SureBackup, Data Lab, Instant VM Recovery, copy and replicate.

Benefits of the Landing Zone are in

abundance as it allows ExaGrid to claim an

unprecedented ingest rate of up to 516TB/hr

for a 6PB full backup. Data is also written to a

Repository Tier on the appliances during

backup operations where it is further

deduplicated, taking Veeam's 2:1 to a 14:1

data reduction, greatly reducing the storage

required and saving on storage costs. The

Landing Zone accelerates restore operations

by up to 20 times over deduplication

appliances, as the data does not require

rehydration.
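To put those ratios into perspective, here is a purely illustrative calculation - hypothetical backup sizes rather than ExaGrid benchmark data - showing how much repository capacity the same retained backups would occupy at each level of data reduction:

# Illustrative arithmetic only: hypothetical figures, not vendor data
protected_tb = 100                      # size of the retained backup data (TB)
for label, ratio in (("2:1 reduction", 2), ("14:1 reduction", 14)):
    stored_tb = protected_tb / ratio    # capacity actually consumed on disk
    print(f"{label}: {protected_tb} TB of backups needs ~{stored_tb:.1f} TB of storage")
# 2:1 -> ~50.0 TB on disk; 14:1 -> ~7.1 TB on disk for the same retained data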

Creating a Veeam S3 Compatible Object

Storage Repository using ExaGrid is simple as

you choose this option from the Veeam Backup & Replication console's Backup Infrastructure page.

Following the Veeam wizard, you enter the

secure URL to the ExaGrid appliance, provide

its access and secret keys, select a bucket,

create an ExaGrid shared folder, enable

immutability and you're done.
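For readers who like to sanity-check the target outside the wizard, the sketch below uses the generic boto3 S3 client to confirm that an S3-compatible endpoint answers and that Object Lock (the immutability the wizard enables) is active on the chosen bucket. The endpoint URL, credentials and bucket name are placeholders, not ExaGrid or Veeam defaults:

import boto3

# Placeholder endpoint and credentials for an S3-compatible target
s3 = boto3.client(
    "s3",
    endpoint_url="https://exagrid.example.local",   # hypothetical appliance URL
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

print(s3.list_buckets()["Buckets"])                 # endpoint reachable, keys valid
lock = s3.get_object_lock_configuration(Bucket="veeam-backups")
print(lock["ObjectLockConfiguration"]["ObjectLockEnabled"])  # should report "Enabled"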

The process for creating a Veeam MS365

repository is just as easy although slightly

different as this object storage cannot be part

of an ExaGrid SOBR. From the Veeam MS365

Backup console, you run through the same

process to define S3 compatible storage on

the ExaGrid appliance and your assigned

ExaGrid customer representative will advise on

the optimum number of shares, S3 object

stores, repositories and backup jobs required.

Enterprises worried about ransomware

attacks can rest easy with ExaGrid on their side

as it uses S3 object locking to lock all data for

the period specified in your Veeam backup

jobs. S3 data is locked in the Landing Zone

and Repository Tier and ExaGrid's Retention

Time-Lock for Ransomware Recovery (RTL)

feature effectively double-locks the repository.

RTL is a smart feature as it places an air gap

between ExaGrid's network-facing tier and

non-network-facing tier and includes delayed

deletes, so data is not immediately deleted

during an attack. ExaGrid's RTL works with

Veeam's S3 immutability to provide additional

ransomware recovery.
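The locking mechanism itself is standard S3 Object Lock. As a hedged illustration - generic boto3 calls, not Veeam's or ExaGrid's internal implementation - each object is written with a retain-until date, so it cannot be deleted or overwritten until the retention period set in the backup job has expired:

import boto3
from datetime import datetime, timedelta, timezone

# Placeholder endpoint, credentials, bucket and object names
s3 = boto3.client("s3", endpoint_url="https://exagrid.example.local",
                  aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")

s3.put_object(
    Bucket="veeam-backups",
    Key="restore-points/vm01-2025-05-22.vbk",       # hypothetical backup object
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",                    # retention cannot be shortened or removed
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)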

This ExaGrid and Veeam partnership is the

perfect solution for enterprises looking to

streamline all data protection processes,

reduce their backup windows, have full

scalability, and provide essential ransomware

recovery. Furthermore, the seamless integration

with Veeam's Data Platform allows enterprises

to use Veeam's Backup & Replication and

Backup for Microsoft 365 solutions and secure

them both to a single ExaGrid deployment. NC

Product: ExaGrid S3 Storage for Veeam

Supplier: ExaGrid

Web site: www.exagrid.com

Tel: +44 (0) 1189 497 051



EVENT ORGANISERS:

Do you have something coming up that may

interest readers of Network Computing?

Contact dave.bonner@btc.co.uk

FORTHCOMING EVENTS 2025

16-17 SEPT: DCD COMPUTE, Business Design Centre, London
www.datacenterdynamics.com/en/dcdconnect-compute/london/2025/

24-25 SEPT: DIGIGOV EXPO 2025, ExCeL, London
www.digital-government.co.uk

1-2 OCT: DTX LONDON, ExCeL, London
www.dtxevents.io/london


OPINION: WI-FI 7

7th HEAVEN?

HUGH SIMPSON, EMEA MARKETING DEVELOPMENT MANAGER,

ZYXEL NETWORKS, EXPLAINS WHY UPGRADING TO WI-FI 7 THIS

YEAR COULD PROVE TO BE ATTRACTIVE FOR A LOT OF

ORGANISATIONS

One of the challenges with

technology is that there is never

a perfect time to upgrade. If

what you have today does the job for you

- and still works reliably - there is no

reason to change. There may be

occasions when a change is forced upon

you however - many businesses and users

will need to upgrade their laptops before

the end of support for Windows 10 in

October 2025, for example.

But in most situations, it will be up to

you to decide when to make the leap to

the latest iteration of a new technology.

Today, most organisations and homes will

have Wi-Fi 4 (802.11n) or Wi-Fi 5

(802.11ac) routers and access points

installed. These devices will be working

perfectly well and providing decent

connectivity speeds - at least for most

users, most of the time.

TRIED AND TRUSTED

This is despite faster technologies such as

Wi-Fi 6 / 6E (802.11ax) and Wi-Fi 7

(802.11be) being widely available. Wi-Fi

6 / 6E has been around since 2021 and

is now a thoroughly tried, tested, and

trusted technology. This is the wireless

standard that most people have been

moving onto for the last two years. It

offers higher speeds and better coverage

for moderately busy environments and it's




certainly affordable. All but the very

cheapest new laptops and smartphones

will support Wi-Fi 6.

Wi-Fi 7 arrived in 2024, delivering

even more bandwidth and assured high-speed

throughput, even for high-density

areas such as conference centres. Plenty

of Wi-Fi 7 devices are already available. Indeed, Zyxel Networks has been one of

the pacesetters with this technology and

we now have seven Wi-Fi 7 access

points on the market, to suit every kind

of need and budget.

While the arrival of Wi-Fi 6 and 6E

and many more client devices that can

make use of this standard may have

gone largely unnoticed by all those

businesses still using older wireless

technologies, it's hard to imagine that

they've been able to avoid hearing at

least something about Wi-Fi 7. We've

certainly been doing our best to make

sure that partners and their customers

know all about its potential benefits.

THE RIGHT TIME

It's the right time to be making positive

noises around wireless upgrades. Yes,

these older technologies may still be doing a job for many

businesses, but it is 14 years since the

Wi-Fi 4 standard was published and Wi-

Fi 5 has been around for 11 years.

Many of the client devices connecting to

these older access points and routers

will have Wi-Fi 6 capability, so users

won't be getting the kind of Wi-Fi

performance they should be getting.

Both businesses and users are starting

to sense that an upgrade to their

wireless networking is now long

overdue. Many of them have been

holding back since the COVID

pandemic and had to prioritise

technology spending in other areas.

They are now looking once again at

their wider infrastructure to make sure

that staff can make the best use of

cloud services, video conferencing and

collaboration, and the richer content

that is being generated by AI, and to meet

the heightened demands placed on the

network by the increased volume of

data being generated and the need to

keep cybersecurity protection apps

running constantly in the background.

LATENT DEMAND

This latent demand for wireless

networking updates was gathering pace

throughout 2024 and at Zyxel Networks

we saw sales of Wi-Fi 6 / 6E products

escalating rapidly. As I mentioned

earlier, this is now a trusted technology

and for many businesses and domestic

users it will deliver robust performance

and improved support in busy office

environments or cafes, for example,

where users are coming and going all

the time and in which densities may vary

from a handful to scores of connections

throughout the day.

We've continued to see strong sales of

Wi-Fi 6 / 6E devices in the early part of

this year and we expect this standard to

be the best-seller in 2025. At the same

time, we are also seeing Wi-Fi 7 sales

climbing and we think many businesses

will now start to see this as the best

investment option when they upgrade

their Wi-Fi.

THREE REASONS

There are three main reasons why you

might choose Wi-Fi 7 over Wi-Fi 6 / 6E.

First, where you do really need the extra

throughput. As I mentioned earlier,

bandwidth demands continue to grow

and as that happens, you need to

provide more capacity right across the

network. We're seeing strong and

growing demand for multi-gig and

higher speed aggregation switches and

many organisations now want to run

10G out to client endpoints on the wired

infrastructure. If you want to match that

with Wi-Fi, you really do need Wi-Fi 7.

Second, more client devices that

support Wi-Fi 7 are now coming to

market. The significant change here is in

the laptop specifications and with most

new devices now supporting Wi-Fi 7, it

makes sense to support connections at

the highest possible speed and

throughput.

Third, it makes financial and business

sense to invest in the very latest

technology as this will give you better

future-proofing and return on investment.

Yes, you could try to hang on until the

next iteration of the standard arrives - but

Wi-Fi 8 is not expected until 2028, and

even then, it will need time to be fully

tested and ratified, and for products to

come to market.

GOOD RETURN ON INVESTMENT

Wi-Fi 7 meanwhile is growing in stature,

and is becoming more widely

supported, available and used. And

while there is still a gap between the

cost of Wi-Fi 6 / 6E and Wi-Fi 7, the

latter is also getting more affordable.

Zyxel Networks' Wi-Fi 7 access points

are available at prices that compare

favourably with those asked by other

vendors for Wi-Fi 6 / 6E devices.

For all these reasons it may well make

sense for many organisations to go

straight to Wi-Fi 7 in 2025, rather than

transition via Wi-Fi 6 / 6E. The

technology that they invest in now will

almost certainly be in use for the next five

years, and as most updates can be

applied through our Nebula cloud

management platform, there is no

question they will get a good return on

their investment in Wi-Fi 7 technology. NC



OPINION: BACKUP AND RECOVERY

BACKUP AND RECOVERY: WHY BUSINESSES MUST BE BETTER PREPARED

ARE ORGANISATIONS DOING ENOUGH TO BACKUP AND PROTECT THEIR DATA? FRANK

DEBENEDETTO, GTM GENERAL MANAGER, BACKUP SUITE AT KASEYA, TALKS US THROUGH THE

FINDINGS OF KASEYA'S STATE OF BACKUP AND RECOVERY REPORT 2025

Most organisations are well aware of the

need for a robust backup and

recovery strategy to protect their data

against data loss. But, as a recent Kaseya

survey of more than 3,000 IT professionals

worldwide found, around a third are worried

about how well their organisation is prepared

for cyber threats and other incidents.

According to the State of Backup and Recovery

Report 2025: Navigating the Future of Data

Protection, only 40% of respondents currently

feel confident in their backup systems - and

many are experiencing challenges when it

comes to implementing strong data protection

strategies, from security concerns to time-consuming

backup processes.

DISSATISFACTION WITH BACKUP

TOOLS

Across surveyed organisations, over half of

workloads and applications are currently run in

the public cloud, and this percentage is

expected to grow further. Most businesses

therefore use a multi-cloud strategy in a bid to

enhance their backup resilience and flexibility.

However, managing those diverse IT

environments can be complex.

Many businesses reported challenges in

optimising their backup processes and are

struggling with time-consuming management

tasks. For example, over half of respondents

said their IT teams already spend more than two

hours per day or more than 10 hours per week

monitoring, managing and troubleshooting

backups. This issue is only exacerbated when

using multiple backup tools.

Moreover, while organisations currently rely

on an average of more than three backup

solutions, this does not give them the

confidence that they are well protected. A third

of respondents said their company's backup

and recovery situation was causing them

nightmares, with another 30% worrying

that their organisation doesn't have an

adequate backup and recovery

solution in place.

In fact, over half of respondents

plan to switch to a different primary

backup solution in the coming year,

highlighting gaps in performance,

reliability and ease of use.

CONCERNS ABOUT SECURITY ISSUES

Strong security measures are critical for

protecting backups and addressing potential

vulnerabilities, especially with the rapid growth

in sensitive data that is being handled and

stored by businesses. While most organisations

(75%) have policies and controls in place to

secure workloads across public cloud,

endpoints, SaaS apps and servers, the report

found that a quarter still lack essential

safeguards. As businesses move towards

increasingly hybrid environments, this gap can

result in significant security risks.

Additional vulnerabilities arise when

organisations don't implement strong access

controls. Securing sensitive account credentials

is a key aspect of backup system integrity, but

according to the report, the methods used by

businesses to achieve this vary widely. Only

around one-third use dedicated password

managers, while others rely on document

storage solutions (22%), IT documentation

software (19%) - or even pen and paper (12%).

Worryingly, 5% admitted they do not manage

credentials at all.

INFREQUENT TESTING RESULTS IN A

LACK OF PREPAREDNESS

Budget constraints and a lack of resources force

businesses to compromise on the robustness of

their backup strategies as well as the frequency

of testing. Only 15% of respondents carry out

daily backup tests. Around a quarter test their

backups weekly and another 24% test once a

month, running a significant risk of not being

able to recover in the event of a disaster.

In addition, only around 1 in 10 businesses

perform daily disaster recovery tests, with many




organisations only managing much longer

testing cycles - 21% quarterly and 13%

annually. Even more concerning is that 12%

of businesses test their recovery capabilities on

an ad-hoc basis or not at all, which could

make them highly susceptible to business-crippling

outages.

This sporadic testing means many

organisations are ill-prepared for downtime

events and unable to restore operations fast.

While 60% of respondents were confident

that they would be able to recover in under a

day, only 35% could do so in reality when hit

by an on-premises outage in the past year.

Recovery times for data stored in SaaS

applications and the cloud are also slow.

Only around 40% of respondents believe they

can recover lost SaaS data in a matter of

hours, with others needing days or weeks to

restore it (35%). And an alarming 40% of

respondents stated they would need days or

weeks to recover public cloud data.

PLANNING AND TESTING ARE KEY

As data protection becomes ever more

business-critical and more complex,

organisations will have to adopt more

reliable strategies and systems to safeguard

data across on-premises, cloud and SaaS

platforms. Importantly, any data protection

strategy must include a concrete plan for

scalability, ensuring it can evolve to

accommodate future technologies, new

workloads and growing storage needs.

To start with, organisations should pinpoint

their most critical data and applications;

ensuring that these are protected and

recoverable is a priority. To minimise

downtime and data loss during incidents,

Recovery Time Objectives (RTO) and

Recovery Point Objectives (RPO) must be

clearly defined and aligned with business

continuity plans.
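As a minimal sketch of what 'clearly defined' can mean in practice - using hypothetical targets and timestamps rather than figures from the report - RPO and RTO become simple checks that can be run after every backup cycle and recovery test:

from datetime import datetime, timedelta

rpo = timedelta(hours=4)    # target: maximum tolerable data loss
rto = timedelta(hours=2)    # target: maximum tolerable downtime

last_successful_backup = datetime(2025, 5, 22, 6, 0)    # hypothetical values
now = datetime(2025, 5, 22, 9, 30)
measured_restore_time = timedelta(hours=3)              # from the last recovery test

print("RPO met:", now - last_successful_backup <= rpo)  # True: 3.5h of exposure vs 4h target
print("RTO met:", measured_restore_time <= rto)         # False: the test took 3h vs 2h target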

Next, businesses should implement

consistent backup policies across all their on-premises,

cloud and SaaS environments -

including regular updates to reflect ongoing

changes in technology, regulations, business

requirements and business priorities.

Multilayered security measures with strict

access controls, reinforced by staff training,

are essential to help protect the backup

systems themselves. In addition, the backup

infrastructure must include a high level of

ransomware protection, which can be

achieved by using immutable and air-gapped

storage to ensure data integrity in the event of

an attack. Regularly auditing backup systems

with periodic security assessments can identify

and address potential vulnerabilities.

Organisations will only know that their

backups are working if they test them

repeatedly and regularly. Therefore,

maintaining a regular testing schedule is the

only way to guarantee data integrity - and the

ability to quickly recover should disaster strike.

Finally, achieving strong data protection is a

long-term commitment that requires

continuous investment. Smart tools including

behavioural analytics and machine learning

can help improve efficiency and reliability by

predicting failures, optimising backup

schedules and automating recovery

processes. Businesses that keep innovating

and leveraging advanced technologies will

find it easier to keep up with their fast-changing

backup needs. NC



OPINION: DATA CENTRES

PIVOTING DATA CENTRE DESIGN AROUND STORAGE

WENDELL WENJEN, DIRECTOR OF STORAGE MARKET

DEVELOPMENT, SUPERMICRO, EXPLAINS HOW STORAGE IS OFTEN

OVERLOOKED WHEN DESIGNING DATA CENTRES, AND WHY THAT

URGENTLY NEEDS TO CHANGE

AI prevalence is a key topic when

exploring data centre workloads. It's

clear that the technology has

permeated the strategies behind how

businesses set goals and operate, but there are

core storage limitations that prevent success.

The industries affected by this include high

performance computing (HPC),

hyperconverged infrastructure (HCI), media

and entertainment production, and

hyperscalers. However, the strategies to

mitigate such limitations can vary. This article

will explore these solutions and guide

businesses on the best methods to integrate

robust storage architecture into their data

centre design.

SOFTWARE-DEFINED STORAGE VS

PROPRIETARY SOLUTIONS

Having a software-defined storage approach

has significant benefits over more traditional

and proprietary storage applications. A

proprietary approach, while popular for many

years, has limitations in high cost, vendor

lock-in and limited solution availability. This is

specifically due to the need to use specific

software in tandem with the custom hardware

in play. Software-oriented solutions navigate

this limitation with storage-optimised servers

hosting purpose-built management software, allowing for specialised solutions for a variety

of workloads.

These solutions combine scale-out parallel

file systems, object storage systems and

scale-up networks with the right

combination of workload-optimised solid state and hard disk drives for bespoke and

more successful delivery. Additionally, by

integrating more of the new networking and

system technologies such as RoCE (RDMA

over Converged Ethernet), GPUDirect

Storage (GDS), CXL (Compute eXpress Link)

and PCIe Gen 5, data centres can tap into

additional performance benefits and new

functional capabilities.

FINDING THE RIGHT FIT

Businesses can only find the right storage mix

through experimentation. As a result, robust

solution engineering, testing and qualification

matters. Whilst software-defined methods

theoretically can use any combination of

system hardware, storage media, software

and networking, the reality is much different.

In practice the specific combination must be

tested and quantified to ensure operation in

the field is reliable and consistent. This is

something that the overarching solution

architect and integrator needs to consider

when allocating resources.

Additionally, the system architect needs to

make sure they have their finger on the

industry's pulse. Staying on top of rapid

innovations across system architecture, CPU

design, SSD and HDD design and capacity,

networking and storage software is vital when

selecting the right combination of hardware

and software. This is often done jointly with

the technology partners, including the

software provider, to achieve optimal

performance and cost metrics.

HAVING AN AI MINDSET

Given the pervasiveness of AI across data

centres, every storage solution needs to

factor AI into operations even in instances

where it is not the primary workload. For

example, while parallel file systems such as

WEKA and DDN have been designed for

HPC and AI training workloads, enterprise-focused

HCI solutions from Nutanix support

AI inference workloads with GPT-in-a-Box functionality. Object storage software

from DDN, Cloudian, Quantum and

OSNexus is used either as a capacity tier for

a parallel file system or directly used for AI

training workloads.

WHAT THE FUTURE HAS IN STORE

Data centre storage will always be a key

discussion point for how data centres can

overcome their limitations and bridge the

gap into more demanding workloads. The

implementation of customer-specific

enterprise AI models and applications is a

critical milestone for general AI adoption

and aggregation, with the common factor

being their enterprise data backbone. To

ease this, we expect businesses will look to

the adoption of data lakes and lakehouses

for aggregating data and parsing

through it.

Finally, as the sources of enterprise data

continue to expand, and the need to update

AI models in real-time via event-driven

frameworks increases, more automated data

orchestration through sophisticated

management software will be critical. NC



PRODUCT REVIEW

LiveAction

LiveNX 25.1


The exponential growth and complexity of

enterprise networks are presenting network

administrators with new challenges. To

effectively manage and monitor network

performance they need a solution that provides

total visibility, and yet many products are merely

a disparate collection of point solutions with little

correlation across them, making it difficult to

accurately pinpoint the root cause of problems.

Recently acquired by network optimisation

experts BlueCat Networks, LiveAction's LiveNX

stands out as this network monitoring, analytics

and visualisation platform presents a single

pane of glass for end-to-end visibility across on-premises,

hybrid, SD-WAN and cloud

environments. Nothing is beyond its scope as it

collects data from multiple sources including

SNMP, wireless, applications plus devices and

supports multi-vendor networks such as Cisco,

Palo Alto, Fortinet, AWS, Azure and many more.

LiveNX can be virtualised on VMware, KVM

or Hyper-V hosts, in the cloud on AWS, Azure

or Google Cloud, or delivered as a hardware

appliance. Its distributed architecture makes it

highly flexible as collector nodes can be

located in remote locations to offload data

processing and reduce WAN traffic.

The well-designed Operations Dashboard

offers discovery services for easy device

onboarding which can be scheduled to run

regularly. On completion, discovered devices

can be placed in groups to visually and

logically organise them, allowing relevant

critical information to be easily accessed.

Smart dashboards present high-level network

health views, and widgets make it easy for NetOps and SecOps teams to customise them to

their requirements. You can see the status of all

monitored sites along with device availability

and utilisation, check on the top WAN

applications by bandwidth usage and apply

filters to refine the information.

Using advanced flow analytics, LiveNX

provides hop-by-hop visualisations that

dynamically show traffic paths with QoS

(quality of service) monitoring and intelligent

overlays. This allows support staff to isolate

and remediate issues such as bottlenecks or

misrouted traffic in minutes, thus ensuring

minor problems don't become major outages.

The WAN dashboard presents overviews of

sites, applications and service providers

including breakdowns of capacity, the top

alerts plus application group bandwidth and its

utilisation monitoring provides predictions of

future usage. LiveNX is a great choice for

enterprises migrating from MPLS circuits to SD-

WANs as its Geo Topology map shows how

traffic is being steered across the network and

moving from one service provider to another -

ideal for presenting support teams with a

global heads-up wallboard display.

Application-aware performance monitoring

down to Layer 7 provides deep visibility into

business-critical apps such as Microsoft 365,

Zoom and Salesforce. Alert smokescreens for all network issues are also avoided as multi-tier

alerting and an intelligent correlation

engine ensure support staff focus on the

events that matter. LiveNX uses AI-powered

anomaly detection to baseline normal

behaviour patterns, detect anomalies in real-time

and provide timely alerts based on

predictive performance insights. Even better,

the LiveAssist conversational UX service will

answer your questions to help interpret and act

on critical network issues.

LiveAction's LiveWire appliances provide high

fidelity packet capture plus deep packet

inspection (DPI) and their seamless integration

with LiveNX allows it to deliver precise root

cause analysis. There's more as the LiveNX

Insight module uses machine learning to

identify network anomalies that have the

potential to cause problems.

LiveAction's LiveNX takes all the pain points

out of managing complex enterprise networks

as it seamlessly integrates its myriad network

and application performance monitoring

functions into a single, unified platform. Its

smart interface presents a global end-to-end

view of your entire network making it easy to

identify and troubleshoot problems while its

simple architecture allows it to scale easily with

demand now and well into the future. NC

Product: LiveNX 25.1

Supplier: LiveAction

Web site: www.liveaction.com

Sales: +44 (0)800 098 8040



OPINION: INTERNET MONITORING

AVOIDING OUTAGES WITH ADVANCED MONITORING

GREG COLLINS, PRODUCT

MARKETING MANAGER AT

PROGRESS, EXPLAINS WHY

ENTERPRISES NEED ADVANCED

INTERNET CONNECTION

MONITORING FOR EACH OF

THEIR REMOTE SITES

It's an all-too-familiar scenario for network

administrators - stopping work on a

Friday with everything running smoothly,

only to return Monday to a nightmare of

system-wide outages with alerts flashing and

a flood of user messages. Servers being

down, even momentarily, can cause

disruption, costly IT support time, downtime

and loss of business.

Internet outages give network administrators headaches, and there are significant challenges in identifying these issues as they

happen. Organisations need a 24/7 service

that monitors their internet connection from

anywhere, alerting them via email or SMS

because speed of response is critical.

IT professionals need real-time visibility

into network traffic to detect and analyse

ongoing issues, minimise downtime and

facilitate efficient responses to

performance bottlenecks. This is where

Internet Connection Monitoring comes

in, notifying administrators of outages or

instability within their IT infrastructure

and enabling them to take swift

corrective action.
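A minimal sketch of the idea, assuming a hypothetical probe host that sits outside the monitored link: it checks a public endpoint at a fixed interval and hands any failure to an alert routine, where an email or SMS gateway call would normally go:

import time
import urllib.request

def link_is_up(url="https://www.example.com", timeout=5):
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except OSError:                      # any network failure is treated as an outage
        return False

def alert(message):
    print("ALERT:", message)             # placeholder for an email/SMS notification

while True:
    if not link_is_up():
        alert("Internet connection appears to be down")
    time.sleep(60)                       # probe once a minute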

Continuous monitoring of the entire

network estate has benefits beyond

reducing downtime. It can maintain

productivity without significant

disruptions and enhance operational

resilience by avoiding delayed

notifications during outages.




THE CHALLENGES FACING

NETWORK MANAGERS

A significant challenge network

administrators face when countering

internet outages is their existing alerts and

notifications, which cause confusion and

delays. Firstly, on-premises notification

systems don't work independently of the

primary internet or email services, leading

to delayed notifications and longer outage

times. An administrator would expect an

immediate notification when a problem

occurs. Yet, in the scenario of a primary

internet or email server outage, this

notification may come later or not at all.

This can extend the outage time, as the

service team is oblivious to the issue.

Dial-up and GSM modems aren't viable

for most customers due to their complexity and the additional infrastructure required: a literal dial-up or GSM modem. The main problem is that most service providers have deprecated the technology used for this, the TAP protocol for sending text messages over dial-up.

CURRENT IT INFRASTRUCTURE

MONITORING MARKET

EXPECTATIONS

As tech estates grow and become more

complex and network threats escalate in

prevalence and impact, network

administrators and their organisations are

raising their expectations. They now expect

IT infrastructure monitoring platforms to

deliver a centralised management system

for remote sites so they can identify any

issues centrally from a single interface.

The ability to generate contextualised

insights and analytics from their data is

also essential. Trends and patterns can be

used to focus on specific vulnerabilities

that might cause problems. An easy-to-use

interface is essential to make the

technology equally accessible to tech and

non-tech stakeholders.

With the budgetary challenges facing

tech teams, they can't afford to overlook a flexible licence model that avoids the need to add more bandwidth, as monthly ISP bandwidth charges are costly. With the ability to drill down to identify the sources

and destinations of their internet traffic,

the applications consuming bandwidth

and the users of those applications,

network managers can maintain their

business-critical applications and have the

necessary bandwidth.

THE IMPORTANCE OF NETWORK

TRAFFIC ANALYSIS

Network Traffic Analysis (NTA) is part of an

IT Infrastructure Monitoring solution, which

elevates network admins' understanding of

their networks to a superior level across

on-premises, cloud or hybrid estates. NTA

has three key features that keep networks

safe and running smoothly:

1. Monitor Network Traffic

It is essential to have a solution that

collects network traffic and bandwidth

usage data from any flow-enabled device

on the network. It should support Cisco's

NetFlow and NetFlow-Lite as well as NSEL

protocols, J-Flow, sFlow and IPFIX. An

enterprise-level network traffic monitoring

solution, like Progress WhatsUp Gold's

Network Traffic Analysis Plus (NTA+),

should help network pros reduce troubleshooting time and determine the root cause of network and application performance issues (a minimal flow-collector sketch follows this list).

2. Receive Alerts

Specialist solutions, such as the Progress

WhatsUp Gold product, provide

threshold-based alerting to help address

bandwidth problems before they impact

users, applications and the business. With

a direct line in from each remote site to

the cloud, for instance AWS, Azure and Google Cloud networks, along with hybrid cloud

environments, network managers will

receive an instant email or SMS

notification of any internet outage or

instability. It also alerts administrators

when sender or receiver interface traffic exceeds utilisation thresholds, when connections fail, or when conversation partner counts exceed their thresholds.

3. Report

With built-in dashboards to view the

network flow data gathered by the

collector, organisations can easily identify

any network delays, outages,

configuration issues or performance

degradation. Network managers can

better understand traffic patterns and

identify any bandwidth hogs by filtering

each dashboard report by date, time or

traffic type.
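To make the flow protocols above a little more concrete, here is the minimal collector sketch referred to earlier - a generic example, not WhatsUp Gold's implementation - that listens on the usual NetFlow export port and unpacks the fixed 24-byte NetFlow v5 header from each datagram it receives:

import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))             # common NetFlow export port

while True:
    data, exporter = sock.recvfrom(65535)
    if len(data) < 24:
        continue                         # too short to hold a v5 header
    (version, count, uptime, secs, nsecs,
     seq, engine_type, engine_id, sampling) = struct.unpack("!HHIIIIBBH", data[:24])
    print(f"{exporter[0]}: NetFlow v{version}, {count} flow records, sequence {seq}")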

ADOPTING NEXT-LEVEL NETWORK

TRAFFIC VISIBILITY

In a modern digital business environment,

NetOps professionals need to be able to

understand the health, uptime status and

utilisation of IT assets like servers or

network components. With complete

transparency across the entire network

path, they have visibility into problems

beyond their local network.

The rise of cloud adoption also means

that analysing bandwidth consumption

across hybrid cloud networks is essential.

With the most advanced continuous

network monitoring tools, organisations

will be able to track real-time user

experience and proactively manage

application performance.

With the ability to leverage historical

data potentially going back years,

network administrators can determine

network traffic trends that will forecast

future expansion and maintain smooth

company operations. NC



OPINION: AI

WELCOMING AI

TAL BARMEIR, CEO AND CO-FOUNDER, BLINQIO, ON HOW UK WORKPLACES ARE ADAPTING TO

ARTIFICIAL INTELLIGENCE

Artificial Intelligence is reshaping

industries globally, particularly in the

UK. In my discussions with professionals

there has been a noticeable shift towards AI,

influenced by its potential to streamline

operations and enhance innovative growth.

Here, I'll share some insights into why UK

employers and employees are becoming

increasingly receptive to AI, highlighting

unique perspectives from recent webinars with

test engineers.

AI IS EASING JOB SECURITY

CONCERNS

I think one of the most pressing concerns

among UK test engineers is the fear of job loss

due to AI integration. This is a valid concern

that echoes a broader apprehension about AI

across various sectors. However, I've noticed a

turning point where more professionals are

recognising AI's role as a support rather than a

substitute. By maintaining a 'human in the

loop,' companies can ensure a smoother

transition where AI enhances job roles and

efficiency without displacing the workforce.

AI'S DUAL IMPACT IN THE WORKPLACE

The implementation of AI in the UK workplace

presents both opportunities and challenges. AI

tools have proven effective in reducing often

arduous tasks and enhancing decision-making

through data-driven insights. Yet, the

integration process can be daunting,

necessitating new skills and adaptability

among employees. Effective communication

about these changes is essential, helping to set

clear expectations and mitigate concerns

about AI's role in daily activities.

GENERATIVE AI AND CREATIVE

EXPANSION

In my opinion, generative AI stands out for its

potential to revolutionise job roles and

enable creativity, particularly in fields like

software testing. Where repetitive tasks can

be automated, there's a significant

opportunity for employees to engage in more

creative and strategic endeavours. Recent

studies, including PwC's 2024 report,

highlights seventy three percent of users

believe these tools will unlock creative

potential at work, underscoring a positive shift

in perception towards generative AI.

THE SHIFT TOWARDS AI-FIRST

DEVELOPMENT

Observing the trends in software

development within the UK, there's a clear

movement towards an AI-first approach. This

strategy isn't just about incorporating AI but

fundamentally rethinking how software is

created and maintained. AI's ability to adapt

and learn from ongoing processes is

improving software reliability and

functionality, marking a significant

advancement in development practices.

NAVIGATING CHANGE AND

ENSURING INCLUSION

The integration of AI into UK workplaces

requires thoughtful change management

strategies, something employers are warming

up to. From the numerous webinars I've

hosted, it's clear that transitioning to AI-supported

roles involves not only technical

training but also reassurance and support. I

think that emphasising the importance of

humans in supervising and working alongside

AI can help alleviate these fears and

encourage a more welcoming approach to

technological changes.

REALISING AI'S ABILITY TO PERFORM

COMPLEX TESTING

The strategic advantages of AI in settings like

testing are becoming more evident. AI's ability

to perform complex tests with little human

intervention has not only cut down on time to

market but has also significantly enhanced the

quality of outcomes. As companies in the UK

leverage AI for testing, they're seeing real value

in faster, more accurate results, which in turn

supports broader business objectives.

As AI continues to permeate various aspects

of work life in the UK, the focus should remain

on education, clear communication, and the

continued involvement of human expertise to

guide AI integration. By addressing AI as a tool

for enhancement rather than replacement, UK

workplaces can navigate the future with

optimism and ensure that AI serves to augment

the workforce, leading to greater productivity

and innovation. NC



OPINION: AI SKILLS

BRIDGING THE AI SKILLS GAP

TARGETED DEVELOPMENT IS ESSENTIAL TO ENCOURAGE AI UPTAKE FROM NON-TECHNICAL

EMPLOYEES, ACCORDING TO M-FILES

Despite billions invested in AI solutions

worldwide, a major roadblock

remains: employees don't understand

how to use these tools effectively. Without

proper guidance, businesses risk stalled

adoption, wasted investment, and an inability

to realise AI's full potential. This hesitation is

hindering AI adoption and limiting its benefits

within organisations.

Recent research indicates that only 6% of

workers feel very comfortable utilising AI in

their roles. This stark statistic highlights a

significant skills gap that businesses must

address. A McKinsey Global Survey found that

while 85% of businesses have AI initiatives in

place, only 25% of employees feel they

understand how to apply them to their roles.

This gap stems from a lack of training,

uncertainty about AI's role, and concerns over

job security. Without a confident and

competent workforce, even the most

advanced AI strategies will struggle to deliver

meaningful impact.

According to Yohan Lobo, Senior Industry

Solutions Manager at M-Files, the key to

successful AI integration lies in a focused and

purposeful approach: "Instead of deploying AI

widely without clear objectives, organisations

must ensure that AI solutions serve a specific

purpose and align with employee needs." Key

considerations for businesses encouraging AI adoption are:

1. Clarifying AI's purpose

Clearly articulating why AI is being integrated

into a particular business area is essential.

Employees should understand the specific

challenges AI is addressing and the expected

outcomes, ensuring transparency and

alignment with business objectives.

2. Ensuring data quality and reliability

Trust in AI solutions is fundamental. If

employees doubt the accuracy or reliability of

AI-generated outputs, they are unlikely to

engage with the technology. Businesses must

ensure their AI models are built on high-quality,

relevant data and produce consistently

reliable results.

3. Driving AI adoption with employee buy-in

and champions

Successful AI adoption hinges on employee

enthusiasm rather than enforcement.

Organisations can foster this by showcasing

real-world benefits and success stories while

appointing AI champions - trusted team

members who advocate for the technology,

provide hands-on support, and address

concerns. These champions act as a bridge

between employees and leadership, ensuring a

confident, informed, and seamless transition to

AI-powered workflows.

4. Simplifying AI tools

Employees should not need technical expertise

to leverage AI effectively. The most successful AI

solutions are user-friendly, intuitive and fit seamlessly into daily work, delivering accurate results without requiring specialist knowledge.

Prioritising ease of use will accelerate adoption

and drive efficiency.

5. Maintaining clear AI policies

A structured AI governance framework is

crucial to ensuring employees understand the

organisation's stance on AI adoption. Clear

guidelines should outline ethical

considerations, data privacy policies, and the

intended scope of AI use.

"AI integration becomes a much easier

process when employees actively want this

technology instead of having it forced upon

them," says Yohan. "The key is to show

employees how AI enhances - not replaces -

their work. When they see real value, adoption

follows. Companies must ensure that AI tools

are intuitive, reliable, and demonstrably

beneficial to employees' daily tasks. Without

this, adoption will remain a challenge.

"Without a workforce that trusts and

understands AI, even the most sophisticated

tools will remain underutilised. Businesses need

to take a structured approach, ensuring AI

solutions are introduced with clear goals,

proper training, and employee support

mechanisms in place."

Yohan concluded: "Ultimately, AI is only as

effective as the workforce that uses it. Even the

most advanced solutions will fall short if

employees are not fully convinced of their value.

Businesses should conduct an AI-readiness

assessment to identify skill gaps and ensure their

workforce is equipped for success. By focusing

on education, clarity, and usability,

organisations can foster widespread AI

adoption and unlock its

full potential." NC



OPINION: THE SKILLS GAP

A GAP IN THE CLOUD

SAM WOODCOCK, SENIOR DIRECTOR OF CLOUD STRATEGY AND ENABLEMENT AT 11:11

SYSTEMS, OUTLINES THE ADVANTAGES OF PARTNERING TO OVERCOME THE TECHNOLOGY

SKILLS GAP IN CLOUD COMPUTING

For organisations that are always trying

to leverage the latest technology to gain

an edge over their competitors, utilising

public cloud computing is at the top of the

list for most decision-makers. The scalability

and cost-effectiveness, along with businesses

not needing to invest in traditional

infrastructure and having it managed by a

third party, offer a myriad of benefits for

companies in all industries.

However, migrating an entire organisation's

data and workflows to a public cloud is a

daunting and complicated task, and it is

difficult to even know where to begin. This is

also heavily impacted by the technology skills

gap, an issue which the industry is facing as a

whole and has been a much-discussed

problem over the past few years. Essentially,

the number of skilled workers does not meet

the level required in the industry, leading to

stifled business growth. Areas that require

specialised and niche knowledge, like

cybersecurity and cloud migration, are

particularly affected.

Obviously, the technology skills gap is not an

issue that any one organisation can solve, but

for companies that want to migrate their data

to a public cloud and lack the expertise to do

so, partnering with a specialist will help close

the skills gap.

DATA SECURITY AND COMPLIANCE

One of the most pressing concerns for

businesses when considering cloud migration

is ensuring the security of mission-critical

data. Unsecured networks can lead to data

breaches, whether through a malicious

attack from a threat actor, or a genuine

mistake by an employee. A data breach can

have untold consequences on a business, from loss of revenue incurred from the cost

to fix the issue and associated downtime, to

a loss of customer and partner trust.

Therefore, it is no surprise that decision-makers

might be apprehensive about taking

their data from an internal server and

migrating it to a public cloud.

Additionally, many businesses must adhere to

strict industry standards surrounding the

safeguarding of their data, particularly in

heavily regulated sectors such as healthcare

and finance. Businesses must understand these

regulations, including DORA, NIS 2, and

GDPR, and how they apply when considering

cloud migration. This can become even more

complicated for global businesses, as different

territories often have their own unique

standards and requirements.

To address these risks, working with a cloud

service provider to understand how to

configure the chosen public cloud will ensure

the correct cybersecurity protocols are in place


to protect organisational data. Service

providers can lend the specific expertise

needed to achieve this, especially as most

organisations are unlikely to have this

knowledge as part of their in-house IT team.

An external cloud provider will also be

familiar with any relevant industry

regulations, ensuring that the cloud platform

is compliant.

LEGACY SYSTEMS INTEGRATION

Another key issue that can cause decision-makers

to be hesitant about migrating key

workflows to the cloud is integrating legacy

systems into the new network. A lot of vital

applications used by businesses pre-date

cloud technology, and as such are not

designed to operate in a cloud environment.

These systems, which in many cases are

key to the function of the business, may

require substantial modification to ensure

compatibility with the new environment.

Furthermore, migrating these applications

could lead to downtime if not done correctly.

Working with a cloud partner is key to

ensuring any legacy applications are

correctly configured for a new cloud

environment. The cloud service provider is

able to fully audit existing IT infrastructure

and identify which systems can be migrated

easily and which will need to be modified,

as well as know how they will need to be

reconfigured. A cloud partner can also

provide a plan of the best way to enact the

migration to minimise downtime and make

sure the new cloud environment is tailored

to the organisation's needs.

COST MANAGEMENT

Cost-effectiveness is one of the key benefits

of cloud computing, and one of the main

reasons the C-suite is interested in utilising it.

The fact that companies only pay for the

storage they use, and do not have to invest

in infrastructure, can represent a significant

decrease in costs in the long term.

In the short term, however, and especially

when it comes to migration, businesses can

incur unexpected expenses. This can be due

to several factors, including inadequate

planning, unforeseen disruptions, or the

need for additional resources.

Another key issue is understanding public

cloud pricing structures, as these can be

extremely complex and vary from provider to

provider. Migrating to the cloud without

knowing what the business needs to pay for

can lead to overspending on redundant or

underutilised services.

This is another area where making use of a

cloud partner can lead to a much smoother

migration process. A cloud partner will have

a deep understanding of the pricing

structures of each public cloud and

following a thorough cost analysis of the

migration plan, will be able to ensure the

business is only paying for the services that

are needed. The partner will be able to

identify pre-migration usage, project future

needs, and choose the appropriate service

model for the organisation.
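To illustrate the kind of cost modelling a partner would run, here is a minimal Python sketch that projects monthly spend from current storage and egress usage; the unit prices and growth rate are placeholders rather than real provider pricing.

```python
# Illustrative only: the unit prices and growth rate below are placeholders,
# not real public cloud pricing.
MONTHLY_GROWTH = 0.03          # assumed 3% month-on-month data growth

def project_monthly_cost(storage_gb, egress_gb, price_per_gb_storage,
                         price_per_gb_egress, months=12):
    """Return a simple month-by-month cost projection."""
    costs = []
    for month in range(1, months + 1):
        cost = (storage_gb * price_per_gb_storage
                + egress_gb * price_per_gb_egress)
        costs.append((month, round(cost, 2)))
        storage_gb *= 1 + MONTHLY_GROWTH
        egress_gb *= 1 + MONTHLY_GROWTH
    return costs

if __name__ == "__main__":
    for month, cost in project_monthly_cost(5_000, 800, 0.02, 0.08):
        print(f"Month {month:2d}: £{cost:,.2f}")
```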

OPTIMISING AND MONITORING

PERFORMANCE

While the actual migration itself is the

most complicated part of the process, it is

vital that businesses take the correct post-migration

steps, chiefly monitoring and

optimising performance. Following the

migration, organisations might see issues

related to latency, bandwidth, or resource

allocation. No matter how

comprehensive the migration plan is,

there are always unforeseen issues that

can only be discovered once the system is

up and running.

Without a cloud partner, it might be difficult

for organisations that lack the in-house

expertise to recognise and address these

challenges. Partnering with a specialist who

can track the health of cloud-based systems

and identify performance bottlenecks will

ensure the best outcome following a

successful cloud migration.

Cloud migration presents exciting

opportunities for businesses and can form

the backbone of a modernisation or

digitisation strategy. It is therefore imperative

that organisations have the correct expertise

at their disposal to make sure the process is

as smooth as possible, and that the benefits

of the cloud are fully realised.

With the cyber skills gap contributing to a

lack of in-house knowledge and experience,

having a relationship with a cloud partner

that can fill those gaps is the best way to

have a worthwhile and headache-free

cloud migration. NC



OPINION: AI

BEYOND DEEPSEEK

JULIUS CERNIAUSKAS, CEO AT OXYLABS, POSES THREE CRITICAL

QUESTIONS FOR THE FUTURE OF AI

The year 2025 started with a shockwave

for the AI community. Launched by a

relatively obscure Chinese startup,

DeepSeek not only challenged the rules of the

AI game by sending Nvidia's stock

plummeting 17% in one day and becoming

the most-downloaded app on the App Store

and Play Store, but also revealed persistent

security problems by accidentally exposing its

database and leaking sensitive data including

chat histories, API keys and backend

operational details.

Success and failures aside, DeepSeek made

the world realise how quickly and deeply a

single AI model release can impact global

events, and this raises three questions. First,

how legitimate (and sustainable) are the

massive AI investments in the West? Second,

what risks and opportunities does open-source

development pose? Finally, is it

possible to balance growth and innovation

with data privacy and security amidst a global

AI race?

WESTERN AI - THE EMPEROR WITH

NO CLOTHES?

The claim that DeepSeek's model training cost

a mere $6 million is questionable at best and

blatantly false at worst: SemiAnalysis

speculates about $1.3 billion in server capital

expenditure, meaning that modest training

costs were backed by far larger infrastructure

expenditures. Even so, there is reason to

believe that DeepSeek AI still costs a fraction

of Western models' development costs.

We will no doubt discover more details in

the coming months, as evidence emerges

that the startup used datasets the big US

players spent a fortune on, a situation that

OpenAI is very unhappy about. Ironically,

OpenAI has been accused numerous times of

dubious data collection and copyright usage

practices themselves. Perhaps the greatest

paradox is that by putting a chokehold on

technological exports to China, the US

prompted Chinese engineers to innovate

better and do more with less.

What matters more than the exact

development costs, however, is the broader

economic impact that cheaper LLMs will have

on the AI industry. As venture capitalists and

tech giants reassess their investment strategies,

DeepSeek's scrappy approach suggests that

the path to AI leadership might require fewer

resources. This is already impacting how AI

services are priced and delivered to end users

and developers.

With API calls cheaper by an order of

magnitude - DeepSeek reportedly charges just

1.4 cents per million tokens compared to

Meta's $2.80 for the same output - the

Chinese player is changing the nature of the

game, lowering the barrier to entry for

different market players not only in the West,

but in the developing world as well. If

DeepSeek becomes the platform of choice

for budding AI developers, will it result in

even larger losses for incumbents?

DeepSeek is neither the only nor the last

"discounted" AI model to emerge from

China. Other Chinese LLMs also have the

advantage of being narrower in their use,

which means that they need less

computational resources to operate. This

helps them keep their pricing competitive -

Doubao 1.5-pro by ByteDance and Qwen

Plus by Alibaba both cost $0.30 per 1

million tokens.
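As a rough illustration of what that pricing gap means in practice, the short Python sketch below compares monthly API spend at a fixed token volume using the per-million-token figures quoted above; actual vendor pricing varies by model tier and changes frequently.

```python
# Per-million-token prices as quoted in the article (USD); treat them as
# indicative only.
PRICE_PER_MILLION_TOKENS = {
    "DeepSeek": 0.014,
    "Meta (comparable output)": 2.80,
    "Doubao 1.5-pro": 0.30,
    "Qwen Plus": 0.30,
}

def monthly_api_cost(tokens_per_month: int) -> dict:
    """Rough monthly spend for a given token volume, per provider."""
    millions = tokens_per_month / 1_000_000
    return {name: round(price * millions, 2)
            for name, price in PRICE_PER_MILLION_TOKENS.items()}

if __name__ == "__main__":
    for name, cost in monthly_api_cost(500_000_000).items():  # 500M tokens
        print(f"{name:>26}: ${cost:,.2f}/month")
```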

OPEN-SOURCE DILEMMA

DeepSeek's success sheds light on another

important aspect of the AI game - open vs.

closed-source systems. The Chinese startup

partially based its breakthrough model on

Meta's open-source Llama architecture, and

in a bold move, released its own models in

open-weights form. While Meta's Chief AI

Scientist Yann LeCun celebrated this as

proof that "open-source models are

surpassing proprietary ones," the new state

of affairs raises serious strategic concerns.

With a tariff war seemingly imminent, will an

AI war be waged in parallel, and where

does that leave security concerns?


The situation at hand has forced Western

AI leaders to reassess their positions on

open-source technology. In the aftermath of

DeepSeek's debut, OpenAI's CEO Sam

Altman admitted that his company might be

"on the wrong side of history" regarding its

open-source strategy, hinting at potential

changes in the near future. This move would

mean a return to open-source development,

which was abandoned after GPT-3 due to

safety concerns. Echoing this sentiment,

Mark Zuckerberg emphasized that

advancing "an American open source

standard is crucial" to maintain the country's

global advantage.

However, market competition is only

one side of the coin. Existing licensing

schemes weren't built for software

capable of leveraging vast swathes of

data from multiple sources, as Meta's VP

for AI research Joëlle Pineau pointed out.

This brings open-source AI into more or

less direct confrontation with data

protection efforts. Further, increased

technological complexity also brings

about liability issues - if an AI system

based on open-source models produces

harmful outputs, determining

responsibility becomes nearly impossible

when multiple contributors are involved.

SAFETY AND SECURITY DILEMMA

For most Western observers, DeepSeek's

rapid success raises critical questions about

data privacy and security. These concerns

mirror earlier debates about TikTok but with

potentially greater implications. All data

processed through DeepSeek's models is

stored on Chinese servers, raising serious

concerns about the different ways this data

can be put to use. Similarly, there exists the

issue of censorship and bias.

Users and journalists have already noticed

that DeepSeek's model refuses to respond to

queries about sensitive topics within China,

such as the Tiananmen Square massacre

and Uyghur detention camps. It is worth

noting that the thinking process of

DeepSeek's model (which you can openly

observe) is very advanced and probably not

that biased; however, the chatbot sitting

above the model is politically biased and

does all the censoring. While self-censorship

in AI models is nothing new, DeepSeek

shows a previously unseen marriage of technology

and political ideology.

Western AI firms, on the contrary, are

tightly bound by various ethical and legal

concerns, ranging from public outrage to

emerging AI regulation. The EU has just

kicked off the AI Act to protect fundamental

human rights and ensure ethical AI

development, but its implementation

presents significant challenges, made acute

by AI training data requirements.

By nature, LLMs need vast amounts of

data to ensure adequate contextual

comprehension and prevent bias or

hallucinations. For data providers, the new

regulation translates into new

responsibilities, as they now must

implement robust verification procedures to

ensure their infrastructure is not supporting

prohibited AI systems or collecting

copyright-protected information.

Moreover, although strict in certain areas,

the EU regulation is oddly loose in others.

For example, it provides important

exemptions for open-source AI systems,

even though the industry hasn't yet reached

a consensus on what "open-source AI" is.

This ambiguity creates opportunities for

exploitation, with companies using

loopholes to win legal exemptions.

Some experts, including Anthropic CEO

Dario Amodei, warn of broader strategic

implications this regulatory imbalance might

bring. With China directing more

technological focus, AI included, towards its

military-industrial complex, DeepSeek's

breakthrough could help China "take a

commanding lead on the global stage, not

just for AI but for everything". Especially

considering China's well-known cynical

attitudes toward data privacy and broader

societal implications.

THE ROAD FORWARD

DeepSeek, and the many more models

that will inevitably follow it, signal an

urgent need for global coordination in AI

governance. The tech's dual-use

potential, coupled with its rapid and

uncontrolled proliferation, makes it clear

that no single nation's regulatory

approach will be sufficient.

Today, talks of AI (and AGI in particular)

being an "existential risk" are not as

prominent as they were in 2023, when the

big tech giants called for "a regulatory

body overseeing AI to make sure that it

does not present a danger to the public".

However, while reduced alarmism should

be embraced as a positive development,

guardrails in the form of regulation are

needed more than ever before.

As with nuclear energy, what's needed is

an all-encompassing international

framework that addresses AI development,

deployment, and safety standards. This

framework must balance innovation with

security, establish clear guidelines for data

protection across borders, and most

importantly, create mechanisms for

monitoring and enforcement.

The EU's AI Act can be seen as a strong

enough starting point, but broader global

consensus is needed on issues such as

data rights, the use of AI in military

conflicts, and open-source AI development

with possible proliferation of AI technology

in rogue states. Otherwise, we might be

facing a situation similar to that of a

teenager being able to build a DIY nuclear

reactor in their parents' garage. NC



SECURITY UPDATE

WHY THERE SHOULD BE NO 'YOU' IN USB

JON FIELDING, MANAGING DIRECTOR FOR EMEA AT APRICORN

EXPLAINS WHY BUSINESSES CAN'T AFFORD TO LET USB STORAGE

DEVICES CREATE A STICKY SECURITY SITUATION

USB drives continue to pose a

considerable threat to data and

network security, with risks including

infecting systems with malware such as

ransomware, keylogging or spyware that are

used to capture access information. In

September 2024 we saw Mustang Panda,

a Chinese nation-state actor, spread the

PUBLOAD malware variant using a self-propagating

worm dubbed HIUPAN

via USB drives, for example, in a bid

to achieve persistent data

exfiltration. Interestingly, the attack

had previously been carried out

using spear phishing, suggesting

the USB was regarded as a

more effective way to achieve

these goals.

In fact, the humble

memory stick is often

used to infiltrate even

the most inaccessible

air-gapped networks.

In October, the

Golden Jackal

hacking group

used USB drives

carrying

malware to

transfer data

from

air-gapped

computers to those connected to the internet

in government organisations including an

EU building and an embassy in Belarus.

USBs often make the ideal vehicle to

sidestep such security measures, enabling

attackers to target segregated systems

housing highly sensitive data.

The ease with which USB drives can be used

to exfiltrate data is not just limited to these

sophisticated attacks, however, with employees

often using them to take data. A recent report

claimed that over half of IT security

professionals have seen company data stolen

via USB over the past two years, providing

some insight into the scale of the issue.

LONE USBS

Despite these risks, relatively few organisations

seek to restrict the types of USB devices that

their staff can use. Only half of USB devices

are supplied by employers and just a quarter

limit the type of USB to certain approved

manufacturers, which means the vast majority

are allowing any device chosen by the

individual to be brought onto the network.

Consequently, the business is unable to

enforce a minimum level of security.

Privately owned USB sticks may not be encrypted or password protected, for

instance, with research suggesting a quarter

are unencrypted and almost a fifth do not

have a password or the ability to lock the

device. These base-level protections can prove

invaluable in protecting data as they prevent it

from being viewed in the event the device

becomes lost or stolen.

Again, the scale of losses is concerning.

More than half of users claim to have lost a


USB drive over the past two years, of which

80% contained data from work. But users

will not always report a stolen device due to

inertia or fear of repercussions, so having

encryption in place can help to safeguard

data that the company may not even know

it has lost.

Yet, despite all the potential risks that a

rogue USB drive can pose, they remain a

critical tool in the workplace. They're widely

used to store or transfer data, with over a

third of users (34%) using these devices daily

and almost as many again (31%) using

them on a weekly basis. And over the course

of the past few decades they've become a

staple in the way we work due to their

convenience. But they do need to be taken

more seriously, starting with procurement.

GOVERNING USAGE

Businesses should be exerting more control,

either by specifying a manufacturer or

providing devices that incorporate security

measures. Governance should also be put

in place in the form of an acceptable use

policy. This should detail what constitutes a

sufficiently complex password, for example,

as well as what happens to these drives if

they become damaged or reach end of life.

When it becomes time for the device to be

wiped, it's important to distinguish between

the user deleting data and proper data

sanitisation. Many users don't realise that

deleting files will not remove them

completely and that the data can be

recovered using free tools. In fact, only 16%

of users were aware of this risk whereas

almost double that number (30%) thought

deleted data was gone for good. If the

business owns the drive it becomes that

much easier to carry out proper data

sanitisation, which sees the data

permanently erased.
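As a simple illustration of the difference, the hedged Python sketch below overwrites a file before deleting it. On flash media, wear-levelling means an overwrite is not a guaranteed sanitisation method, so real programmes should rely on vendor secure-erase tooling or cryptographic erase on hardware-encrypted drives.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes before unlinking it.

    Illustrative only: on flash media, wear-levelling means overwrites may
    not touch every physical cell, so this is not a substitute for proper
    sanitisation (e.g. cryptographic erase on a hardware-encrypted drive).
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as handle:
        for _ in range(passes):
            handle.seek(0)
            handle.write(secrets.token_bytes(length))
            handle.flush()
            os.fsync(handle.fileno())
    os.remove(path)
```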

There are also steps the business can take

to monitor when USB devices are being

plugged in and even to prevent

unsanctioned ones from working on the

network. USB blocking sees a solution

housed on the endpoint determine who can

use them and the types of data that can be

downloaded on to them, with alerts

generated by any unauthorised USB activity.

In effect, ports are locked to all USBs by

default, making such solutions useful for

enforcing data loss prevention policies.
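Conceptually, the allowlist logic behind such a solution can be sketched in a few lines of Python; the vendor and product IDs below are hypothetical, and in practice the check is enforced by the endpoint agent or operating system policy rather than a script.

```python
# Conceptual sketch of an allowlist check such as a USB-blocking agent might
# apply; the vendor/product IDs below are hypothetical examples.
APPROVED_DEVICES = {
    ("0x1234", "0x5678"),   # hypothetical approved hardware-encrypted drive
}

def is_device_allowed(vendor_id: str, product_id: str) -> bool:
    """Return True only for devices on the corporate allowlist."""
    return (vendor_id, product_id) in APPROVED_DEVICES

def handle_usb_insertion(vendor_id: str, product_id: str, user: str) -> str:
    if is_device_allowed(vendor_id, product_id):
        return f"ALLOW: {user} may use device {vendor_id}:{product_id}"
    # Unapproved devices are blocked by default and an alert is raised.
    return f"BLOCK+ALERT: unauthorised device {vendor_id}:{product_id} ({user})"
```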

SELECTING USB STORAGE

So what should a business look for when

choosing a USB drive? Irrespective of

whether the company is buying devices

directly from the manufacturer or via a

reseller such as their MSSP, they should seek

to ensure it meets their security requirements

and also any compliance mandates. This

usually comes down to the industry

standards and accreditations the device

complies with.

In terms of encryption, the USB storage

device should support AES 256-bit

encryption and be FIPS 140-2 accredited.

Devices that use the Federal Information

Processing Standard (FIPS) have been tested

and validated by the US and Canadian

authorities with respect to cryptography,

which means the cryptographic algorithms

and key generation are government grade

and can be used in regulated industries. The

drive should also be TAA (Trade Agreements

Act) compliant, meaning it has been

manufactured or substantially engineered in

the US or a TAA-designated country as this

ensures it uses approved chipsets.

It's also vital to consider the firmware. If this

isn't sufficiently robust, it can be tampered

with and reprogrammed, turning the drive

into a malicious device. An attack known as

BadUSB utilises this approach and makes

the USB drive emulate the functions of a

keyboard, performing keystrokes that open a

PowerShell window to download malware.

Businesses should therefore look for USB

sticks that have their firmware locked down

to prevent this from happening.

In conclusion, USB storage devices

continue to be an important part of how

employees work and will remain a staple

means of transporting and storing data.

They offer unparalleled convenience,

flexibility and portability, which is why they

remain so popular. By providing employees

with a list of approved devices - or even

better, supplying them directly - and ensuring there are built-in safeguards such as

encryption and passwords, the business can

mitigate the risks associated with these

storage devices while retaining the benefits

they provide. NC



SECURITY UPDATE

EFFECTIVE DATA LOSS PREVENTION

IWONA ZALEWSKA, REGIONAL DIRECTOR FOR UK & IRELAND, DRAM BUSINESS MANAGER, EMEA

REGION, KINGSTON TECHNOLOGY, ON WHY PREVENTION WILL ALWAYS BE BETTER THAN CURE IN

THE BATTLE TO KEEP DATA SECURE

Data loss due to security breaches is

what keeps CSOs awake at night -

and there is every indication that the

risk is growing. A report published in April from

Vodafone Business found that small and

medium-sized enterprises in the UK are

incurring annual losses amounting to £3.4

billion due to inadequate cybersecurity

measures. It's not only the security of sensitive

data that is in danger, but brand reputation

and the one-two punch of regulatory fines and

negative commercial impact.

It's a no-brainer, therefore, to place breach

prevention at the top of the cybersecurity

agenda, with strategies that combine processes

and tools that stop unauthorised access to

data before it can be used by bad actors.

To implement an effective Data Loss

Prevention (DLP) approach, several security tools can be used, the most vital of which is strong encryption. However, diligent attention to

detail at the implementation stage is the key to

ensuring DLP is successful in the long term.

BEST PRACTICES FOR IMPLEMENTING

DATA LOSS PREVENTION

1) Assessment

Start with an assessment of the company's

data. Some will be particularly critical and

should be prioritised for protection. Data can

be classified by context, such as by the source

app, the data store, or even by who created it,

making it easier to track (see the classification sketch after this list).

2) C-suite buy-in

CSOs and the network team will implement

DLP, but the CFO and CEO must sign off the

budget for the programme. This means

presenting a strong case for the benefits to

individual business units, the efficient use of

assets and resources, the ability to address

pain points and minimise risk.

3) Objectives

Objectives might extend beyond simply

prevention to ensuring regulatory compliance,

protecting IP or achieving improved data

visibility. Identifying priorities makes

deployment of DLP more efficient and in the

long term, more effective.

4) Approach

A company can take a project approach,

starting by focusing on data of a specific type.

Discovering and automating the classification

of the most sensitive or critical data is a good

place to start. Whatever classification of data

is chosen first, it must be applied across all

departments to ensure consistency.

5) Training

Training can reduce the risk of accidental data

loss by employees. Advanced DLP solutions

provide user prompting, which notifies employees that use of certain data will contravene company or regulatory policy, or

alerts them if their activity is deemed risky. This

might be forwarding business emails outside

the network perimeter or uploading critical

files to unauthorised cloud services.

6) Monitoring

Getting an understanding of how the

organisation's data is being used is

important. Monitoring data in motion helps

to identify risky behaviour, particularly with

sensitive files. Hybrid working means data is

at risk during transit or when it is used on

unprotected endpoints, but DLP will account

for this risk increase.

7) Setting KPIs

Metrics will gauge the success of a DLP

programme and should be agreed in

advance. Assessing KPIs will allow

improvements to be made and determine the

value that DLP is bringing to the organisation.

8) Tools

Preventing data loss means investing in the

right tools, and one of the best ways to do this

is through hardware-encrypted hard drives.

These are designed to suit organisations of all

sizes and are invaluable in shoring up

defences and bolstering DLP programmes.
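As a toy illustration of the context-based classification described in the assessment step above, the Python sketch below tags records by their source application; the applications and sensitivity labels are hypothetical examples rather than any product's configuration.

```python
# A toy context-based classifier for the assessment step; the source apps
# and sensitivity labels are hypothetical examples.
CONTEXT_RULES = [
    ("hr_system",      "confidential-personal"),
    ("finance_app",    "confidential-financial"),
    ("public_website", "public"),
]

def classify_by_context(source_app: str, created_by: str) -> dict:
    """Tag a record with a sensitivity label based on where it came from."""
    label = next((lbl for app, lbl in CONTEXT_RULES if app == source_app),
                 "internal")  # default label when no rule matches
    return {"source_app": source_app, "created_by": created_by, "label": label}

print(classify_by_context("hr_system", "j.smith"))
```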

As attack surfaces expand and work habits

change, data loss prevention will become

even more necessary. Companies will need

to commit to and retune their strategies,

ensuring that personal information protection

and compliance, IP protection and data

visibility - the three tenets of a data loss

prevention programme - are in place. If DLP

demonstrates it has successfully combatted a

data loss risk or resolved a cyber incident, it

will be valuable proof that its deployment

was worthwhile. NC



OPINION: DATA ENGINEERING

DATA ENGINEERING IN THE AI ERA

MARI NILSSON BJÖRKMAN, GLOBAL TELECOM INDUSTRY LEAD AT SAS, ON

THE IMPORTANCE OF ENSURING WE HAVE TRUSTED DATA FOR RELIABLE AI

In the age of AI, data engineering has

become a foundational pillar for ensuring

the reliability, transparency, and

trustworthiness of AI-driven solutions in the

telecommunications industry. As telecom

organisations race to leverage AI for enhancing

network performance, optimising service

delivery, and enabling innovations such as IoT

and smart cities, the quality and governance of

the data feeding these AI models has never

been more crucial.

Without robust data engineering practices, AI

systems risk being built on flawed, incomplete,

or biased data, leading to a lack of

transparency and data lineage, as well as

unreliable outcomes that could impact both

operational efficiency and customer trust.

Research has shown that poor data quality costs

organisations over £10 million on average

annually, which makes it imperative for telcos to

prioritise data engineering excellence.

OPTIMISING NETWORK EFFICIENCY

With the rapid rollout of 5G, telco companies

are handling unprecedented volumes of data,

generated from millions of devices, sensors, and

network interactions. AI models can leverage

this vast dataset to drive predictive maintenance

by identifying potential network failures before

they occur, reducing downtime, energy

consumption and repair costs. They can also

automate network optimisations by dynamically

adjusting bandwidth allocation, load balancing,

and traffic routing to ensure seamless

connectivity. Additionally, AI-powered analytics

enhance customer experience by proactively

detecting service issues, personalising user

interactions, and streamlining support processes.

However, poor data quality, caused by

inconsistencies, duplication, or incomplete

records, can lead to inaccurate predictions,

operational inefficiencies, and costly mistakes,

ultimately undermining the benefits of AI-driven

automation.

STRENGTHENING AI RELIABILITY

One of the most pressing issues in AI-driven

telecom operations is data governance. Given

the sensitivity of network data and customer

information, CSPs must implement stringent

data governance frameworks to ensure

compliance with regulatory requirements and

industry standards. This involves establishing

clear data lineage, ensuring data integrity, and

implementing access controls to prevent

unauthorised usage.

Advanced metadata management systems can

help telcos track and document the movement

and transformation of data across AI pipelines,

enhancing traceability and accountability. By

embedding strong data governance practices,

organisations can instil confidence in AI-driven

decisions while mitigating the risks of bias,

misinformation, and security breaches.

Automation is another key driver of AI reliability

in fixed and mobile networks. Traditional data

processing methods are no longer sufficient to

handle the scale and complexity of modern

telecom datasets. Instead, AI-powered

automation is transforming data engineering

workflows, enabling real-time data integration,

anomaly detection, and predictive analytics.

Machine learning (ML) algorithms can be

employed to clean and preprocess data

automatically, identifying inconsistencies,

outliers, and missing values before they corrupt

AI models. Additionally, data pipeline

automation ensures that AI systems receive fresh

and accurate data continuously, improving

responsiveness and adaptability to dynamic

network conditions.
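A minimal Python sketch of that kind of automated cleaning step is shown below, assuming telemetry arrives as a pandas DataFrame with a hypothetical latency column; it de-duplicates records, fills gaps with the median and flags outliers with a simple interquartile-range rule.

```python
import pandas as pd

def clean_telemetry(df: pd.DataFrame, metric: str = "latency_ms") -> pd.DataFrame:
    """Basic automated cleaning: de-duplicate, fill gaps, flag outliers."""
    df = df.drop_duplicates().copy()
    df[metric] = df[metric].fillna(df[metric].median())

    # Flag outliers with a simple interquartile-range rule so they can be
    # reviewed (or excluded) before the data reaches an AI model.
    q1, q3 = df[metric].quantile([0.25, 0.75])
    iqr = q3 - q1
    df["is_outlier"] = ~df[metric].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    return df

if __name__ == "__main__":
    sample = pd.DataFrame({"latency_ms": [12.0, 14.1, None, 13.2, 480.0]})
    print(clean_telemetry(sample))
```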

SYNTHETIC DATA

AND COLLABORATION

Synthetic data is emerging as a powerful solution

to the privacy and availability challenges faced

by telecom AI models. By generating artificial yet

statistically representative data, organisations can

train AI models without exposing sensitive

customer information or critical network data.

This approach not only enhances privacy

compliance but also mitigates the risks

associated with biased or incomplete datasets. It

can be used to simulate network scenarios, test

AI-driven optimisations, and develop new use

cases in a controlled environment, ultimately

improving the robustness of AI solutions.

Beyond technology, fostering cross-functional

collaboration is essential for ensuring AI

reliability in the networks. Data engineers, data

scientists, and analysts must work together to

align data strategy with AI objectives, ensuring

that datasets are curated, processed, and

validated in a way that maximises AI accuracy

and effectiveness. Bridging the gap between

data engineering and AI development involves

cultivating a culture of knowledge sharing,

establishing shared data repositories, and

leveraging collaborative platforms that enable

seamless communication and iteration. As AI

continues to redefine the telecommunications

landscape, the role of data engineering will only

grow in significance.

Ensuring the reliability of AI-driven decisions

requires a multi-faceted approach that

encompasses robust data governance,

automation, synthetic data utilisation, and cross-functional

collaboration. By embracing

advanced data engineering practices, telecom

organisations can not only optimise their AI

initiatives but also build a trusted foundation for

sustainable business growth. NC



OPINION: AUTONOMOUS NETWORKS

THE DATA-DRIVEN PATH TO AUTONOMOUS NETWORKS

HOW FAR AWAY ARE WE

FROM TRULY REALISING

FULLY FLEDGED

AUTONOMOUS NETWORKS?

PHIL KIPPEN, HEAD OF

TELECOMS AT SNOWFLAKE,

OFFERS A GUIDE

Autonomous networks (ANs) have the

potential to be a transformative

technology for the telecom industry

by improving customer experiences and

helping address the high costs associated

with telecom operations, such as network

planning, network engineering, network

support and call centre operations.

From both the CapEx and OpEx

perspectives, ANs can improve the

operator cost model. They can help

operators maximise hardware and software

utilisation and eliminate stranded assets

(network resources that are paid for but

underutilised or overused), while also

having the potential to save operator

costs. This is primarily by shifting manual

service provisioning and management to

automated decisions made in real-time,

which optimise network hardware and

software and customer/subscriber services.

Additionally, the impact of autonomous

networks on customer experience cannot

be overstated. Subscribers who want real-time

service optimisation, such as getting

the best video quality while streaming a

show on the move, or connecting remotely

with family, can rely on ANs to provide the

highest quality of service dynamically and

faithfully at any time, and at any place -

without manual operator intervention.

Service quality is monitored and

adjustments are made to ensure the

best customer experience possible.


ANs monitor the network and services to

ensure alignment with customer

expectations, network operator

capabilities and business intent.

At its most basic level, ANs require AI,

intent and automation. AI analyses the

data and makes decisions, based on

operator (and customer) intent.

Automation is used to carry out the result

of those decisions, including additions,

changes and deletions to network

services, infrastructure and policy. This

underlying foundation depends on large

volumes of data, such as telemetry

domain infrastructure data, service

assurance data, business data and most

importantly - intent data.

Intent data represents the desired

outcomes and business objectives that

guide network decisions. It encompasses

everything from performance targets like

network availability and latency

requirements to customer experience

goals like call quality metrics and

application responsiveness. It is by far

the most difficult data to manage, given

that it is dynamic and changes frequently

based on user and operator priorities.
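One way to picture intent data is as a set of measurable targets that telemetry is continuously checked against. The Python sketch below is a minimal illustration with hypothetical threshold values; a real autonomous network would trigger automated remediation whenever a target is breached.

```python
from dataclasses import dataclass

@dataclass
class ServiceIntent:
    """Desired outcomes expressed as measurable targets (values are examples)."""
    max_latency_ms: float = 30.0
    min_availability: float = 0.9999
    min_video_bitrate_mbps: float = 8.0

def intent_violations(intent: ServiceIntent, telemetry: dict) -> list[str]:
    """Compare live telemetry with the declared intent and list any breaches."""
    issues = []
    if telemetry["latency_ms"] > intent.max_latency_ms:
        issues.append("latency above target")
    if telemetry["availability"] < intent.min_availability:
        issues.append("availability below target")
    if telemetry["video_bitrate_mbps"] < intent.min_video_bitrate_mbps:
        issues.append("video bitrate below target")
    return issues  # an autonomous network would trigger remediation here
```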

Of course, with any technology that

leverages AI, expert human supervision

must be in place with properly trained AI

models to prevent hallucinations and

bias. If present, any of these risks have

the potential to massively impact critical

infrastructure and affect thousands of

customers. The potential benefits of

lower operational costs, better asset

utilisation and faster time to market all

stand for nothing if the product isn't

secure, reliable, performant or the

experience fails to meet customer

expectations.

These security and reliability concerns

are particularly important given the

current state of technology. We're still a

few years from meeting the aggregate

requirements of what the TM Forum is

calling "Level 5: Fully Autonomous

Networks". Current state-of-the-art

technology is passing "Level 3" today,

with new innovations addressing some of

the "Level 4" requirements.

There's still some work to do to fully

satisfy Level 4, specifically around AI

capabilities and maturity. This work is

going on in tandem with the ongoing

R&D associated with technologies that

will enable 'intent' awareness. The 3GPP

industry standards forum, which defines

wireless architectures, is also in the

process of defining a blueprint and

evolution plan for fully autonomous

networks.

However, the rise of 6G as the first

generation of mobile network

architecture to start incorporating data-driven

and AI capabilities into its

architecture will help drive autonomous

network adoption. There will be some

complexity and slower than expected

adoption in the early stages, but as

requirements become consistent and we

begin to see cross-industry collaboration,

there will be a wider rollout of ANs.

At the heart of a successful autonomous

network rollout is the need for telecom

operators to effectively harness the power

of all data, and effective and trusted AI

models. Modern data platforms that offer

robust AI capabilities can bring together

both structured and unstructured data

across the entire business ecosystem in a

centralised and governed platform. This

will give telecom operators a full view of

their data to make accurate and

autonomous decisions in near-real time,

thus accelerating innovation and

delivering on the promise of truly

autonomous networks. NC



OPINION: AI AND THE DATA CENTRE

GOING BEYOND THE BUILD FOR AI

JON ABBOTT, TECHNOLOGIES DIRECTOR - GLOBAL STRATEGIC

CLIENTS AT VERTIV, ON HOW AI IS FORCING A REDESIGN OF DATA

CENTRE INFRASTRUCTURE

The shift to artificial intelligence is not

a simple scale-up exercise. AI

workloads are forcing a redefinition

of what data centre infrastructure looks

like, how it behaves and where investment

is needed - not just for today's compute

requirements, but for the more volatile,

distributed and power-hungry workloads

of the next decade.

As businesses push forward

with model training,

inference at the edge and

AI-as-a-service

offerings, the existing

architecture

underpinning those

services is being

pushed to

capacity.

Facilities built

around

virtualised

enterprise IT are

hitting limitations

in power, thermal

capacity,

resilience and

layout. Even newer

builds are being

revisited sooner than

planned, as demand

grows faster than

anticipated.

This moment marks a key

inflection point for data centre

infrastructure. Data centre operators,

systems integrators and solution providers

alike are being asked to think differently -

not about how to scale, but how to adapt.

INFRASTRUCTURE MUST NOW

RESPOND TO BEHAVIOUR, NOT

JUST CAPACITY

The scale of AI deployments is well

documented. Training a large language

model can consume megawatts of compute

over weeks or months. What's more

disruptive is how AI changes the load profile.

Power usage is no longer steady. Instead, it

fluctuates rapidly based on training cycles,

model updates or edge inference demand.

That has immediate consequences for both

power and cooling systems. Electrical

infrastructure must handle unpredictable

surges. Thermal systems must respond to

hotspots that shift minute by minute, often in

tightly packed GPU clusters. Standard rack

densities of 10-15kW are being eclipsed by

40-60kW zones, with some AI nodes

pushing well beyond 100kW.
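A back-of-the-envelope calculation shows why those densities matter: for a fixed hall power budget, the number of racks that can be supported falls sharply as per-rack density rises. The short Python sketch below assumes a nominal 2MW budget purely for illustration.

```python
# Back-of-the-envelope illustration: how many racks a fixed hall power
# budget supports at the densities mentioned above (all figures nominal).
HALL_POWER_BUDGET_KW = 2_000

for density_kw in (15, 40, 60, 100):
    racks = HALL_POWER_BUDGET_KW // density_kw
    print(f"{density_kw:>3} kW/rack -> roughly {racks} racks "
          f"({racks * density_kw} kW of IT load)")
```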

It's not just about size. The infrastructure

must now be responsive - able to adjust

dynamically, reallocate resources and

maintain stability under unpredictable

operating conditions.

LIQUID COOLING IS NO LONGER A

SPECIALIST SYSTEM

Many data centres are now exploring or

deploying liquid cooling. Direct-to-chip

approaches are becoming mainstream in AI-ready

halls, while immersion cooling is

gaining traction in high-density, small-footprint

deployments.

The appeal is better thermal control, higher

energy efficiency and the ability to extract

heat in a form more suitable for reuse. But

implementing these systems requires


significant change. Pipework, pumping

systems, monitoring controls and failure

modes are all different from traditional air-based

systems. Maintenance teams need

new skills and commissioning teams need

new protocols. Safety and redundancy

considerations are heightened.

What was once seen as an advanced

option for HPC clusters is quickly

becoming standard practice for AI

environments. The operational

assumptions behind cooling systems are

changing - and design teams need to build

in flexibility for what comes next.

AI IS DRIVING A NEW

COMMISSIONING MINDSET

Facilities being adapted for AI - or built to

accommodate it - are being commissioned

differently. Until now, much of the focus

was on validating uptime, redundancy and

airflow. Today, commissioning plans must

account for variable load behaviour,

software-defined power management and

hybrid cooling systems.

Simulation tools are being deployed

earlier. Digital twin models are helping to

understand how systems behave under

extreme or unbalanced loads.

Commissioning teams are being brought

in sooner, with more integrated

collaboration between mechanical,

electrical and software teams.

This shift also affects how operators plan

capacity. Modular designs are preferred,

allowing for incremental growth and faster

response to customer demands. But

modularity is only effective when

commissioning can keep pace - and that's

where standardised processes and real-time

system visibility become essential.

GRID ACCESS AND ENERGY

PLANNING ARE CREATING FRICTION

Grid capacity is now one of the most

significant barriers to infrastructure

deployment. AI workloads are

contributing to a steep rise in energy

demand and not every region can

accommodate that growth quickly.

For data centre operators, this means

rethinking energy strategy. Colocation with

renewable energy generation, microgrid

integration and battery storage are all on

the table - not just for sustainability

reasons, but for operational viability.

These strategies intersect with cooling in

subtle ways. Liquid systems require

consistent power to maintain flow and

temperature. Hybrid cooling arrangements

may require sequencing controls to

optimise energy use. And where grid

reliability is variable, thermal risk

management becomes more complex.

Energy design, cooling performance and

infrastructure reliability are now

inseparable. AI has linked them in ways

that require closer coordination at every

stage.

THE EDGE IS EVOLVING FASTER

THAN EXPECTED

While most infrastructure discussion still

focuses on central data centres, the edge

is being quietly redefined. Retail,

manufacturing, transport and healthcare

are all deploying AI models closer to the

source - whether to enable real-time

decision-making, reduce latency or

improve privacy.

Edge computing environments introduce

harder constraints. Space, power, access

and maintenance are limited. That makes

resilience and monitoring even more

important. Data centre operators are now

deploying AI-ready micro data centres

that are preconfigured, remotely

managed and often equipped with

integrated liquid cooling.

Designing for the edge now means

designing for AI. That includes smart

cooling, compact power distribution, and

automated recovery. This isn't the fringe of

infrastructure - it's the future of distributed

intelligence.

INFRASTRUCTURE VISIBILITY IS

BECOMING A NON-NEGOTIABLE

The final piece of the puzzle is

monitoring. As systems become more

dynamic and workloads less

predictable, infrastructure needs to

offer more than just uptime. It must

explain itself in real time.

That means integrated monitoring of

power, cooling, performance and

utilisation across a single pane of glass.

AI-aware infrastructure management

platforms are already being adopted in

larger sites. They're enabling predictive

maintenance, thermal balancing, and

dynamic workload optimisation.

This isn't just about operational

efficiency. With regulatory frameworks

tightening, and ESG reporting becoming

the norm, operators need to show how

their infrastructure performs - not just

when it's new, but over time.

WHAT COMES NEXT?

AI is not a short-term demand spike. It's a

shift in how digital services are built,

delivered and scaled. The infrastructure

that supports it must evolve to match -

more responsive, more modular, more

visible and more closely integrated across

energy, cooling and compute.

Systems need to be designed to cope

with AI's volatility without locking

operators into fixed paths. And as

deployment expands across core and

edge environments, the ability to adapt

infrastructure in real time will become a

key differentiator. NC



OPINION: NIS2

IDENTIFYING NIS2 CHALLENGES

ANDERS ASKASEN, PRODUCT MARKETING DIRECTOR, OMADA, ON MAKING THE CONNECTION

BETWEEN NIS2 COMPLIANCE AND IDENTITY MANAGEMENT

The Network and Information Security

Directive 2 (NIS2) is an enhanced

cybersecurity directive enacted by the

European Union. Member states were to

transpose it into national legislation by 17 October 2024, but for the organisations within

those states, this can seem like no easy task.

Organisations are already grappling with the

General Data Protection Regulation (GDPR)

and a host of other existing regulations; any

new requirement can be daunting.

How can modern businesses comply with this

and other emerging regulations while

retaining their sanity? Organisations must

understand how a strong identity governance

strategy can address the challenges of

meeting this regulation.

HOW NIS2 IMPACTS YOUR

ORGANISATION

Industries including transportation, energy,

digital infrastructure and healthcare fall within

NIS2's purview. It takes a stronger stance than

its predecessor, recommending the adoption of

strong access controls and security measures,

as well as Multi-Factor Authentication (MFA).

This framework standardises reporting

requirements and increases accountability.

Standardisation simplifies the simultaneous

management of multiple obligations.

NIS2 mandates a phased, more stringent

reporting approach. Within 24 hours,

companies must make an initial notification

(early warning) indicating potential malicious

causes or cross-border impact. Within 72

hours, they must provide an updated incident

notification with an initial assessment of severity

and impact, including indicators of

compromise. Within one month, companies

must present a final report with a detailed

description of the incident, its severity,

consequences and mitigation measures taken.
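As a simple illustration of that timeline, the Python sketch below derives the three notification deadlines from the moment an incident is detected; the one-month final report is approximated here as 30 days.

```python
from datetime import datetime, timedelta

def nis2_reporting_deadlines(detected_at: datetime) -> dict:
    """Indicative deadlines for the three NIS2 notifications.

    The final report is due within one month; 30 days is used here as a
    simple approximation of that calendar month.
    """
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "incident_notification": detected_at + timedelta(hours=72),
        "final_report": detected_at + timedelta(days=30),
    }

if __name__ == "__main__":
    for stage, due in nis2_reporting_deadlines(datetime(2025, 6, 2, 9, 0)).items():
        print(f"{stage:>22}: {due:%d %b %Y %H:%M}")
```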

Identity governance and administration

(IGA) solutions enable companies to monitor

and manage access in real-time. This

guarantees that only authorised users can

access critical IT systems at a given time. This

method makes cross-sector compliance

easier and ensures uniformity of incident

tracking, reporting and resolution.

The goal of NIS2 is to strengthen the

resilience and cybersecurity of critical

infrastructure. By properly managing access

rights and digital identities, IGA plays an

essential part in implementing these

requirements. IGA is crucial for ensuring

that the NIS2 framework is embedded

within companies. It gives them

greater access control for their

critical systems and data, which

enables security standards compliance and

lowers cyber incident risk.

THE COMPLIANCE CHALLENGES

INTRODUCED BY NIS2

NIS2 is just one piece of the regulation puzzle

organisations face today. Maintaining

compliance is challenging; besides NIS2,

organisations face an ever-evolving landscape

of regulatory demands, which often differ

across jurisdictions.

Companies must attend to a constantly

growing list of technical and operational

hurdles as they work to comply with

regulations. There are four main trends that

highlight the changing nature of compliance:

Multiplied regulations - The EU's GDPR

inspired similar laws all around the world,

which has added to companies' regulatory

burden.

Shifting to the cloud - It becomes increasingly

complex for companies to maintain

visibility and control over data as they

move to cloud-based systems.

Limited resources - IT departments are

often understaffed and underfunded, and

now they must manage operational efficiency,

security and compliance.

Work from anywhere - Remote and hybrid

work options add new risks as workers use

a variety of devices and locations to

access sensitive corporate data.

HOW IGA HELPS

Though NIS2 is mainly focused on

cybersecurity, it carries implications for IGA as

well. There are three reasons its new

requirements may lead companies to

implement modern IGA solutions. One reason

is enhanced security. To meet the NIS2 security


requirements for digital service providers and

operators of essential services, organisations

need to improve their IGA processes. This

includes strengthening identity lifecycle

management so users have the access rights

they need for onboarding, changing roles or

departments, and offboarding.

A second reason is better incident response

and reporting. NIS2 requires that security

incidents be reported. Today's IGA can help

companies understand who had access and

what they did during an incident. A next-generation

IGA tool empowers an admin

who's detected a breach to execute an

emergency lockout. That admin can also

learn from the intelligence gleaned in the

subsequent investigation and use that intel to

address future threats.

A third reason is more effective compliance

and auditing. NIS2 has auditing and

compliance reporting requirements. A next-gen

IGA tool continuously monitors data

integrity and can evaluate how accurate

implemented processes are - on demand.

This capability helps auditors know that

policies and rules are being enforced.

Modern IGA empowers companies to show

that they have the proper governance control

in place and lowered non-compliance risk.

As compliance requirements become more

complex, they need a framework that

enables automation, tracks user activity and

centralises access controls. IGA has all of

these features so that companies can

manage compliance at scale. They can

achieve three significant goals by

implementing a modern IGA platform:

Records that are audit-ready - holistic

audit records guarantee that all access

decisions are fully documented. This

simplifies NIS2 compliance.

More robust access control - implementing

role-based access control (RBAC)

creates "least privilege" for users.

Scalable automation - AI-assisted access

approvals, automated certifications and

self-service workflows lower administrative

workload and make compliance

more efficient.
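The role-based access control mentioned above can be pictured as a deny-by-default lookup from roles to permissions. The Python sketch below is a minimal illustration with hypothetical roles and permissions, not a product configuration.

```python
# Minimal role-based access control sketch; roles and permissions here are
# hypothetical examples, not taken from any IGA product.
ROLE_PERMISSIONS = {
    "network-engineer": {"view_topology", "edit_configs"},
    "service-desk":     {"view_topology", "reset_user_password"},
    "auditor":          {"view_audit_log"},
}

def is_permitted(role: str, permission: str) -> bool:
    """Least privilege: deny unless the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_permitted("auditor", "view_audit_log")
assert not is_permitted("service-desk", "edit_configs")
```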

These features empower IGA to deliver a

united methodology for risk mitigation,

compliance and access governance. This

enables companies to remain compliant and

lower operational complexity at the same time.

MAKING NIS2 COMPLIANCE DOABLE

NIS2, like most new regulations, adds

complexity to the compliance function.

Compliance and security professionals need

to reduce complexity while ensuring a safe

and compliant experience for their users.

But new requirements, the work from

anywhere phenomenon, cloud migration

and limited human and fiscal resources

make this harder than it would first appear.

However, a robust identity governance

approach helps with both security and

compliance while lowering the associated

complexity. Modern IGA is an essential

security and compliance partner. NC



SECURITY UPDATE

UNIFYING AUTOMATED SECURITY

MIKE FRY, INFRASTRUCTURE DATA & SECURITY SOLUTIONS DIRECTOR AT LOGICALIS UK&I, ON HOW

MXDR CAN HELP SECURITY TEAMS OVERCOME IT MIDDLE MANAGEMENT CHALLENGES

Today's cyber threats are faster, smarter,

and harder to detect. This means IT

security teams must have visibility across

all corners of the IT estate - from staff devices and external locations containing company data to communication apps and the network foundation of a business.

Insufficient security resources remain a major

challenge. While you may have tools for

certain areas, such as endpoint protection, you

might still have blind spots to unknown security

flaws elsewhere. Even when running multiple

tools at once, limited integration and visibility between them can cause alert overload and

create critical gaps in protection.

This is where managed extended detection

and response (MXDR) comes into play. A

unified approach to automated security,

deployed across your entire IT estate, is the key

to complete protection and peace of mind.

THE BENEFITS OF XDR AND WHY

THEY MATTER

Before we consider MXDR, let's clearly define

what extended detection and response (XDR)

entails and how it can support you in

overseeing security protocols. At its core, XDR

is a holistically unified technology that

provides real-time threat detection and

response across communication, endpoints,

networks, and the cloud. It helps identify and

mitigate threats before they cause significant

damage. As a result, XDR protects against

emerging threats and helps stay one step

ahead of threat actors, eliminating blind spots

that would otherwise go unnoticed.

Here's what you can expect:

Faster threat detection results from a

greater understanding of particular

vulnerabilities and risk areas.

Prioritised threats by impact, allowing for

greater focus on the most pressing

dangers to the business, and lowered

strain in the process (a simple scoring sketch follows this list).

Accelerated investigations through

awareness of the full scope and entry

vectors of attacks.

Accelerated response, aided by AI and

machine learning, along with remediation

recommendations.
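The impact-based prioritisation above can be pictured as a simple scoring exercise. The Python sketch below ranks alerts by a made-up score combining severity, asset criticality and data sensitivity; the weights and fields are illustrative, not taken from any specific XDR product.

```python
# A toy scoring approach to ranking alerts by likely impact; the weights and
# fields are illustrative examples only.
def impact_score(alert: dict) -> float:
    severity_weight = {"low": 1, "medium": 3, "high": 7, "critical": 10}
    score = severity_weight[alert["severity"]] * alert["asset_criticality"]
    if alert["affects_sensitive_data"]:
        score *= 1.5  # weight up anything touching sensitive data
    return score

alerts = [
    {"id": "A1", "severity": "medium", "asset_criticality": 2,
     "affects_sensitive_data": False},
    {"id": "A2", "severity": "high", "asset_criticality": 5,
     "affects_sensitive_data": True},
]
for alert in sorted(alerts, key=impact_score, reverse=True):
    print(alert["id"], impact_score(alert))
```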

OVERCOMING SKILLS GAPS WITH

MXDR

XDR alone is powerful. But when it's managed

by a trusted partner, it becomes a game-changer.

To complicate matters further, the

constant evolution of cyber-attacks means that

security teams need to also constantly evolve

and sharpen up their defence capabilities - this

means your staff could be constantly playing

catch-up. Using AI and machine learning tools

working in tandem can help close skills gaps

and keep evolving threats at bay.

The 'managed' aspect of MXDR refers to an

extended detection and response team

brought in as a service. MXDR adds a 24/7

Security Operations Centre (SOC) to the mix,

giving you access to experienced analysts,

real-time monitoring, and rapid incident

response, without the need to build or manage

it all in-house. Crucially, this eases the strain

on your internal team, helps address skills

shortages and allows you to focus on strategic

priorities such as improving access controls,

reviewing policies, or enabling training, without

living in fear that something's being missed.

Having these new data capabilities is also

likely to boost buy-in from the C-suite,

especially those who may be hesitant about

investing in new security tools, but are aware of

the risk to the firm's reputation.

THE LOGICAL STEP FORWARD

Partnering with a managed security provider

takes the burden off you and your IT security

team. It gives you clear visibility, faster

resolution and fewer sleepless nights. No more

juggling tools. No more missed alerts. You no

longer need to be pulled away from your

regular responsibilities to play the role of full-time

security analyst - a role you likely didn't

sign up for.

MXDR helps tip the balance of power away

from threat actors and in your favour. Working

with a digital managed service provider gives

you the guidance and support to take the right

steps and strengthen your security posture.

That means no more blind spots and no more

playing catch-up. NC




THE RESULTS FROM THE AWARDS

OF 2025 CAN BE SEEN HERE:

WWW.NETWORKCOMPUTINGAWARDS.CO.UK

The team at Network Computing congratulate all the winners and runners-up and

thank the sponsors, the Awards night attendees and everyone who made

nominations or cast votes.

ATTENTION VENDORS:

The BENCH TESTED PRODUCT OF THE YEAR

category is for all solutions that have been

independently reviewed for Network Computing.

We congratulate NetAlly, the 2025 winners of

this Award.

To ensure that your solutions are contenders for this Award in 2026 you will need to

book them in for review. Contact: dave.bonner@btc.co.uk

THE AWARDS ARE SPONSORED BY:
