
NETWORKcomputing

INFORMATION AND COMMUNICATIONS - NETWORKED  www.networkcomputing.co.uk

ON-SITE INSIGHTS
The evolution of in-person IT support

NETWORK RESILIENCE
Gaining a competitive edge with resilient IT

CENTRE OF ATTENTION
Transforming data centres for the AI era

AI NETWORKS
How to safeguard your AI deployment

NOVEMBER/DECEMBER 2024 VOL 33 NO 04


REGISTER TO ATTEND


Simplify Complexity. Unlock Potential.

AI and cloud are unlocking new frontiers. Join Cloud & AI Infrastructure on 12-13 March 2025 to shape your future-ready strategies. Gain insights into AI scalability, regulatory compliance, and cost optimisation, empowering your business to thrive in an evolving digital landscape. Be part of the infrastructure driving tomorrow’s innovation.

www.cloudaiinfrastructure.com/btc

CLOUD & AI INFRASTRUCTURE

cloudaiinfrastructure.com


COMMENT

SUPPORTING AI NETWORKS

Welcome to our last issue of 2024, where we're looking at the impact of AI on the network

infrastructure and beyond. First, Linas Dauksa, Product Manager at Keysight

Technologies, gives us a guide to five fundamental aspects of AI networking that anyone

looking to implement the technology successfully should know - while also highlighting the

challenges the technology can pose to large networks.

As Linas explains, for AI networking to be successful it will require the infrastructure and devices

supporting it to be fully optimised: "Corporate research labs and academic settings are working

on analysing all aspects of building and running effective AI networks to solve the challenges of

working on large networks, especially as best-practices are continuously evolving. It's only

through this reiterative, collaborative approach that the industry can achieve the repeatable testing

and agility in experimenting "what-if" scenarios that is foundational to optimising the networks

that AI is built upon."

The impact of AI on IT infrastructure, in this case the data centre, is also explored by Andrew

Donoghue, Global Segment Strategy at Vertiv, in his article this issue. "As AI's computing requirements

grow in complexity and intensity, data centre operators are faced with a pressing need to

rethink facility design and operation," according to Andrew. "Those who act decisively and strategically

in this area will position themselves at the forefront of this transformation, enabling both operational

efficiency and future readiness." Vertiv have developed an AI Imperatives framework for

staying ahead in the AI era, fundamental to which is a robust, flexible and scalable infrastructure.

REVIEWS: Dave Mitchell
DEPUTY EDITOR: Mark Lyward (netcomputing@btc.co.uk)
PRODUCTION: Abby Penn (abby.penn@btc.co.uk)
DESIGN: Ian Collis (ian.collis@btc.co.uk)
SALES: David Bonner (david.bonner@btc.co.uk)
SUBSCRIPTIONS: Christina Willis (christina.willis@btc.co.uk)
PUBLISHER: John Jageurs (john.jageurs@btc.co.uk)
Published by Barrow & Thompkins Connexion Ltd (BTC)
35 Station Square, Petts Wood, Kent, BR5 1LZ
Tel: +44 (0)1689 616 000
Fax: +44 (0)1689 82 66 22

SUBSCRIPTIONS: UK: £35/year, £60/two years, £80/three years;
Europe: £48/year, £85/two years, £127/three years;
ROW: £62/year, £115/two years, £168/three years

© 2024 Barrow & Thompkins Connexion Ltd. All rights reserved. No part of the magazine may be reproduced without prior consent, in writing, from the publisher.

With an AI-ready infrastructure in place there's also an operational role for AI in the data centre

to consider. "It is likely that the future of data centres is inextricably linked to AI," according to

Ramzi Charif, VP Technical Operations, EMEA, VIRTUS Data Centres. "As the technology continues

to evolve, it will reshape not only how data centres operate but also their role in the broader

digital economy. Data centres that are proactive in adopting AI will lead the industry into an era

defined by greater efficiency, enhanced security, and heightened sustainability."

AI isn't a panacea for all our IT ills of course, and ultimately is only as good as the data it's

working with, as Kevin Kline, Database Expert at SolarWinds cautions in his article on generative

AI and data governance this issue: "In the rush to leverage GenAI, some organisations have created

their own large language models (LLMs) based on internal data. More often than not, they

are unprepared for this giant leap forward. Without the right data governance in place, there's a

risk of error-prone data or data without proper tagging or categorisation proliferating through

the organisation." And lax data governance often goes hand-in-hand with security vulnerabilities

- as legislation like the GDPR will be quick to remind us. Updating our data literacy for the AI

era through training and database expertise will be key here according to Kevin - which means

there's a role for us humans to play in the great AI gold rush after all. NC

GET FUTURE COPIES FREE BY REGISTERING ONLINE AT WWW.NETWORKCOMPUTING.CO.UK/REGISTER



NOVEMBER/DECEMBER 2024 VOL 33 NO 04

CONTENTS


INSIGHTFUL SUPPORT..........18
Traditional in-person IT support is not disappearing, it's changing, according to Patrycja Sobera and Vivek Swaminathan at Unisys

NETWORK RESILIENCE............26
Alan Stewart-Brown at Opengear gives us his key takeaways for CIOs looking to gain a competitive edge through resilient IT

DEDICATED TO UNIFIED ENDPOINT MANAGEMENT?..12
Nadav Avni at Radix Technologies makes the case for unified endpoint management (UEM) for dedicated devices

AI DEPLOYMENT....................20
Our AI feature this issue looks at five crucial aspects of AI networking, offers advice on avoiding an AI identity crisis, and explains why we all need to raise our data literacy game for Gen AI adoption

FEATURE: DATA CENTRES......28
How AI can provide efficiency and security for the data centre with a robust, scalable infrastructure in place, and why power, not cooling, is the next big challenge facing the data centre industry

COMMENT.....................................3
Supporting AI networks

INDUSTRY NEWS.............................6
The latest networking news

ARTICLES

THE RECOVERY POSITION..............08
By Stephen Young at Assurestor

ARE YOU READY FOR NIS2?...........10
By Kim Larsen at Keepit

DOUBLE DEFENCE..........................14
By Larry Goldman at Progress

WHY MULTI-GIG MATTERS.............16
By Hugh Simpson at Zyxel Networks

5 THINGS YOU SHOULD KNOW ABOUT AI NETWORKING...............20
By Linas Dauksa at Keysight

GEN AI AND THE NEED FOR DATA MANAGEMENT GOVERNANCE......22
By Kevin Kline at SolarWinds

SAFEGUARDING YOUR ORGANISATION’S AI DEPLOYMENT...24
By Andy Thompson at CyberArk

TRANSFORMING DATA CENTRES FOR THE AI ERA......................................28
By Andrew Donoghue at Vertiv

THE BALANCE OF POWER..............30
By Gary Tinkler at Northern Data Group

REDEFINING DATA CENTRES WITH ARTIFICIAL INTELLIGENCE................32
By Ramzi Charif at Virtus Data Centres

UNTANGLING NETWORK COMPLEXITY.................................34
By Joe Cunningham at Daisy Corporate Services

REVIEWS

PERLE IOLAN SCR256 CONSOLE SERVER.............................................09




INDUSTRY NEWS


LANCOM Systems launches its first Wi-Fi 7 access points

The new LANCOM Wi-Fi 7 series features scan radio,

redundant PoE supply, and intelligent energy management.

The Wi-Fi 7 standard (IEEE 802.11be) offers higher data

capacities, greater channel widths, higher speeds, and modern

functions such as multi-link operation, multi-RU, and

puncturing. In addition to the classic 2.4- and 5-GHz bands,

Wi-Fi 7, like Wi-Fi 6E, uses the Wi-Fi-exclusive 6-GHz band to

ensure interference-free connections with minimal latency and

maximum data throughput. This is particularly important for

real-time applications such as machine control or virtual reality.

Both new models support all three frequency bands (2.4 GHz,

5 GHz and 6 GHz) and thus provide the best possible interface

for all end devices. A fourth radio module functions as an

integrated scan radio that delivers greater service quality and

security as well as a better overview of the network.

Fortinet highlights the need for a cyber-aware workforce

Fortinet has released its annual 2024 Security Awareness and

Training Global Research Report, highlighting the crucial role

a cyber-aware workforce plays in managing and mitigating

organisational risk. Key findings from the report include:

As malicious actors use AI to increase the volume and

velocity of their attacks, leaders believe these threats will be

harder for their employees to spot. More than 60% of

respondents expect more employees to fall victim to attacks

in which cybercriminals use AI. However, most respondents

(80%) also say enterprise-wide knowledge of AI-augmented

attacks has made their organisations more open to

implementing security awareness and training.

Employees can be an organisation's first line of defence, but

leaders are increasingly worried that their employees lack

security awareness. Nearly 70% of those surveyed believe

their employees lack critical cybersecurity knowledge, up

from 56% in 2023.

Leaders recognise the importance of security awareness

training but believe specific attributes make some training

programs more effective than others. Three-quarters say they

plan their security awareness campaigns, delivering content

monthly (34%) or quarterly (47%). Executives also point to

high-quality content playing a leading role in the success or

failure of the program.

New Supermicro liquid-cooled AI SuperCluster solutions

Supermicro has unveiled its new line-up of AI SuperCluster

solutions featuring the NVIDIA Blackwell platform. The

SuperClusters will significantly increase the number of NVIDIA

HGX B200 8-GPU systems in a liquid-cooled rack, resulting in a

large increase in GPU compute density compared to their current

liquid-cooled NVIDIA HGX H100 and H200-based SuperClusters.

The company is enhancing the portfolio of its NVIDIA Hopper

systems to address the rapid adoption of accelerated computing

for HPC applications and mainstream enterprise AI.

"Supermicro has the expertise, delivery speed, and capacity to

deploy the largest liquid-cooled AI data centre projects in the

world, containing 100,000 GPUs, which Supermicro and NVIDIA

contributed to and recently deployed," said Charles Liang,

president and CEO of Supermicro. "Using our Building Block

approach allows us to quickly design servers with NVIDIA HGX

B200 8-GPU, which can be either liquid-cooled or air-cooled."

Catapulting quantum innovation into industry

Seven leading businesses have joined Digital Catapult's latest

quantum innovation accelerator to fast track the development

of solutions and accelerate the practical application of deep tech.

Aiming to help solve complex market challenges in major sectors

of the UK economy including transport, defence and telecoms,

the programme convenes unique quantum capabilities and

innovation consultancy to de-risk technology adoption.

The Quantum Technology Access Programme is part of a wider

Innovate UK funded project called 'Quantum Data Centre of the

Future' which aims to embed a quantum computer within a

classical data centre to explore real-world access to quantum

technologies. Partners include ORCA Computing, Riverlane and

PQShield, and the inaugural programme saw a 26% boost in

confidence about quantum computing from industry leaders such

as Rolls Royce, Airbus and the Port of Dover. This year, Digital

Catapult welcomes more household names, including BAE

Systems and Vodafone, signalling growing industrial interest in the

technology and sectors that could benefit from quantum innovation.



EVENT ORGANISERS: Do you have something coming up that may interest readers of Network Computing? Contact dave.bonner@btc.co.uk

FORTHCOMING EVENTS 2025

12-13 MAR: CLOUD AND AI INFRASTRUCTURE, ExCel, London, www.cloudaiinfrastructure.com/btc
2-3 APR: DTX MANCHESTER, Manchester Central, www.dtxevents.io/manchester
3-5 JUNE: INFOSECURITY EUROPE, ExCel, London, www.infosecurityeurope.com
1-2 OCT: DTX LONDON, ExCel, London, www.dtxevents.io/london


OPINION: DISASTER RECOVERY

THE RECOVERY POSITION

STEPHEN YOUNG, EXECUTIVE DIRECTOR AT ASSURESTOR ASKS "IS

THERE A ROLE FOR A CHIEF INFORMATION RECOVERY OFFICER?"

There are many senior technology roles in

an organisation - from the CIO and

CISO to the CTO and even CPO - all

focused on specific aspects of business security,

risk and compliance or operational efficiency.

The list of three-letter acronyms becomes ever

longer as we look to senior level responsibility

for a company's information and data security.

So, forgive me for suggesting yet another.

There's no single role responsible for disaster

recovery. Why not?

With more than three-quarters of senior IT

professionals in our recent survey admitting

that their organisation has lost data due to a

system failure, human error or a cyberattack in

the past 12 months, there's a clear message

here. Knowing that at some point your data -

and business - will be at risk, the focus shifts from

security and prevention to recovery.

We know that there can be huge operational

and financial implications to data loss and

business downtime, while a company's

reputation is also at risk. Look at the global

outage that affected so many organisations

earlier this year, from airlines to healthcare.

While not a traditional cyberattack, it's been

estimated to cost up to $1.5bn. The 2023

Rhysida attack at the British Library also

highlights the impact on an organisation

operating with legacy systems and security in

today's aggressive cyber environment.

Sadly, for some there is no possibility of

recovery. As a business that specialises in

business resilience and data recovery, we've

seen this at first hand. I would argue that there

has never been a more crucial time to consider

a senior role to protect and recover a business

from a potentially catastrophic disaster.

Disaster recovery may be the responsibility of

one, two, or even several of the current C-suite roles. But the nuances of delivering a

thorough disaster recovery strategy, and

bringing together the many disparate aspects

of IT, may not always be apparent to someone

who does not specialise in this field and is

charged with keeping operations running -

business-as-usual.

Organisations today are investing in the

smooth and efficient running of the business,

together with the wellbeing of staff. The rise of

the Chief Wellbeing Officer since the

pandemic is just one example. Given that the

majority of respondents in our survey lack

confidence in their own recovery systems and

processes, now could be the time to consider a

role primarily dedicated to the protection and

recovery of the business, its data and its staff.

This role of a Chief Information Recovery

Officer could focus resources and expertise,

including staff, technology, solutions and

more, on the singular discipline of recovering

the business from any form of disaster -

significantly increasing their readiness to

address these events. When disaster strikes,

whether fire, flood, user error, or more

commonly a cyberattack (in particular

ransomware) the skills and experience of

managers and IT teams do not always extend

to the often chaotic and business-saving

processes needed to recover.

An unexpected and rapidly escalating attack

will challenge any recovery decisions or

processes. How these are executed in a real-life disaster scenario is crucial, with

absolutely no room for error. With ransomware

demands rising, and the margin for error

getting smaller, any investment in recovery

solutions is, on balance, a worthwhile one. But

that investment will not be realised if the

technology does not meet the business's

realistic RTOs and RPOs, is not deployed

correctly, managed appropriately and tested

within the parameters of a thorough and

frequent testing regime.

But more than that, the recovery process

needs a guiding hand, as part of a broader team that includes IT, security and risk management, reporting to the Board on the business's ongoing recoverability status. NC



PRODUCT REVIEW

Perle IOLAN SCR256 Console Server


As data centre demands grow, support

staff rely heavily on secure remote access

to critical infrastructure devices. Console

servers are an essential requirement as they

accelerate troubleshooting by negating the

need for lengthy on-site visits, and out-of-band

(OOB) management ensures core devices are

accessible even during network outages.

Perle Systems specialises in secure connectivity

solutions and its latest IOLAN SCR Console

Servers deliver a wealth of high-level OOB

management features. Targeting top-of-rack

deployments, they focus on access security,

data protection and resiliency.

The IOLAN SCR256 appliance on review has

a device management connection for every

occasion as it presents 24 Gigabit Ethernet

ports, 24 RJ45 RS232 ports, eight USB ports

and two Gigabit SFP uplink ports. The IOLAN

SCR family includes three other options with

48, 32 or 16 RS232 serial ports, all with USB

ports, as well as dual 10GbE SFP+ multi-

Gigabit uplink ports.

There's plenty of power on tap as they team

up an embedded 1.5GHz AMD Ryzen

R1305G CPU with 8GB of DDR4 memory.

Their fanless design means there are no

moving parts to fail. You get dual redundant

PSUs, and a smart feature is their internal

256GB SSD, which allows you to

simultaneously run multiple native Docker

containers and custom applications in VMs that

require real-time responses.

Access interruptions due to network failures

are covered as Perle has models with optional

cellular modules. These provide dual SIM slots

for WAN failover and facilities for sending

SMS messages to connect or disconnect

cellular access, retrieve logs and remotely

reboot the appliance.

We found installation very simple. The console

servers offer zero-touch provisioning, self-assigning an IP address if they cannot contact a DHCP server. The appliance can be accessed

from its intuitive web console and via SSH or

Telnet to its CLI. Perle designed this to be

'Cisco-like,' so support staff don't need extra

training to use it.
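For teams who want to script that CLI access, a minimal Python sketch using the open-source Paramiko SSH library might look like the following. The address, credentials and the 'show version' command are illustrative assumptions based on the review's description of a 'Cisco-like' CLI, not taken from Perle's documentation:

```python
import paramiko  # third-party SSH library: pip install paramiko

# Hypothetical values: substitute the address and credentials of your
# own appliance. 'show version' is an assumed command; check the Perle
# documentation for the actual command set.
HOST, USER, PASSWORD = "192.0.2.10", "admin", "secret"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
client.connect(HOST, username=USER, password=PASSWORD)

stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())

client.close()
```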

The appliances provide full IPv4/IPv6 routing

capabilities plus support for RIP, OSPF, and

BGP protocols for easy integration into data

centres including VRRPv3 for network

redundancy support. Their Perle-ARMOR

security shield delivers superb access security

along with an integrated firewall and secure

boot. You can enable 2FA on user accounts

and apply AAA (authentication, authorisation,

accounting) services such as RADIUS with RadSec, TACACS+ and LDAP.

User accounts assigned the operator role

are not permitted to manage the appliance,

and you can decide which ports they are

allowed to access. The web console shows all

Ethernet and serial ports along with

connected devices, and the EasyPort Web

feature provides fast, secure management

access to attached devices with one click.

The PerleVIEW Central Management Platform

can be run on-site or via cloud providers, such

as Amazon or Azure, and delivers an incredible

range of remote management services.

Appliances connect to PerleVIEW over secure

VPNs, so only they can communicate with it.

Thus, it presents a single pane of glass for

managing your geographically distributed fleet

of Perle console servers.

Devices can be grouped by type or location

and receive scheduled firmware upgrades.

Their GUIs and console ports can be securely

accessed, and scripts deployed to remotely

configure settings such as ports and routes.

Perle's Network Watchdog service ensures

always-on connectivity by seamlessly switching

to the cellular module when it detects a network

outage. If the cellular connection drops, it

automatically resets the connection.

Perle's IOLAN SCR Console Servers are the

perfect choice for support staff who demand

secure OOB access to core network devices.

The versatile IOLAN SCR256 on review

presents an impressive array of management

ports, delivers top-notch data security, and

offers optional cellular access to ensure always-on connectivity. NC

Product: IOLAN SCR256 Console Server

Supplier: Perle Systems

Web site: www.perle.com

Sales: +44 333 0411 102



SECURITY UPDATE

ARE YOU READY FOR NIS2?

KEEPIT CISO KIM LARSEN EXPLAINS WHY UK IT TEAMS ALSO NEED TO PLAN FOR THE EU'S NEW

CYBERSECURITY DIRECTIVE

In October, a new directive designed

to safeguard critical infrastructure and

protect against cyber threats came into

force across the European Union. And

although the United Kingdom is no longer a

member of the EU, it's still really important to

understand the changes: the Network and

Information Systems Directive (NIS2) is highly

relevant, especially for UK businesses

operating in the EU. Not to mention that the

regulations align closely with the UK's own

robust cybersecurity frameworks, including

the anticipated Cyber Security and Resilience

Bill introduced in the King's Speech this

summer. So preparing for changes now could

help when it comes to complying with UK

regulations in the future.

WHY DOES THIS MATTER IN THE UK?

1. Set yourself apart

Like GDPR, NIS2 attempts to unify the way the

whole of the EU approaches data. And, much

like GDPR, it's anticipated that NIS2 will set

global standards that will increasingly become

best practice worldwide. By adopting NIS2

standards early, UK businesses will make it

easier for EU partners to work with them. And,

if nothing else, demonstrating an

understanding of and adhering to high

cybersecurity standards can help businesses

stand out, especially in sectors where security

and trust are crucial.

2. Strengthen business relationships with EU partners

No business operates in a vacuum, and

many UK organisations rely on strong

relationships with EU partners. These

relationships may increasingly hinge on

following NIS2 standards: as we saw with

GDPR, many EU companies may require their

suppliers and partners to comply with

equivalent cybersecurity measures. Failing to

do so could limit opportunities for

collaboration or result in lost contracts. It

makes sense to start now, and really get to

know the directive, so it's easier to align

cybersecurity practices with NIS2.

3. Align with future regulations

When the Cyber Security and Resilience Bill

was introduced to Parliament, it demonstrated

that although the UK is no longer bound by

EU legislation, it is almost inevitable that the

UK government will introduce similar

regulations to maintain alignment with

international standards. It makes sense.

Given the interconnected nature of global

cyber threats, it's not practical to reinvent or

move away from existing regulation. By

understanding what's coming, and aligning

with NIS2, UK organisations will be much

better prepared for future national regulatory

changes too - and of course better protected

against cyber threats.

4. Build cyber resilience

This goes beyond compliance for

compliance's sake. Now that it is in force, NIS2 is designed to protect

organisations from cyber attacks and can

significantly enhance cyber resilience. With

an emphasis on risk management, incident

response, and recovery, UK businesses that

adopt these practices can better protect

themselves, respond more effectively to

incidents, and, ultimately, safeguard their

operations and reputation.




ENTER THE CYBER SECURITY AND

RESILIENCE BILL

But it's not just NIS2 that needs to be on UK

businesses' radar. When the UK

government set out plans for a Cyber

Security and Resilience Bill, it represented a

significant strengthening of the UK's

cybersecurity resilience. If passed, this

legislation aims to fill critical gaps in the

current regulatory framework, which has

been inherited from EU law and needs to

adapt to the evolving threat landscape.

The good news, however, is that because much of the Bill and NIS2 align, the burden on business isn't as great as it could be. The key provisions of the Bill include:

1. Expanded regulatory remit: The Bill

expands the scope of existing regulations to

cover a wider array of services that are

critical to the UK's digital economy. This

includes supply chains, which have become

increasingly attractive targets for

cybercriminals, as we saw in the aftermath

of recent attacks on the NHS and the

Ministry of Defence. This means that more

companies need to be aware of potential

legislative changes.

2. Stronger regulatory framework: The Bill

will put regulators on a stronger footing,

enabling them to ensure that essential cyber

safety measures are in place. This includes

potential cost recovery mechanisms to fund

regulatory activities and proactive powers to

investigate vulnerabilities.

3. Increased reporting requirements: an

emphasis on reporting, including cases

where companies have been held to

ransom, will improve the government's

understanding of cyber threats and help to

build a more comprehensive picture of the

threat landscape, for more effective

national response strategies.

If passed, the Cyber Security and

Resilience Bill will apply across the UK,

giving all nations equal protection.

HOW THE NEW RULES FIT WITH

CURRENT LEGISLATION

This is not a case of completely rewriting

the rule book. The UK already has a

strong foundation when it comes to

cybersecurity. Much of this guidance

actually aligns closely with the principles

of NIS2 and the new Cyber Security and

Resilience Bill. Take, for example, the

National Cyber Strategy 2022, which

focuses on building resilience across the

public and private sectors, strengthening

public-private partnerships, enhancing

skills and capabilities, and fostering

international collaboration. Or the

National Cyber Security Centre (NCSC)

guidance, which complements new rules

with its focus on incident reporting and

response and supply chain security. So

companies already complying with these

rules are starting off strong.

SOBERING LESSONS

This is not just about complying with the

latest regulations. Cyber attacks can be

devastating to the organisations involved

and the customers or users they serve.

When it comes to understanding why

cybersecurity and resilience is important,

there are several high-profile incidents in

the UK that demonstrate the impact of

an attack.

Take for example the ransomware attack

on NHS England in June this year, resulting

in the postponement of thousands of

outpatient appointments and elective

procedures. Or the 2023 cyberattack on

Royal Mail's international shipping business

that cost the company £10 million and

highlighted the vulnerability of the transport

and logistics sector. Or the security breach at Capita, also in 2023, which disrupted services to local government and the NHS and resulted in a £25 million loss.

We've already seen that, when it comes

to data, it's impossible to operate in a

silo. The way we work across borders

and geographies means that legislation

and directives can reach much further

than the countries they're originally

intended for. Understanding NIS2 and

preparing for it means that UK businesses

can better protect themselves against

cyber attacks, that they're more attractive

to European business partners, and that

they're contributing to national cyber

resilience. NC



OPINION: UNIFIED ENDPOINT MANAGEMENT

DEDICATED TO UNIFIED ENDPOINT MANAGEMENT?

NADAV AVNI, CMO OF RADIX

TECHNOLOGIES MAKES THE

CASE FOR UNIFIED ENDPOINT

MANAGEMENT FOR

DEDICATED DEVICES

For companies that employ hundreds or

thousands of workers worldwide, the

default network connectivity tools include

mobile phones, tablets, and laptops.

Companies also deploy dedicated devices that

assume specific workloads in designated

places. These smart devices come in all

brands, functions, and operating systems (OS).

Thankfully, a unified endpoint management

(UEM) platform is all you need to keep the

entire fleet of smart devices working with the

rest of the network.

For corporations, dedicated devices are a

no-brainer. Situations that sometimes

overwhelm human workers are easy for

machines to handle quickly and efficiently.

Think ATMs, self-service information kiosks,

and self-checkout counters. Dedicated devices

enable companies to deploy services in busy

or inhospitable areas, such as digital signage

displays along busy freeways.

Companies even use dedicated devices as

untiring monitors 24/7. This might include

safety monitors installed on factory floors,

health monitors that record patients'

conditions, or warehouse and delivery service

tablets that track packages and personnel.

Keeping all these devices in top working order

takes a reliable UEM system.

WHY UEM IS A MUST FOR

DEDICATED DEVICES

A unified endpoint management system

enables IT teams to manage, monitor, and

secure a business's end-user devices. Tracking

a company's dedicated devices is a tall order,

especially for those that have large fleets.

Devices will come and go during the

company's lifetime, and suppliers of these

devices are similarly fluid. For the most




part, companies will acquire and deploy

dedicated devices from various brands

and operating systems. In some cases,

brands will shift from an older OS to a

more fitting one, such as Android TV.

It is necessary to have the right device

management system to oversee all

company devices and ensure they work

with one another across the network.

Otherwise, you can imagine the chaos

that would unfold if departments can't get

the information they need because

another group's devices aren't compatible

with theirs. Just think of the wasted time

and resources when you manually

perform updates on one device at a time.

A unified endpoint management system

helps companies keep their entire device

fleet in line. You no longer need separate

tools for each equipment brand; a single

UEM platform can manage everything.

Regardless of the device's operating

system or location, a reliable device

manager can accommodate and

manage all units connected to the

corporate network.

WHEN DOES UEM MAKE SENSE

FOR YOUR BUSINESS?

If your business revolves around

dedicated devices, investing in a unified

endpoint management system is wise.

This is especially true if the number of

devices in your fleet ranges from a few

dozen to thousands. At that point, remote management is realistically the only way to keep all your devices working at their best. What's more, say you deal

with multiple brands that run on various

operating systems. Only a unified

endpoint management platform will get

the job done. More specifically, to

determine whether your business needs

a UEM solution, you'll want to assess

two areas:

THE NEED FOR A DEVICE

AGNOSTIC MANAGER

Again, companies that deal with

hardware manufacturers and resellers

will often work with multiple vendors

simultaneously. Similarly, companies

that serve a variety of markets will

frequently require different models for

each segment.

For example, banks that deploy ATMs

will want to show off the latest ATM

technology in business districts and

technology hubs. But they'll also install

regular ATMs that use keypads instead of

touchscreens in less congested cities or

remote locations. Then, they might use

modular, in-wall versions in malls and

shopping centres.

In most cases, companies will award

the bids for each device type to the best

vendor. Chances are, each device will

come from a different supplier. This

makes it challenging to manage the

devices with any real efficiency. A unified

endpoint management system can

perform this duty.

THE NEED TO MANAGE DEVICES

IN REMOTE LOCATIONS

The location also plays an important

role in justifying the need for a unified

endpoint management platform. A

good example is companies that sell

digital advertising space to brands.

These companies operate hundreds of

digital display devices across several

locations. A strong remote device

manager can ensure each device runs

the latest operating system and

software versions.

In contrast, deploying an IT team to

each location to perform manual updates

is impractical and costly. You'll waste

valuable time and human resources each

time an update is needed.

HARNESSING THE BENEFITS OF

UNIFIED ENDPOINT MANAGEMENT

Of course, being device-agnostic and

having remote capabilities are just a

few benefits of a reliable UEM

platform. To make the most of this

investment, look for a system that

maintains wireless connectivity through

a cloud solution like Amazon Web

Services. This ensures all connections

stay secure, redundant, and encrypted.

Cloud connectivity also means the

tools and files needed to perform

remote maintenance are always

available.

The ideal UEM software should also

allow for low-level device

management. You should be able to

make single, multiple, or simultaneous

updates across the entire fleet. Admins

will also need the flexibility to assign

access levels to different users to limit

data exposure.

Finally, admins must also be able to

remotely secure units in danger of getting

stolen or harvested for data. They can

shut down or freeze at-risk devices as

needed. And when everything else fails,

they can remotely erase the unit's

contents to prevent thieves from profiting

off stolen data.

UEM SOFTWARE IS A WORTHY

INVESTMENT

Unified endpoint management software

is an excellent investment that will help

you get the total value from your

dedicated devices. Choose a device

manager that can maintain, manage,

and secure your devices no matter what

software they're running or where they're

located. In doing so, you'll provide end

users with devices that work their best as

often as possible. Customers can also

be confident that their devices are

reliable and safe. NC



SECURITY UPDATE

DOUBLE DEFENCE

LARRY GOLDMAN, DIRECTOR, PRODUCT MARKETING AT PROGRESS GIVES US A GUIDE TO THE

POWERFUL SECURITY PROVIDED BY BORDER AND WEB APPLICATION FIREWALLS

As high-profile cybersecurity breaches continue, the risks of mediocre cybersecurity strategies are real. With the increasing number of

applications and other services available

via the web, it is critical that organisations

protect the perimeter of their networks

and everything within. To achieve an ideal

level of protection, you should understand

what cybersecurity tools and technology

are needed for the task.

RISKS OF AN INADEQUATE

PERIMETER NETWORK OR WEB

APPLICATION PROTECTION

CISOs and network managers often ask

whether they need the dual coverage of

border and web application firewalls

(WAFs). Each solution provides valuable

functions to maximise security. But both

are vital for any organisation with

applications and other services available

via the web.

Network firewalls function as the

frontline of defence and help to protect

the perimeter of an organisation's

networks. Meanwhile, WAFs provide

specific functionality to protect web

applications plus added security

protections for the servers delivering web

applications to users. While WAFs are

trending, combining both as part of a

multifaceted and layered security defence

strategy is essential for organisations

looking to defend their critical assets.

THE CRITICAL ROLES OF BORDER

FIREWALLS AND WAFS:

BORDER FIREWALLS

In addition to being on the frontline to

counter any incoming threats, permitting

network access to only authorised traffic

and mediating traffic flows, border

firewalls serve as a rules-based controller

of network traffic flow. By analysing and

filtering network traffic based on preconfigured

policies, a firewall can allow

or block specific traffic flows based on

several attributes, including source and

destination IP addresses, ports, protocols

or other criteria.

There are a few different types of




firewalls: hardware (for physical devices),

software (installed on servers or devices)

and cloud-based firewalls. These

firewalls are classified based on how

they filter traffic, with two main types:

Packet filtering - This type of firewall operates like a bouncer at a nightclub. It checks specific identifying characteristics of network requests, such as IP addresses, before allowing or blocking traffic.

Stateful inspection - Stateful inspection firewalls continuously monitor the state of network connections by observing all active connections passing through the firewall. They can dynamically open and close ports based on the connection state and inspect entire communication streams for malicious content. They provide more granular control as they understand the context of network traffic. Both filtering styles are illustrated in the sketch below.
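To make the two filtering styles concrete, here is a minimal illustrative sketch in Python. The rules, addresses and helper names are hypothetical, standing in for what a real firewall implements in hardware or kernel code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_port: int
    flags: str  # simplified TCP flags: "SYN" opens, "ACK" continues

# --- Packet filtering: stateless, attribute-by-attribute checks ---
BLOCKED_SOURCES = {"203.0.113.7"}   # hypothetical deny list
ALLOWED_PORTS = {80, 443}           # only web traffic permitted

def packet_filter(pkt: Packet) -> bool:
    """Allow or block on static attributes alone; no memory of past packets."""
    return pkt.src_ip not in BLOCKED_SOURCES and pkt.dst_port in ALLOWED_PORTS

# --- Stateful inspection: remember which connections are established ---
active_connections = set()

def stateful_filter(pkt: Packet) -> bool:
    """Admit new connections only via SYN, then allow follow-on packets
    belonging to a connection the firewall has already observed."""
    conn = (pkt.src_ip, pkt.dst_port)
    if pkt.flags == "SYN" and packet_filter(pkt):
        active_connections.add(conn)        # connection now established
        return True
    return conn in active_connections       # context-aware decision

print(stateful_filter(Packet("198.51.100.9", 443, "ACK")))  # False: no SYN seen
print(stateful_filter(Packet("198.51.100.9", 443, "SYN")))  # True: new connection
print(stateful_filter(Packet("198.51.100.9", 443, "ACK")))  # True: known connection
```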

WEB APPLICATION FIREWALLS

WAFs, an essential part of a broad

security strategy, help protect web-based

applications and web servers

from multiple attack types and threats.

Unlike the traditional network firewalls

discussed above, which operate at the

network and transport layers, a WAF

sits between user endpoint devices and

web application servers and focuses on

HTTP/HTTPS traffic.

WAFs understand how web traffic uses

the HTTP/HTTPS protocols and can

inspect network packets to identify

potential threats - that traditional

network firewalls will not detect - before

they can impact the applications.

A WAF primarily monitors, filters and

blocks web traffic identified as a threat

to web applications. It inspects

incoming requests and applies a set of

rules and policies to identify and

prevent common web application

vulnerabilities and attacks. Similarly to

network firewalls, WAF deployment can

occur via physical devices, virtual

machines or the cloud. Commercial

WAFs can be purchased as standalone

software or as an integrated function

within an application load balancer.

WAFs employ various techniques to

monitor and filter traffic flowing to web

application servers, including:

Signature-based detection - This

occurs when WAFs use rules and

analysis of known attack patterns to

detect malicious activity.

Anomaly-based detection - The WAF will establish a baseline of regular network activity, and any deviation from this baseline will be blocked to stop malicious activity.

Security models - WAFs can use both negative (block) and positive (allow) lists to control traffic flow to web applications. A minimal sketch of the first two detection techniques follows this list.
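As a rough illustration of signature-based and anomaly-based detection, consider this minimal Python sketch; the signatures and baseline are invented placeholders, far simpler than a production rule set such as the OWASP Core Rule Set:

```python
import re

# Hypothetical signatures for two classic web attacks; real WAF rule
# sets are far larger and more nuanced.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # crude SQL injection pattern
    re.compile(r"(?i)<script"),         # crude cross-site scripting pattern
]

def signature_based(request_body: str) -> bool:
    """Block if any known attack pattern appears in the request."""
    return any(sig.search(request_body) for sig in SIGNATURES)

# Anomaly-based: learn a baseline, then flag requests that deviate.
BASELINE_MAX_LENGTH = 512  # assumed to have been learned from normal traffic

def anomaly_based(request_body: str) -> bool:
    """Block requests that deviate from the baseline; here the 'model'
    is just request length, standing in for richer statistics."""
    return len(request_body) > BASELINE_MAX_LENGTH

def waf_decision(request_body: str) -> str:
    if signature_based(request_body) or anomaly_based(request_body):
        return "block"
    return "allow"

print(waf_decision("user=alice&page=2"))                     # allow
print(waf_decision("q=1 UNION SELECT password FROM users"))  # block
```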

In addition to bolstering defences

against web application attacks, WAFs

often include additional features such

as bot attack prevention, DDoS

protection and API security.

BOLSTERING NETWORK FIREWALL

PROTECTION

WAFs augment security provisions in

several ways. The best way to deploy

them is as part of a broad cybersecurity

defence strategy that includes network

firewalls and other complementary

technologies. However, WAFs don't

replace traditional network firewalls.

They enhance the security of existing

tools by enabling an added layer of

security inspections and monitoring

network traffic specific to web

applications and servers.

While deploying both network firewalls and WAFs is essential to a proactive cybersecurity strategy, it is equally critical to implement added cybersecurity components and techniques that complement WAFs and create multi-layered defences against a myriad of cyberthreats. These include intrusion

detection systems (IDS), network

detection and response (NDR)

solutions, security information and

event management (SIEM) systems,

identity and access management (IAM) and zero-trust

network access. NC



OPINION

WHY MULTI-GIG MATTERS - AND IT'S NOTHING (MUCH) TO DO WITH AI

HUGH SIMPSON, EMEA MARKETING DEVELOPMENT MANAGER, ZYXEL NETWORKS, EXPLAINS WHY

MULTI-GIGABIT SWITCHES ARE NOW ALMOST ESSENTIAL AT EVERY LEVEL OF THE

INFRASTRUCTURE - AND ARE SET TO BECOME EVEN MORE POPULAR.

Spoiler alert! This article will not try to

convince you that AI is the reason that

multi-gigabit switches are now an

essential component of every network. While

just about every article or blog in every

publication or website will cite AI as being

the transformational technology that is

central to whatever that particular article

is about, this one will avoid it

completely. Well, almost completely.

What this article will talk about is why

multi-gigabit switches are becoming

such a popular choice, why that

matters, and how it makes a difference,

both to users and administrators.

WHAT DO WE MEAN BY 'MULTI-GIG'?

The reason multi-gig options are selling

well is quite simple - they enable higher

throughput and give customers more

options and investment protection. But

before we go any further, let's make sure we

fully understand what we really mean by multi-gigabit.

What we are talking about here are

switches that have ports capable of

supporting multiple Ethernet

connection speeds - 1G, 2.5G,

5G, and 10G. What we are

not referring to are switches

that have different ports

that support different

connection speeds. Pretty

much every switch on

the market today falls

into the latter category and it's easy enough to

understand why that's the case.

The aggregation of traffic that happens inside

the switch means that you will always need

more bandwidth on the uplink than you will

need to take out to the endpoint devices. In

the past a switch designed to support clusters

of PC users, would have typically had 16 or

24 ports that supported 1G connections and a

couple of higher speed uplink ports - typically

10G SFP ports.
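The arithmetic behind that uplink sizing is worth making explicit. Here is a quick sketch using the example port counts above; the oversubscription framing is ours, not a vendor figure:

```python
# Port counts from the example above: 24 access ports at 1G feeding
# two 10G uplinks. A ratio above 1.0 means the uplinks can saturate
# if every access port runs flat out at once.
access_ports, access_gbps = 24, 1
uplink_ports, uplink_gbps = 2, 10

access_capacity = access_ports * access_gbps   # 24 Gbps
uplink_capacity = uplink_ports * uplink_gbps   # 20 Gbps
print(f"{access_capacity / uplink_capacity:.2f}:1")   # 1.20:1

# Move the same 24 ports to 2.5G for WiFi 7 access points and the
# ratio jumps to 3.00:1, which is why uplink speeds must rise in step.
print(f"{access_ports * 2.5 / uplink_capacity:.2f}:1")
```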

WHY DO WE NEED IT?

Conveniently, this example of what a basic switch spec looked like until quite recently illustrates why we now need switches with

multi-gig ports. Speed and bandwidth

requirements only ever go in one direction and

in recent times the demands and strains put on

the network have grown considerably. The

pandemic compelled everyone to start using

video conferencing and collaboration tools;

HD content is becoming ultra HD content; and

cloud adoption continues apace as

organisations migrate more workloads to the

hyperscalers. The additional multilayered

cybersecurity measures now being used by

many organisations are also pushing up

demand.

New technologies are driving bandwidth

needs up too. We expect to see much wider

adoption of WiFi 7 over the next few months

and to run a WiFi 7 access point you really

need a 2.5G connection. This is a prime

example of where multi-gigabit capability can

be incredibly useful.




If you were previously using an access point

that was WiFi 5 or earlier, you would have

been fine with a 1G connection. With a

multi-gigabit switch, when you upgrade to

WiFi 7 there is no need to change anything

except the access point itself. You will still be

able to run at 2.5G and 5G speeds on the

same Ethernet Cat 5 or Cat 6 cable, into the

same switch port. Having that multi-gigabit

support makes it easier and much less

expensive to upgrade than it would have

been otherwise.

Not having to rip out and replace the

cabling is a really significant advantage of

using multi-gig, as you save a great deal of

time, inconvenience and expense as a result.

EASIER UPGRADES

Another advantage of multi-gig is future-proofing.

Customers will always want to have

the ability to upgrade endpoint and edge

devices in the future, and installing a multi-gigabit

switch will mean they won't have to

worry about changing the switch or the

cabling infrastructure when they upgrade the

WiFi, IP surveillance, or other clusters of

devices at the edge.

Indeed, not having to upgrade the whole

infrastructure is probably the biggest benefit

of multi-gig switches for most customers.

While there is an argument that it makes

sense at some point to use fibre right across

the network, as it will only connect at 1G or

10G - and nothing in between - you either

upgrade everything or nothing. That has

significant implications in terms of cost and

disruption. With multi-gig you can take more

of a graduated approach. Switches that can

support a step-up in speeds, from 1G, to

2.5G to 5G and eventually, 10G, give you

more options.

A NEW ERA IN SWITCHING

It also makes good sense to take a gradual

approach given that we are now entering a

new phase of development in switching

technology. While 10G has been recognised

for some time as the speed you would want

to run across the backbone of the network,

the sharply increased demand on bandwidth

has started to put that level of capacity under

strain.

If there is going to be a bottleneck on the

network today, it's most likely to be here - but

that's not a situation that can be accepted for

long since, if there is some bandwidth-related

latency on the core fabric of the network,

what is the point of upgrading to new

technologies on the end points?

The industry is responding, of course, and

in the next few months we are going to see

more core switches with higher uplink speeds

being brought to market. Zyxel Networks is

planning to launch one that will provide 25G

aggregation ports and up to 100G on the

uplink. That will arrive in Q1 2025 and there

are more options coming later in the year.

EXPANDED OPTIONS

At the same time, we'll be expanding the

multi-gig options available on our switches to

ensure that we provide good flexibility and

investment protection to customers who want

to take advantage of WiFi 7 and all the other

emerging technologies that are driving

bandwidth needs skywards.

The really good news here is that, as multi-gig

capability becomes more commonplace

on switches, the price of those devices will

start to come down, making multi-gig much

more affordable for every size of

organisation.

One final point worth considering is

configuration. While unmanaged switches

really just need to sit there and keep running,

multi-gig switches that are carrying traffic

between edge devices and the network

backbone really do need to have smart

managed capability that can be accessed

remotely. With Zyxel Networks switches,

admins can do this using our Nebula cloud

management platform and as multi-gigabit

devices become more widespread, we've

seen Nebula being used much more to

optimise their performance.

Multi-gigabit matters because it's what you

need to deliver an appropriate level of

performance and economy on the network

today. And as for AI? Well, it does have a

role to play, but not necessarily in making

multi-gigabit switching more efficient or

effective. We are using artificial intelligence

and machine learning to analyse

performance data from our switches and

make improvements to our products, and to

enhance network security. Other than that,

for now at least, multi-gig is an AI-free zone.



OPINION: I.T. SUPPORT

INSIGHTFUL SUPPORT

TRADITIONAL IN-PERSON I.T.

SUPPORT IS NOT

DISAPPEARING, IT'S

CHANGING, ACCORDING TO

PATRYCJA SOBERA, SVP AND

GM, DIGITAL WORKPLACE

SOLUTIONS AND VIVEK

SWAMINATHAN, DIRECTOR

OF PRODUCTS &

SOLUTIONS, UNISYS

Imagine this - it's Friday afternoon when

your client's embedded sensor goes off

on a critical machine. Within a matter of

minutes, manufacturing comes to a halt.

Since you received an immediate alert, a

technician from the labour marketplace with

the proper certifications and credentials is

called in immediately. When the job is

allocated to a technician, the client can

track the request via a mobile application.

Meanwhile, an experience management

office (XMO) runs diagnostics on the

device, telling the technician exactly what

is wrong with the sensor and what parts

need to be fixed. Once on-site, the

technician realises the problem is more

complicated and decides to confer with

their offsite subject matter expert (SME).

The technician fires up the AR/VR app,

enabling the SME to assist them remotely.

The experience management office also

detects voltage fluctuation and semi-depleted

batteries, which require a fix, and

alerts the on-site technician. Within a few

hours, everything is resolved, and the

machine is up and running.

This is the future of in-person support,

where humans and machines work hand-in-hand

to facilitate an ideal client experience

while ensuring technicians - whether on-site

or remote - have the information they need

to solve problems in the moment.




THE COST OF DOWNTIME HAS

INCREASED DRAMATICALLY

All managed service providers (MSPs) know

how devastating an idle production line can be

for their clients. In fact, unplanned downtime

costs are much higher today than five years

ago, costing the world's 500 largest companies

11% of their revenues, totalling $1.4 trillion.

In some sectors, downtime costs have soared

and outpaced inflation - in the automotive

industry, downtime costs are nearly twice as

high as they were in 2019, with an hour of downtime costing $2.3 million - or more than $600 a second. For a heavy industry

plant, costs have risen to nearly four times

what they were in 2019.
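Those figures are easy to sanity-check. A quick back-of-the-envelope calculation, using our own arithmetic on the numbers quoted above:

```python
# The automotive figure: $2.3 million per hour of downtime.
print(2_300_000 / 3600)   # ~639 dollars per second, i.e. "more than $600"

# The Fortune 500 figure: 11% of revenues said to total $1.4 trillion,
# implying combined revenues of roughly $12.7 trillion.
print(1.4e12 / 0.11)
```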

Interestingly, while these costs are staggering,

we have seen the number of incidents

decrease in recent years. Manufacturing now

has 25 downtime incidents a month per facility

compared to 42 in 2019 - and the reason is

smarter IT support.

THE ROLE OF AI AND IOT

With AI dominating the headlines, it's easy to

think it's the solution to all problems, including

tech support. While AI is enhancing the

efficiency and productivity of IT support -

especially help desk support - it's not replacing

human workers but augmenting their

capabilities. AI combined with data from IoT

devices can provide just-in-time information to

help diagnose, troubleshoot and fix issues

before they stop machines. More complex IT

issues that require consultation from an offsite

SME can be solved more efficiently thanks to

technology like AR/VR, mobile computing, and

the cloud. Over time, this shift allows IT

professionals to focus on more strategic and

complex tasks, ultimately improving the overall

quality of support services.

This new IT support evolution can help

businesses move from the old, expensive

support models to more efficient, tech-powered

solutions.

A SHIFT IN SCOPE AND SKILLS

In-person support has existed since the dawn

of IT. However, with higher costs, staffing issues

and tighter company margins, it's time to move

away from relying solely on on-site support.

The complexity of delivering timely service and

managing multiple stakeholders across

geographies also adds to this burden.

Yet, on-site support is not going away. It's

simply changing in scope and skills. Some

MSPs are doubling down on in-person

support in some areas, using technology to

provide support and deliver a higher level of

service to capture new business. For example,

an organisation with a large field force can

build on its core strengths by using new

services beyond PC break/fix and network

rack-and-stack.

With the proper support infrastructure -

excellent knowledge management, just-in-time

training, a global workforce and a robust

service management team - an organisation

can explore many business opportunities like

installing and maintaining IoT devices, kiosks,

digital signage and other "smart" devices.

Additionally, in-person, on-site human

support is often needed at challenging or

sensitive sites. Whether tasked with setting up

and maintaining EV charging stations,

supporting medical devices, or managing oil

rigs and government

areas, human ingenuity

cannot be replaced.

KNOW YOUR

STAKEHOLDERS

Gone are the days when

service was delivered in a

silo. Today, stakeholders

consist of not just IT but the

organisation's business

leaders as well. Management

is looking at business

outcomes - like the cost of

unplanned downtime hours - and

the overall experience of working with

outsourced IT support.

The evolution of in-person support highlights

the shifting landscape of IT service delivery.

While traditional on-site support is changing, it

remains crucial to managing complex

environments where remote solutions are

insufficient. AI, IoT and augmented reality are

not replacing the human touch but are

empowering technicians with real-time data

and expertise to solve problems efficiently.

By embracing new tools and enhancing in-person

support capabilities, businesses can

mitigate the rising costs of unplanned

downtime and create new opportunities,

strengthen stakeholder relationships and

deliver superior client experiences. NC



FEATURE: AI

5 THINGS YOU SHOULD KNOW ABOUT AI NETWORKING

LINAS DAUKSA, PRODUCT MANAGER, KEYSIGHT TECHNOLOGIES HIGHLIGHTS FIVE CRUCIAL

ASPECTS OF AI NETWORKING AND THE CHALLENGES IT CAN POSE TO LARGE NETWORKS

If your organisation has a data centre, it

is likely AI technology will be deployed

into it soon. Whether the AI system will

be a chat bot, provide the automation of

processes across multiple systems, or

enable the analysis of large data sets, this

new technology promises to accelerate and

improve the way many companies do

business. However, AI can be a confusing

and misunderstood concept. In this article

we'll explore five fundamental things you

should know about how AI networking

works and the unique challenges the

technology faces.

1. A GPU is the brain of an AI computer

In simple terms, the brain of an AI

computer is the graphics processing unit

(GPU). Historically, you may have heard

that a central processing unit (CPU) was

the brain in a computer. The benefit of a GPU is that it is a processor that excels at performing math calculations. When an AI

computer or deep learning model is built, it

needs to be "trained," which requires

performing matrix calculations involving

potentially billions of parameters.

The fastest way to do this math is to have

groups of GPUs working on the same

workloads, and even then, it can take

weeks or even months to train the AI

model. After the AI model is built, it is

moved to a front-end computer system and

users can ask questions of the model,

which is called inferencing.
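For a feel of what that training maths looks like, here is a deliberately tiny sketch using NumPy. The toy model and numbers are ours; real models perform the same kind of repeated matrix arithmetic with billions of parameters, which is exactly the work GPUs accelerate:

```python
import numpy as np

# Toy "training" loop: fit weights W so that X @ W approximates Y.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))        # input batch
true_W = rng.normal(size=(64, 8))
Y = X @ true_W                        # targets the model should learn

W = np.zeros((64, 8))                 # parameters to be trained
lr = 0.001
for step in range(500):
    pred = X @ W                      # forward pass: matrix multiply
    grad = X.T @ (pred - Y)           # backward pass: more matrix multiplies
    W -= lr * grad                    # update parameters

print(np.abs(W - true_W).max())       # tiny: the model has been "trained"
# Inferencing is then just the forward pass on new data: X_new @ W.
```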

2. An AI computer contains many GPUs

The best architecture to solve AI workloads

is to use a group of GPUs in a rack,

connected to a switch at the top of the

rack. There can be additional racks of

GPUs all connected in a networking

hierarchy. As the complexity of the problems being solved increases, so does the need for GPUs, with some implementations containing clusters of thousands of GPUs. Picture the

common image of a

data centre with

rows and rows of

computing racks.

3. An AI cluster is a mini network

When building an AI cluster, it is

necessary to connect the GPUs so they

can work together. These connections are

made by creating miniature computer

networks that allow the GPUs to send

and receive data from each other.

Figure 1. An AI Cluster

Figure 1 illustrates an AI Cluster where

the circles at the very bottom represent

workflows running on the GPUs. The

GPUs connect to the top-of-rack (ToR)

switches. The ToR switches also connect

to network spine switches at the top of

the diagram, demonstrating a clear

network hierarchy required when many

GPUs are involved.
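For readers who think in code, a minimal sketch of that hierarchy follows; the counts and names are invented for illustration:

```python
# GPUs -> top-of-rack (ToR) switches -> spine switches, as in Figure 1.
SPINES = ["spine-1", "spine-2"]
RACKS = {f"tor-{r}": [f"gpu-{r}-{g}" for g in range(1, 9)]
         for r in range(1, 5)}

# Every ToR uplinks to every spine, so any two GPUs in different racks
# have multiple equal-cost paths between them.
UPLINKS = [(tor, spine) for tor in RACKS for spine in SPINES]

print(len(RACKS), "racks,", sum(map(len, RACKS.values())), "GPUs,",
      len(UPLINKS), "uplinks")
```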


4. The network is the bottleneck of AI

deployments

Last fall, at the Open Compute Project

(OCP) Global Summit, where participants

were working out the next generation of AI

infrastructure, a key issue that came up

was well articulated by Loi Nguyen from

Marvell Technology: "the network is the

new bottleneck."

GPUs are very effective at solving math

problems or workloads. The fastest way for

these systems to accomplish a task is to

have the GPUs all collaborate on the same

workload in parallel. To do this, the GPUs

need the information that they will work on,

and they need to communicate with one

another. If a GPU does not have the

information it needs, or it takes longer to

write out the results, all the other GPUs must

wait until the collaborative task is complete.

In technical terms, prolonged packet latency or packet loss caused by a congested network can force packet retransmissions and significantly increase the job completion time (JCT). The

implication is that there can be millions or

tens of millions of dollars of GPUs sitting

idle, impacting bottom line results and

potentially affecting time to market for

companies seeking to take advantage of the

opportunities coming from AI.

5. Testing is critical for successfully

running an AI network

To run an efficient AI cluster you need to

ensure that GPUs are fully utilised so you

can finish training your learning model

earlier and put it to use to maximise

return on investment. This requires testing

and benchmarking the performance of

the AI cluster (Figure 2). However, this is

not an easy task as there are many

settings and interrelationships between

the GPUs and the network fabric which

should complement each other

architecturally for the workloads.

Figure 2: An AI data centre testing platform and

how it tests an AI data centre cluster.

This leads to many challenges in testing an

AI network:

- The full production network is hard to reproduce in a lab due to cost, equipment availability, skilled AI network engineers' time, space, power, and heat considerations.
- Testing on a production system reduces the available processing capability of the production system.
- Issues can be difficult to reproduce, as the types of workloads and data sets can differ widely in size and scope.
- Gaining insight into the collective communications that happen between the GPUs can be challenging as well.

One approach to meeting these challenges is

to start by testing a subset of the proposed

setup in a lab environment to benchmark key

parameters such as JCT, the bandwidth the AI

collective can achieve, and how that compares

to the fabric utilisation and buffer

consumption. This benchmarking helps find

the balance between GPU / workload

placement and network design/settings. When

the computing architect and network engineer

are reasonably pleased with the results, they

can apply the settings to production and

measure the new results.
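As a hedged illustration of what such a benchmarking sweep might look like in outline - the harness, settings and workload below are hypothetical placeholders, not a real test tool:

```python
import time

def run_collective(job, settings):
    # Placeholder for launching an AI collective (e.g. an all-reduce
    # benchmark) against the lab fabric with the given settings.
    start = time.perf_counter()
    job(settings)
    return time.perf_counter() - start      # job completion time (JCT)

def sweep(job, candidates):
    # Benchmark each candidate fabric/placement setting, keep the best.
    results = {name: run_collective(job, s) for name, s in candidates.items()}
    return min(results, key=results.get), results

# Stand-in workload: sleeps for a simulated JCT.
fake_job = lambda s: time.sleep(s["simulated_jct"])
best, results = sweep(fake_job, {"baseline": {"simulated_jct": 0.05},
                                 "tuned-buffers": {"simulated_jct": 0.03}})
print("best setting:", best, results)
```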

CONCLUSION

In order to take advantage of AI, the devices

and infrastructure of the AI network need to be

optimised. Corporate research labs and

academic settings are working on analysing

all aspects of building and running effective AI

networks to solve the challenges of working

on large networks, especially as best-practices

are continuously evolving. It's only through this

iterative, collaborative approach that the industry can achieve the repeatable testing and agility in experimenting with "what-if" scenarios

that is foundational to optimising the networks

that AI is built upon. NC

ABOUT THE AUTHOR

Linas has been part of the enterprise and

service provider networking world for a number

of decades. He has held roles in engineering,

engineering management and product

management. His current responsibility is with

the Network Emulator products (formerly known

as IXIA). Linas holds a degree in electrical

engineering from the University of Toronto.



FEATURE: AI

GENERATIVE AI AND THE IMPORTANCE OF

GOVERNANCE IN DATA MANAGEMENT

KEVIN KLINE, DATABASE EXPERT AT SOLARWINDS, EXPLAINS WHY

WE ALL NEED TO RAISE OUR DATA LITERACY GAME IN THE RUSH

TO ADOPT GEN AI

If 2023 was the year that generative AI

(GenAI) took off, then 2024 is when this

game-changing technology really began to

fly. From marketing and sales to human

resources and supply chain management,

GenAI is increasingly being employed to help

organisations seek competitive advantage

while cutting costs, accelerating productivity,

and increasing revenues. And the appetite for

this fast-evolving technology continues to

grow at pace.

These are just some of the conclusions from a

recent McKinsey Global Survey, which found

that 65% of organisations are regularly using

GenAI. But while the report touched on the

benefits, it also zeroed in on some of the

challenges now emerging in areas such as

accuracy, compliance, and security. More than

one in three (36%) organisations reported

experiencing difficulties with data - including

defining processes for data governance,

developing the ability to quickly integrate data

into AI models, and an insufficient amount of

training data - that make it difficult to capture

value from GenAI.

In the rush to leverage GenAI, some

organisations have created their own large

language models (LLMs) based on internal

data. More often than not, they are

unprepared for this giant leap forward. Without

the right data governance in place, there's a

risk of error-prone data or data without proper

tagging or categorisation proliferating through

the organisation.

The result is two-fold: First, an LLM system built on poor-quality data can lead to spurious results. Second, improperly tagged or categorised data means that a

GenAI prompt might retrieve data

inappropriate for the end-user. For example, if

a marketing specialist queries a poorly tagged

internal LLM for some casual marketing

highlights about a client, they might actually

see private, internal sales information. In

situations like this, we all know what happens

when decisions are made on bad or

inappropriate data.

DATA GOVERNANCE GOES HAND IN

HAND WITH GOOD AI

In too many cases, organisations are learning

the hard way about the importance of data

governance - a subject that came to

prominence with the introduction of legislation

such as GDPR, where there's a significant cost

attached to lax data security. The growing focus

on data governance also explains why we're

seeing an uptick in interest around data

engineering - which focuses on the practical

application of data collection, transfer, and

orchestration - and the role of database

administrators (DBAs) who ensure that data is

reliable, accessible, and accurate.

But there's a problem: A shortage of

specialists who do this critical work. What's

more, there's also a chronic problem

concerning data literacy. In fact, I would go as

far as saying most organisations today are

terrible at data literacy. While it's true that

many organisations now embrace data

visualisation tools, often they don't understand

what they see in front of them. In my

experience, too many people simply don't

know how to interpret the data regardless of

how many graphs and charts are included in

dashboards and, by extension, how to apply that insight.


Consequently, there's

been plenty of discussion

about how AI in all its

iterations may be used to

address the skills shortage and

tackle some of the tasks required

for effective database management and

data analysis.

AI IS NO MATCH FOR DATABASE

EXPERTISE

While I'm in no doubt that AI will play its part

in the future, as it stands today, the

technology is not yet mature enough for

many purposes without a careful analysis and

implementation plan. Over the last year or so

I've road-tested some of the products

currently available on the market. And while

some of them are 'OK', I wouldn't depend on

them without having a really talented person

on my team to manage their roll-out and to

properly align their features with our

corporate capabilities.

My view about this technology is clear: AI

can make skilled people more productive,

but it's not ready to enable a low-skill person

to act like a skilled DBA or developer. That

may come someday, but not yet.

So, how do we get around this conundrum?

Better training is one answer. But there also

needs to be a culture shift. Firms at the

highest levels must commit to assessing and

analysing data before making decisions.

Most companies don't do that, with

managers often relying on intuition rather

than data analysis. This simply must change.

Of course, this is easier said than done. It's

hard

to break old

habits. Business leaders

are used to looking at the first two

pages of a spreadsheet to make decisions.

But if we're to address issues around data

compliance and governance, organisations

need to instil a deep appreciation for data

throughout their decision-making process.

Top-level managers need to communicate to

mid-level managers that they've built

dashboards, and then mid-level managers

need to use and learn from them. Left to their

own devices, people will do whatever is

easiest most of the time.

DON'T IGNORE GOVERNANCE AND

SECURITY IN THE RUSH TO AI

Finally, there is the perennial issue of security.

Despite all the threats - and all the warnings -

many companies are still too casual about

security. Part of the problem lies in the siloed

nature of IT. Business leaders tend to assume

- wrongly - that if they have a security team in

place then they are secure.

The problem is that security is much more

complex than that. It is not simply a case of

keeping the bad guys out. An effective

defence requires layers and depth. Which

means security must be multi-layered and

integrated at every step. For instance,

databases need

protection against SQL

injection attacks. These are a type of cyberattack

that targets SQL databases by

inserting or "injecting" malicious SQL code

into a query. They're like leaving your front

door open, and yet we still see massive

attacks using cross-site scripting and SQL

injection - to my mind, one of the oldest

tricks in the book.
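To make the point concrete, here is a minimal Python example using the standard sqlite3 module, contrasting a vulnerable string-built query with a parameterised one; the table and input are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: the input is pasted into the SQL text, so the payload
# rewrites the query and returns every row in the table.
bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# SAFE: a parameterised query treats the input purely as data.
good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print("vulnerable:", bad)        # leaks the whole table
print("parameterised:", good)    # returns nothing: no such name
```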

Until we learn to up our game around data

literacy and create a more savvy data-driven

culture, we will continue to be under threat

from those who wish to do us harm. What's

becoming abundantly clear is that the rapid

uptake of GenAI must go hand in hand with

robust data governance, a commitment to

improve data literacy - as well as a doubling

down on security. But this mustn't be done in

isolation. Instead, organisations must also

invest in training as part of a wider push to

foster a culture of data-driven decision-making to fully harness AI's potential. None

of this is new or radical. The advent of GenAI

has merely shone a light on it and brought it

to a wider audience. NC



FEATURE: AI

SAFEGUARDING YOUR

ORGANISATION'S AI DEPLOYMENT

ANDY THOMPSON, OFFENSIVE RESEARCH EVANGELIST AT

CYBERARK LABS, ON HOW TO AVOID AN AI IDENTITY CRISIS

AI is revolutionising

business, driving

innovation and efficiency

across sectors. However, as

companies increasingly rely on

AI for critical operations, they

face unique security

challenges. AI systems require

constant access to sensitive

data and networks, operating

with growing autonomy that

traditional security measures

weren't designed to address. To

protect valuable assets and

maintain competitive advantage,

businesses must develop new

strategies to secure their AI

integrations while maximising the

technology's benefits.

MORE THAN CIRCUITS AND CODE

Every single type of identity has a different role

and capability. Humans usually know how to

best protect their passwords. For example, it

seems quite obvious to every individual that

they should avoid reusing the same password

multiple times or choosing one that's very easy

to guess. On the other hand, machines -

including servers and computers - often hold or

manage passwords, but they are vulnerable to

breaches and don't have the capability to

prevent unauthorised access.

AI entities, including chatbots, are difficult to

classify with regard to cybersecurity. These

nonhuman identities manage critical enterprise

passwords yet differ significantly from traditional

machine identities such as software, devices,

virtual machines, APIs, and bots. So, AI is


neither a human identity nor a machine

identity; it sits in a unique position. It

combines human-guided learning with

machine autonomy and needs access to

other systems to work. However, it lacks the

judgment to set limits and prevent sharing

confidential information.

PUTTING SECURITY FIRST

Businesses are investing heavily in AI, with

432,000 UK organisations - accounting for

16% - reporting they have embraced at

least one AI technology. AI adoption is no

longer a trend; it's a necessity, so spending

on emerging technologies is only expected

to keep rising in the coming years. The UK

AI market is currently worth over £16.8

billion, and is anticipated to grow to £801.6

billion by 2035.

However, the rapid investment in AI often

outpaces identity security measures.

Companies don't always understand the risks

posed by AI. As such, following best practices

for security or investing enough time in

securing AI systems is not always top of the

priority list, leaving these systems vulnerable

to potential cyberattacks. What's more,

traditional security practices such as access

controls and least privilege rules are not

easily applicable to AI systems. Another issue

is that, with everything they already have

going on, security practitioners are struggling

to find enough time to secure AI workloads.

CyberArk's 2024 Identity Security Threat

Landscape Report reveals that while 68% of

UK organisations report that up to half of

their machine identities access sensitive data,

only 35% include these identities in their

definition of privileged users and take the

necessary identity security measures. This

oversight is risky, as AI systems, loaded with

up-to-date training data, become high-value

targets for attackers. Compromises in AI

could lead to the exposure of intellectual

property, financial information, and other

sensitive data.

COUNTERING CLOUD ATTACKS

The security threats to AI systems aren't

unique, but their scope and scale could be.

Constantly updated with new training data

from within a company, LLMs quickly become

prime targets for attackers once deployed.

Since they must use real data and not test

data for training, this up-to-date information

can reveal valuable sensitive corporate

secrets, financial data, and other confidential

assets. AI systems inherently trust the data

they receive, making them particularly

susceptible to being deceived into divulging

protected information.

In particular, cloud attacks on AI systems

enable lateral movement and jailbreaking,

allowing attackers to exploit a system's

vulnerabilities and trick it into disseminating

misinformation to the public. Identity and

account compromises in the cloud are

common, with many high-profile breaches

resulting from stolen credentials and causing

significant damage to major brands across

the tech, banking and consumer sectors.

AI can also be used to perform more

complex cyberattacks. For example, it

enables malicious actors to analyse every

single permission that's linked to a particular

role within a company and assess whether

they can use this permission to easily access

and move through the organisation.

So, what's the sensible next step?

Companies are still at the beginning of the

integration of AI and LLMs, so establishing

robust identity security practices will take time.

However, CISOs can't afford to sit back and

wait; they must proactively develop strategies

to protect AI identities before a cyberattack

happens, or a new regulation comes into

place and forces them to do so.

THE RIGHT APPROACH TO AI

SECURITY

While there is no silver bullet security

solution for AI, businesses can put certain

measures in place to mitigate the risks.

More specifically, there are some key

actions that CISOs can take to enhance

their AI identity security posture as the

industry continues to evolve.

- Identifying overlaps: CISOs should make it a priority to identify areas where existing identity security measures can be applied to AI. For example, leveraging existing controls such as access management and least privilege principles where possible can help improve security (see the sketch after this list).
- Safeguarding the environment: It's crucial that CISOs understand the environment where AI operates in order to protect it as efficiently as possible. While purchasing an AI security platform isn't a necessity, securing the environment where the AI activity is happening is vital.
- Building an AI security culture: It's hard to encourage all employees to adopt best identity security practices without a strong AI security mindset. Involving security experts in AI projects means they can share their knowledge and expertise with all employees and ensure everyone is well aware of the risks of using AI. It's also important to consider how data is processed and how the LLM is being trained, encouraging employees to think about what using emerging technologies entails and to be even more careful.
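As a minimal sketch of the kind of overlap check the first point describes - the identity inventory and field names below are entirely hypothetical:

```python
# Hypothetical identity inventory; in practice this would come from an
# identity security or secrets-management platform.
identities = [
    {"name": "build-bot", "type": "machine", "accesses_sensitive": False},
    {"name": "sales-llm", "type": "ai", "accesses_sensitive": True},
    {"name": "hr-chatbot", "type": "ai", "accesses_sensitive": True},
]
privileged = {"sales-llm"}  # identities already under privileged controls

# Flag AI/machine identities that touch sensitive data but are not yet
# covered by privileged-access controls (the 68% vs 35% gap above).
gaps = [i["name"] for i in identities
        if i["accesses_sensitive"] and i["name"] not in privileged]
print("missing privileged controls:", gaps)  # ['hr-chatbot']
```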

As AI reshapes the business world, it

brings both transformative potential and

novel identity security risks. Conventional

cybersecurity measures fall short against AI-specific vulnerabilities, forcing a paradigm

shift in how we approach digital safety.

Today's security leaders must evolve beyond

traditional threat management to

understand and protect unique AI identities.

Success in the AI era demands a delicate

equilibrium: embracing cutting-edge

innovation while maintaining robust security

protocols to safeguard critical assets. NC



OPINION: NETWORK RESILIENCE

STRENGTHENING NETWORK RESILIENCE IN THE FACE OF DISRUPTION

ALAN STEWART-BROWN, VP EMEA AT OPENGEAR GIVES US HIS KEY TAKEAWAYS FOR CIOS

LOOKING TO GAIN A COMPETITIVE EDGE THROUGH RESILIENT IT

When technology underpins nearly

every aspect of business, the

resilience of IT systems and networks

is paramount. Recent incidents like the

CrowdStrike outage highlight that even

leading organisations are vulnerable to

disruptions from single points of failure and

should serve as a stark wake-up call for Chief

Information Officers (CIOs) to reassess their IT

strategies and bolster their systems against

unforeseen challenges.

In the instance of the CrowdStrike outage, a

software misconfiguration led to widespread

impact affecting roughly 8.5 million devices.

60% of Fortune 500 companies were

affected, costing $5.4 billion in damages. It

illustrates the necessity of secure remote

network access - something which is vital for

quickly addressing and rectifying issues before

they can cascade into more extensive network

failures. The ramifications of these disruptions,

whether financial, reputational, operational or

security-related, are substantial, reinforcing

the need for comprehensive strategies to

ensure network resilience.

During times of disruption, having a resilient IT

system in place enables continuous operations,

rapid recovery, and scalability to meet

unpredictable demand shifts. For CIOs, it's about

more than maintaining uptime statistics; it's about

preparing the network for the unexpected and

ensuring availability and reliability of IT

infrastructure no matter what. A resilient network

acts as a shield, absorbing shocks and allowing

operations to continue seamlessly.

LEARNING FROM RECENT OUTAGES

Strengthening network resilience begins with

learning from incidents like the CrowdStrike

outage. As those responsible for maintaining IT

infrastructure, CIOs are accountable for

ensuring continuity in this context, and they

should conduct thorough assessments of their

IT and network environments to identify

potential single points of failure. This involves

regular system audits, stress testing, and

scenario planning to understand how different

failures could affect operations.

Proactive measures help identify vulnerabilities

and ensure the overall health of the network

infrastructure. By examining configurations,

access controls, and security policies,

organisations can detect weaknesses that might

expose them to cyber threats. Identifying issues

such as outdated software, misconfigurations,

or unpatched systems allows for timely

remediation before malicious actors

exploit them.


Regular audits ensure configurations align

with industry best practices and

organisational policies, preventing errors that

could compromise security or stability.

Continuous monitoring as part of these

assessments enables organisations to stay

ahead of evolving challenges, providing

real-time insights and facilitating rapid

responses to emerging issues. Making

regular audits and assessments the basis of

network management empowers teams to

maintain optimal configurations and

navigate the ever-changing cybersecurity

landscape with confidence.

SECURE REMOTE MANAGEMENT

AND MONITORING

Building on this critical audit and assessment

process, secure remote network access

represents another vital component of

network resilience. In the CrowdStrike

incident, the ability to remotely access and

rectify the misconfiguration swiftly could have

reduced the disruption's extent by allowing IT teams to troubleshoot and resolve issues from any location.

Out-of-band management solutions can

play a vital role here in ensuring secure

remote access and control by providing a

back-up communication channel that works

independently of the primary network. This

means that even if the main network is down

or has been compromised in some way,

administrators can still securely manage

network devices without any interruptions.
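As a simple illustration of that failover decision logic - a sketch only: the addresses below are documentation placeholders, and real out-of-band appliances implement this far more robustly:

```python
import socket

def reachable(host, port, timeout=2.0):
    """Basic TCP reachability probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical addresses: a device's in-band management IP, and the
# out-of-band console server that reaches it over a separate path.
PRIMARY = ("10.0.0.1", 22)
OOB_CONSOLE = ("192.0.2.10", 22)

path = "in-band" if reachable(*PRIMARY) else (
    "out-of-band" if reachable(*OOB_CONSOLE) else "unreachable")
print("manage device via:", path)
```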

Security is, of course, a top priority in this

situation. Strong authentication measures,

such as multifactor authentication, offer a

critical layer of defence against unauthorised

access, while encryption safeguards the

sensitive data exchanged between remote

systems and network devices.

This kind of approach can be strengthened

further through the use of tools that offer real-time insights into network performance. These

are key in helping to spot issues early,

detecting security threats, and responding

quickly to maintain smooth operations.

Technology is critically important, but the

human dimension must never be neglected.

As remote work continues to expand, it's

essential that remote management solutions

can scale to support geographically

dispersed teams without sacrificing security.

Ultimately, a well-informed team is key.

Educating users on security best practices

boosts the overall effectiveness of any remote

management strategy.

TURNING RESILIENCE INTO

COMPETITIVE ADVANTAGE

While all the above actions are key, achieving

network resilience goes beyond dealing with

current issues. Anticipating future

vulnerabilities is just as important, and CIOs

need to stay ahead of emerging threats by

keeping abreast of technological

advancements and evolving security

landscapes. Investing in automation and

artificial intelligence can provide predictive

insights into potential system failures.

These technologies monitor system

performance in real-time, detect anomalies,

and can even initiate automatic corrective

actions, helping to address issues before

they escalate.

Another policy CIOs should implement to

put themselves in a stronger position to tackle

disruptions is the development of clear

incident response plans, outlining steps to be

taken during various outages to ensure teams

can respond quickly and effectively. Regular

drills and updates keep these plans relevant,

and stakeholders prepared.

Addressing the human element is essential

in this context too. With many network

engineers approaching retirement, there's a

looming skills gap that could affect IT

resilience. CIOs should invest in training and

development programmes to upskill existing

staff and attract new talent. Embracing

flexible working arrangements, such as

remote or hybrid models, can help attract a

broader pool of candidates.

A POSITIVE OUTLOOK

By fostering a culture of continuous

improvement, teams feel empowered to

proactively identify and tackle

vulnerabilities before they have an impact.

When departments collaborate, they

combine their unique perspectives, leading

to resilience strategies that are both robust

and comprehensive, while at the same

time addressing risks that might otherwise

be overlooked.

From a financial standpoint, it is critical to

advocate for sufficient budget allocations

dedicated to enhancing IT and network

resilience. While investing in redundant

systems, secure remote access solutions and

advanced monitoring tools does come with

upfront costs, these expenses pale in

comparison to the potential losses from

prolonged outages. In the long run, these are

investments that safeguard an organisation's

stability and reputation - and that's a

compelling justification for making them.

It is equally important to highlight that

maintaining continuous operations isn't just

about preventing losses - it's about gaining

a competitive edge. Customers and

partners increasingly expect uninterrupted

services, and companies that deliver on

this expectation differentiate themselves in

the market.

By proactively strengthening IT and

network resilience, CIOs can ensure their

organisations build trust with stakeholders,

enhance their reputation, and position

themselves as reliable partners. This

strategic approach not only safeguards

against disruptions but also contributes to

long-term business growth and success. NC



FEATURE: DATA CENTRES

TRANSFORMING DATA CENTRES FOR THE AI ERA:

IMPERATIVES FOR SUCCESS

WITHOUT A ROBUST AND SCALABLE DATA CENTRE INFRASTRUCTURE, OPERATORS WILL FIND IT

CHALLENGING TO SUPPORT THE COMPLEXITIES THAT AI DEMANDS, ACCORDING TO ANDREW

DONOGHUE, GLOBAL SEGMENT STRATEGY AT VERTIV

The acceleration of AI, driven by GenAI,

is transforming industries at a rapid

pace, creating new opportunities and

demand across sectors. Among the most

significant changes is the impact AI is having

on IT infrastructure, particularly data centres.

For example, according to analyst Gartner,

spending on data centre systems is expected

to increase 24% in 2024 due in large part to

increased planning for GenAI.

As AI's computing requirements grow in

complexity and intensity, data centre

operators are faced with a pressing need to

rethink facility design and operation. Those

who act decisively and strategically in this

area will position themselves at the forefront

of this transformation, enabling both

operational efficiency and future readiness.

There are several critical imperatives that

data centre operators should look to

embrace if they are to stay ahead in the AI

era. From revisiting operating models to

managing power and cooling systems, and

balancing AI's power requirements with

environmental concerns, an effective data

centre strategy will hinge on efficiency,

sustainability and the ability to manage new

risks. Vertiv has developed an AI Imperatives

framework to help guide this approach.

AI DRIVEN TRANSFORMATION

To fully realise the potential of AI, data centre

operators should adopt a mindset that

embraces comprehensive transformation. AI

is not merely a technological tool; it

represents a new paradigm for innovation

across products, services, and customer

interactions. This requires a fundamental

overhaul of existing operating models and

infrastructure. The ability to adapt current

frameworks to meet the ever-increasing

demands of AI-powered applications is a

prerequisite for success.

The ability of operators to prioritise

scalability and flexibility in their designs is key.

AI workloads often involve intensive

computational tasks that require high-performance computing (HPC) environments,

which in turn demand more sophisticated

cooling systems, higher energy inputs, and

denser infrastructure. AI's usefulness in

industries such as healthcare, finance, and

autonomous driving depends on these

enhanced data processing capabilities.

Without a robust and scalable data centre

infrastructure, operators will find it

challenging to support the complexities that

AI demands.

AI WITHIN THE DATA CENTRE

Data centre operators should also be

prepared to leverage AI technologies to

optimise internal operations, from resource

allocation to proactive and predictive

maintenance. By integrating AI-driven

insights into their own management

processes, operators can enable smoother

performance and reduced downtime, thus

enhancing the customer experience.

POWER AND COOLING SYSTEMS: A

STRATEGIC IMPERATIVE

As data centres accommodate increasingly

dense environments, strategic power and

cooling management becomes an even more

critical aspect of operations. Accelerated

compute systems supporting AI workloads

generate enormous amounts of heat.

Innovative approaches to power and cooling

are therefore essential for preparing data

centres to operate effectively in the AI era.

A shift towards liquid cooling and other

advanced cooling technologies is already

underway. According to industry analyst

Dell'Oro Group, the market for liquid

cooling could grow to more than $15bn over

the next five years. Traditional air-cooling

systems, although likely to be part of the core

infrastructure for some time to come, are

proving insufficient for high-density racks that

can exceed 100kW. Liquid cooling

systems, by contrast, offer a more effective


solution for managing the significant heat

loads generated by AI workloads. This shift

is especially crucial for edge computing,

where space constraints and energy

efficiency are paramount.

AI workloads also increase energy

consumption. As data centres scale up to

meet AI's demands, the associated power

requirements present both an operational

challenge and an opportunity. Operators

should look to adopt more efficient energy

management strategies, from integrating

alternative energy sources to

implementing dynamic power allocation

systems that can optimise energy use

based on workload demands. This will

enable data centres to remain both cost-competitive and environmentally responsible.
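As a toy sketch of the idea behind dynamic power allocation - the budget and demand figures are invented:

```python
# Divide a facility power budget across racks in proportion to current
# workload demand, capping allocations when demand exceeds the budget.
BUDGET_KW = 300.0
demand_kw = {"rack-1": 80.0, "rack-2": 120.0, "rack-3": 160.0}

total = sum(demand_kw.values())
scale = min(1.0, BUDGET_KW / total)  # <1.0 here: demand is oversubscribed

allocation = {rack: round(kw * scale, 1) for rack, kw in demand_kw.items()}
print(allocation)  # each rack gets ~83% of its request with these figures
```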

THE AI EFFICIENCY PARADOX:

BALANCING PROCESSING POWER

AND SUSTAINABILITY

One of the most pressing concerns for

data centres in the AI era is the "AI

efficiency paradox". While AI presents

opportunities for innovation and

optimisation, its computational intensity

also raises significant sustainability

challenges. The enormous processing

power required to run AI models can

strain both infrastructure and the

environment, driving up energy

consumption and carbon emissions.

Balancing AI's processing power with

environmental stewardship is no longer an

option but a necessity. For data centres,

prioritising energy efficiency and

alternative energy adoption is essential for

maintaining cost competitiveness whilst

reducing environmental impact. As

regulatory pressure grows, particularly in

Europe and the UK, where governments

are pushing for stringent carbon reduction

targets, data centres must navigate these

challenges carefully.

Operators should explore strategies such

as energy-efficient hardware, AI-driven

resource allocation, and alternative energy

integration. These measures can significantly

reduce the carbon footprint of AI workloads

while enabling power demands to be met.

For example, hyperscale data centres are

already investing in on-site alternative

energy solutions to mitigate the

environmental costs of running such power-intensive operations.

FUTURE-READY INFRASTRUCTURE

FOR HIGH-DENSITY AI

ENVIRONMENTS

As AI technologies evolve, developing

future-ready infrastructure to handle

higher density environments and

increasing computational workloads is

crucial. Moving beyond traditional tech

refreshes, operators must consider

infrastructure that can support densities

of 100kW per rack and beyond.

This requires a forward-looking approach

to design, scalability, and efficiency.

Data centre operators that succeed in

future-proofing their operations will do so

by embracing cutting-edge technologies

and creating infrastructure that can scale

rapidly. This will mean increasing power

and cooling scalability to accommodate

the next generation of AI models and

applications. Furthermore, planning for

future density requirements is critical to

avoid costly retrofits or over-provisioning

of resources, both of which can lead to

stranded capacity and inefficiencies.

CRITICAL INFRASTRUCTURE

CHALLENGES IN THE AI ERA

Data centre operators will face significant

challenges as they navigate the transition

to AI-ready infrastructure. First, they must

effectively leverage their existing

infrastructure investments while

incorporating new technologies. Retrofits

and upgrades can be complex,

particularly when trying to blend old and

new systems without a common language

or control systems. A deep understanding

of what is technically possible is key to

overcoming these challenges.

Furthermore, designing for power and

cooling scalability that can leap, not just

grow incrementally, is essential to support

the demands of AI. Sustainability

challenges will only increase in scope, and

forward-looking data centre operators are

collaborating with partners that are

investing in research and development

(R&D) to stay closely aligned with

technological advancements.

Maintaining a robust service and

maintenance network is equally important.

With AI workloads accelerating rapidly,

data centres must have trusted partners

with the experience and footprint to

support operations globally. This includes

fault tolerance, minimising downtime, and

maintaining operational efficiency even as

densities increase.

NAVIGATING THE AI ERA WITH

CONFIDENCE

The transformation of data centres for the

AI era is not without its challenges, but

those who embrace change and consider

the real imperatives around AI will be well

positioned for success. Comprehensive

transformations, efficient power and

cooling management, and a commitment

to sustainability are the cornerstones of

future-ready data centres.

As operators balance AI's processing

power with environmental concerns, they

will need to make calculated investments

in infrastructure, innovation, and

partnerships. By doing so, they will not

only navigate the AI era confidently but

also lead the charge in driving the next

wave of technological and operational

breakthroughs. NC



FEATURE: DATA CENTRES

DATA CENTRES: THE BALANCE OF POWER

FORGET COOLING, FOR DATA CENTRES IT'S NOW AN ISSUE OF POWER, ACCORDING TO GARY

TINKLER, MD OF DATA CENTRES AT NORTHERN DATA GROUP

When we talk about High-

Performance Computing (HPC),

the fusion of artificial

intelligence (AI) and computational power

is driving incredible innovations. In the

past, we focused mainly on cooling

solutions to keep systems running

smoothly. But now, with AI-driven HPC

systems requiring so much more power,

the real challenge isn't just about keeping

hardware cool; it's about managing an

enormous demand for electricity. This

pivotal shift in the industry is telling us

something important: it's no longer a

cooling problem - it's a power problem.

WHERE ARE WE NOW?

Let's take a closer look at NVIDIA, a giant

in the HPC world. They've created popular

air-cooled systems that have served us

well. However, as AI models get more

complex, the power requirements are

skyrocketing. Reports show that AI training

tasks use 10-15 times more power than

traditional data centres were designed to

handle. Facilities that once operated at 5-8kW per rack are quickly becoming

outdated. Recently, NVIDIA announced a

major rollout of new GPUs, highlighting

the urgent need for advanced technology

to meet these growing power demands.

To put this into perspective, data centre

operators are now reevaluating their

power strategies because their existing

setups can't keep up. For example, a

facility that used to work well with 8kW

per rack now finds that this just isn't

enough anymore. As AI continues to

advance, we're looking at power needs

soaring to between 50-80kW per rack.

This isn't just a small tweak; it's a major

change in how data centres need to be

designed.

A recent study from the International

Data Corporation (IDC) found that

global data centre electricity

consumption is expected to more than

double from 2023 to 2028, reaching an

astounding 857 Terawatt hours (TWh) by

2028. This underlines the importance of

having data centre facilities that can

support higher power loads if they want


to stay competitive in the fast-paced AI

world. This isn't just a theory - it's a

reality that data centre operators must

face head-on.

STEPS DATA CENTRES CAN TAKE

One of the biggest challenges in this

transition is updating power supply

systems. Traditional Power Distribution

Units (PDUs) aren't built to handle the

demands of these new AI-driven

systems. To meet the required power

levels, data centres can invest in more

advanced PDUs that can manage

heavier loads while boosting overall

efficiency. For many setups today, that

means installing six units that can each

supply 63 amps of power. This shift not

only changes how data centres are built

but also adds complexity to how

everything is arranged inside the racks.
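A back-of-envelope check shows why feeds of this size are being specified; note that the supply voltage and derating below are our assumptions, not figures from the article:

```python
# Rough capacity of six 63A PDU feeds. Voltage and derating are assumed.
VOLTS = 230        # assumed single-phase supply voltage
AMPS = 63          # per PDU feed, as quoted above
FEEDS = 6
DERATE = 0.8       # common practice: load feeds to <=80% of rating

per_feed_kw = VOLTS * AMPS / 1000           # ~14.5 kW per feed
total_kw = per_feed_kw * FEEDS              # ~86.9 kW installed
print(f"installed: {total_kw:.1f} kW, "
      f"at 80% load: {total_kw * DERATE:.1f} kW")  # ~86.9 kW, ~69.6 kW
```

On those assumptions, six 63A feeds install roughly 87kW per rack, comfortably covering the 50-80kW demands discussed above.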

Of course, as facilities rush to meet

these new power needs, we're seeing

innovative solutions come to light.

Ultrascale Digital Infrastructure, for example, has partnered with Cargill so that its data centres can run on 99% plant-based fluids, eliminating the billions of gallons of water used annually in cooling and offering new opportunities for water conservation, particularly for data centres designed to rely on water in their operations.

EVOLVING INFRASTRUCTURE FOR

POWER DEMANDS

As power demands rise, the standard

1200mm deep racks are becoming

outdated. To meet this increase we're

likely to see a shift to 1400mm deep

racks. This isn't just about making things

bigger; it's about maximising flexibility

and capacity. Recent reports indicate

that wider rack options - ranging from

800mm to 1000mm - are becoming

more popular, providing standardised

52 Rack Units (RU) that help facilities

scale more effectively.

This change in rack design is crucial

because it directly affects how data

centres can support the evolving demands

of AI and HPC. By optimising the size of

racks, facilities can improve airflow,

streamline power distribution, and

ultimately boost operational efficiency.

Another big challenge is the issue of

"stranded space" in data centres. As

facilities designed for traditional

workloads try to adapt to new HPC

infrastructure, they often find themselves

with wasted space. Older data centres

weren't built to handle the density and

power needs of modern AI workloads.

Even those with upgraded setups, like

indirect cooling solutions that can

support 30kW per rack, are now

proving inadequate as requests

frequently exceed 60kW. Facilities

operators are rethinking not just their

cooling methods but also how to make

the best use of their available space

while preparing for increasing power

demands.

Traditional data centres were built with

certain assumptions about power needs - typically around 5-8kW per rack. This

led to innovations like aisle

containment, designed to improve

cooling in response to growing

demands. However, as AI keeps pushing

the limits, these outdated assumptions

are no longer enough. HPC

deployments now require facilities that

can handle power outputs of up to

80kW per rack or even more.

We're beginning to see a new wave of

advanced data centres emerge that look

very different - facilities designed from

the ground up to meet these heightened

demands and that can handle diverse

power requirements while ensuring

flexibility for future growth.

WHAT'S NEXT?

As AI continues to reshape what's

possible in HPC, the industry is faced

with a significant challenge at its core:

the power problem. The traditional focus

on cooling just isn't enough anymore.

With exciting new technologies being

developed at a faster pace than ever,

attention is shifting to building a robust

power infrastructure that can support this

new frontier.

Data centres that evolve in their design,

layout, and operational strategies to turn

this power challenge from a roadblock

into an opportunity, can unlock the full

potential of AI in high-performance

computing. The future of HPC looks

bright, but it all depends on our ability to

adapt to these new demands. NC



FEATURE: DATA CENTRES

REDEFINING DATA CENTRES WITH AI

AI CAN DELIVER A NEW DAWN OF EFFICIENCY AND SECURITY

FOR THE DATA CENTRE ACCORDING TO RAMZI CHARIF, VP

TECHNICAL OPERATIONS, EMEA, VIRTUS DATA CENTRES

In today's increasingly digital world,

data centres have become the invisible

powerhouses behind our daily

interactions - whether streaming a

movie or running a complex business

operation. With demands for

processing power, storage, and

real-time data analysis continuing

to rise, the industry faces growing

pressure to become smarter and

more efficient.

Artificial Intelligence (AI) is the

latest transformative technology

that promises to elevate the data

centre industry to new heights,

unlocking greater efficiency,

sustainability, and resilience.

AI: THE ENABLER OF DATA

CENTRE EVOLUTION

AI has been making waves in sectors like

healthcare and finance for some time and

when it comes to data centres, the

conversation is all about how they have to

evolve to cope with the growth of AI.

However, data centres too are looking at

how AI can help to improve operations.

Leveraging AI goes beyond simply

automating routine tasks; it introduces a new

level of predictive intelligence that can

monitor, learn, and respond to

environmental changes in real time. Data

centres equipped with AI can make

autonomous decisions, such as dynamically

adjusting power and cooling systems based

on live operational conditions.

AI's ability to predict equipment failures,

optimise energy usage, and bolster security is

already reshaping the operational model of

data centres. A 2023 report from Uptime

Institute highlights how AI is accelerating the

adoption of autonomous infrastructure


management, reducing downtime and

boosting overall operational resilience

across the industry. According to the same

report, AI-driven systems in newer data

centres have already reduced manual

intervention, freeing up engineers to focus

on high-value tasks.

OVERCOMING RELUCTANCE:

LEARNING FROM THE PAST

Despite AI's undeniable potential, the data

centre industry has historically been

cautious in adopting transformative

technologies. The fear of disrupting uptime

- a crucial metric for the industry - has

long deterred operators from embracing

major changes. This hesitation mirrors the

initial reluctance seen with cloud

computing, when organisations were

unsure about the security and reliability of

outsourcing data storage. But today, cloud

computing has become ubiquitous, and AI

appears to be on the same trajectory.

Contrary to fears that AI will replace

human jobs, it is proving to be an

invaluable support tool, especially for

overworked data centre operators. AI takes

on repetitive, mundane tasks - like

adjusting cooling settings or monitoring

network traffic - allowing staff to focus on

strategic improvements and innovation.

Far from removing the need for people, AI

is rapidly becoming a trusted resource in

data centre operations, enhancing human

capacity rather than displacing it.

EFFICIENCY AND SUSTAINABILITY:

AI'S ROLE IN GREEN DATA CENTRES

Energy efficiency has always been a

critical concern for data centres, and

with the likes of Goldman Sachs

predicting that AI is poised to drive a

160% increase in data centre power

demand - as well as escalating energy costs and the growing focus on reducing carbon footprints - this situation is

unlikely to change any time soon.

AI adoption offers a solution by optimising

the most energy-intensive processes, such

as cooling. Traditional methods of cooling,

where air conditioning systems operate at

full blast regardless of demand, can be

inefficient and environmentally harmful. AI,

on the other hand, uses machine learning

algorithms to predict cooling needs based

on historical and real-time data, only

consuming energy when necessary.
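As a toy illustration of the principle - the telemetry below is invented, and the model deliberately simple, where production systems use far richer data and methods:

```python
import numpy as np

# Fit a linear model of cooling load from outside temperature and IT
# load, then forecast the next hour instead of running flat out.
temp_c  = np.array([18, 22, 27, 31, 35], dtype=float)       # outside temp
it_kw   = np.array([400, 420, 480, 520, 560], dtype=float)  # IT load
cool_kw = np.array([90, 110, 150, 180, 210], dtype=float)   # cooling used

A = np.column_stack([temp_c, it_kw, np.ones_like(temp_c)])
coef, *_ = np.linalg.lstsq(A, cool_kw, rcond=None)  # least-squares fit

forecast = np.array([29, 500, 1]) @ coef  # predicted load at 29C, 500kW IT
print(f"set cooling for ~{forecast:.0f} kW rather than maximum output")
```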

In this age of heightened environmental

awareness, embracing AI for energy

efficiency isn't just a financial imperative -

it's a corporate responsibility. Regulatory

pressures around sustainability are

intensifying, and businesses that fail to

meet these expectations may soon face

penalties. Data centres that integrate AI

into their energy management strategies

can expect to see both reduced

operational costs and enhanced

reputations as sustainability champions.

MINIMISING DOWNTIME WITH

PREDICTIVE AI

Downtime is the bane of any data centre, especially when it could have been prevented. Even a brief service

interruption can cost millions in lost revenue

and damaged reputation. The predictive

capabilities of AI mitigate this risk by

identifying potential equipment failures

before they occur. AI tools can sift through

vast amounts of historical and real-time

data to forecast failures with remarkable

accuracy, allowing operators to address

issues preemptively, before they escalate

into costly breakdowns.

In recent years, AI-driven predictive

maintenance has become a transformative

tool for data centres, allowing operators to

significantly reduce downtime. By using AI

algorithms to monitor performance data

and detect potential equipment failures

before they occur, data centres can

schedule maintenance during off-peak

times, preventing costly disruptions. This

proactive approach not only enables

continuous operations but also extends

the life of key infrastructure.
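A minimal sketch of the idea - extrapolating a degradation trend to book maintenance before a threshold is crossed; the readings and threshold are invented:

```python
import numpy as np

# Toy predictive maintenance: fit a linear trend to a fan's vibration
# readings and project when it will cross a failure threshold.
days = np.arange(10, dtype=float)
vibration = 1.0 + 0.08 * days + np.random.default_rng(1).normal(0, 0.02, 10)
THRESHOLD = 2.5

slope, intercept = np.polyfit(days, vibration, 1)  # linear trend
days_to_fail = (THRESHOLD - intercept) / slope

print(f"projected threshold crossing in ~{days_to_fail:.0f} days - "
      "book an off-peak maintenance window before then")
```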

AI AND CYBERSECURITY: A

DIGITAL FORTRESS

With cyber threats growing in

sophistication, the need for robust

security within data centres has never

been greater. AI's ability to process and

analyse vast streams of data in real time

makes it an essential tool for enhancing

cybersecurity. Traditional security

measures often struggle to keep pace

with the evolving threat landscape, but AI

offers dynamic, adaptive defences.

AI algorithms can identify and respond

to suspicious activity, such as unusual

login attempts or data access patterns, in

real-time. According to a 2023 report by

Gartner, data centres employing AI-driven security systems saw a reduction in

security breaches compared to those

relying solely on traditional methods.

Large language models (LLMs)

continuously learn from each threat they

encounter, improving the speed and

accuracy of threat detection.
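As a deliberately simple sketch of the kind of baseline check involved in spotting unusual login attempts - the history is invented, and real systems model far more signals:

```python
from collections import Counter

# Flag logins at hours a user has rarely, if ever, used before.
history_hours = [9, 9, 10, 8, 9, 11, 10, 9, 8, 10]  # past login hours
baseline = Counter(history_hours)

def is_unusual(hour, min_seen=2):
    # Unusual if this user has logged in at this hour fewer than
    # min_seen times historically.
    return baseline[hour] < min_seen

for attempt in [9, 3]:
    print(f"login at {attempt:02d}:00 ->",
          "ALERT" if is_unusual(attempt) else "ok")
```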

THE FUTURE OF DATA CENTRES

It is likely that the future of data centres is

inextricably linked to AI. As the

technology continues to evolve, it will

reshape not only how data centres

operate but also their role in the broader

digital economy. Data centres that are

proactive in adopting AI will lead the

industry into an era defined by greater

efficiency, enhanced security, and

heightened sustainability.

The question is no longer whether AI

should be integrated, but how quickly

operators can move to embrace it. As the

demands on data centres grow, AI provides

the blueprint for a more agile, resilient, and

future-ready infrastructure. NC



OPINION: NETWORK MANAGEMENT

UNTANGLING NETWORK COMPLEXITY

JOEL CUNNINGHAM AT DAISY CORPORATE SERVICES EXPLAINS WHY SIMPLICITY IS THE KEY TO

SMARTER NETWORK MANAGEMENT

The IT landscape has changed

dramatically in recent years. The shift to

fully remote or hybrid working models

means organisations now need to provide

seamless IT experiences for users, whether they

are in the office or at home, regardless of the

number of applications or devices they use.

At the same time, organisations must support

a growing number of technologies, from

cloud-based applications to the Internet of

Things (IoT) and edge computing, with

scalable and secure network infrastructure.

For IT network teams, these changes have

led to a continually growing demand for

connectivity. Today, organisations must keep

employees productive and deliver the digital

experiences that customers expect, all while

reducing costs and improving operational

resilience. Achieving this balance is

challenging, especially for those burdened

with legacy equipment and complex network

infrastructures.

THE COMPLEXITY PROBLEM

From a networking standpoint, cybersecurity

risks continue to present a substantial

challenge for IT teams. Recent Daisy research

shows that more than two-thirds of UK

organisations have seen an uptick in network

security threats over the past 18 months. The

heightened threat landscape has increased the

risk of hackers using ransomware, malicious

scripts and phishing attacks to steal sensitive

data and hold businesses to ransom for large

sums of money.

Today's remote and hybrid working patterns

have contributed significantly to this

heightened risk level, as 85% of organisations

say remote and hybrid working has

contributed to an increase in network security

threats. At a time when the network perimeter

is becoming increasingly virtual and a

growing number of business processes and

applications are operating online, it's more

important than ever to maintain and manage

a secure boundary between your network and

the outside world. Simply hoping remote

employees will enable a VPN outside the

office does not constitute a robust network

security strategy.

Organisations' security postures aren't

helped by the fact that nearly 9 in 10 (87%)

have networking landscapes comprised of a

patchwork of technologies from different

vendors. So it is hardly surprising that 88%

say simplifying their networking infrastructure

is a priority.

Despite the increasing need to deliver

seamless connectivity through simplified

networks, organisations are still spending

nearly a third (30%) of their IT budget on

simply maintaining legacy hardware. This is

costly, and also at odds with many

organisations' current sustainability goals, as

legacy network technology tends to consume

a disproportionate amount of power

compared to modern hardware.

OVERCOMING NETWORK STRAIN

Supporting the growing business use of cloud

applications is also placing further strain on

network performance, with 81% of UK

organisations saying this is a concern. As

organisations rely more heavily on the cloud,

traditional wide-area networks (WANs) often

lack the capabilities necessary to ensure

reliable, secure, and efficient connectivity

between various locations. Cloud applications

are increasingly integral to modern businesses'

operations, and any network performance

issue or downtime stands to negatively impact

employee productivity and the bottom line.

Software-Defined Wide Area Network (SD-WAN) has emerged and evolved into a

transformative solution. This powerful

technology not only solves the problem of

optimising network performance but enhances

security measures, giving organisations an

edge in the ever-evolving threat landscape

they now operate in. With SD-WAN,

traditional hardware-centric networking

models can be replaced with a software-based

approach, making it easier to manage

network traffic and ensure seamless

connectivity between various locations.
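As a toy sketch of the path-selection idea at the heart of SD-WAN - the links, measurements and scoring weights below are invented:

```python
# Score each WAN link from measured latency and loss and steer traffic
# to the best one; a real SD-WAN does this continuously, per application.
links = {
    "mpls":      {"latency_ms": 18, "loss_pct": 0.1},
    "broadband": {"latency_ms": 35, "loss_pct": 1.5},
    "lte":       {"latency_ms": 60, "loss_pct": 0.5},
}

def score(link):
    # Lower is better; loss is penalised heavily for real-time traffic.
    return link["latency_ms"] + 50 * link["loss_pct"]

best = min(links, key=lambda name: score(links[name]))
print("steering traffic over:", best)  # 'mpls' with these numbers
```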

Alongside the latest SD-WAN offerings,

single-vendor Secure Access Service Edge (SASE) solutions can enable organisations to more effectively manage the threats presented

by hybrid working patterns. Today's solutions

increasingly use artificial intelligence and

machine learning to aggregate data from

various sources, enabling proactive analysis

and threat detection. This empowers

organisations to swiftly respond to potential

security breaches, reducing detection time

from months to hours.

Connectivity is at the core of supporting

organisations' current and future digital

ambitions. Yet, untangling complex network

infrastructure remains a significant

challenge for many. However, by adopting

innovative networking technologies such as

SD-WAN and SASE, organisations can take

a positive step towards increasing efficiency

and decreasing costs, and ultimately,

reducing their exposure to cyber risk. In

2024, the true litmus test for organisations

will be how quickly and effectively they can

simplify their infrastructure and achieve

smarter network management. NC




Technology is constantly evolving. In our mission to keep the Awards relevant,

appropriate and useful we think carefully about the categories that should be

included. This often means that we introduce some new categories. Equally, it

sometimes makes sense that we rename or discontinue some categories.

As we plan for the Awards of 2025 we will be looking closely at the levels of

nominations and votes that the various categories attracted in the Awards of

2024. However, historical data cannot guide us about what NEW categories

should be introduced for the future. This is where your input will be very

welcome. We want to hear and take on board your views. What new categories

would you like to see in the Awards of 2025? Please share your views with

david.bonner@btc.co.uk

WWW.NETWORKCOMPUTINGAWARDS.CO.UK

ATTENTION VENDORS:

Our nominating and voting model means that it’s very definitely an advantage if

your solutions are well known and well-liked – and of course it should be!

However, the Network Computing Awards also provide opportunities for you to win

recognition by impressing a Judge.

The BENCH TESTED PRODUCT OF THE YEAR is a judged category for all

solutions that have been reviewed for Network Computing in the year leading up to

the Awards ceremony. To book your solution(s) in for review contact

david.bonner@btc.co.uk

