NC Jan-Feb 2022


NETWORKcomputing
INFORMATION AND COMMUNICATIONS – NETWORKED
www.networkcomputing.co.uk


Predicting the year ahead in cybersecurity

Time to step away from legacy systems?

Assessing the hidden cost of network downtime

Uniting business and IT at the network edge

JANUARY/FEBRUARY 2022 VOL 31 NO 01










Transform the future of your business.

After two years out, the global event of choice for everyone committed to the design, build and management of digital initiatives and technology architecture is back. If you're a technologist or a business leader in the public, voluntary or private sector, this is the place to give your future a jet-propelled boost of inspiration, ideas and innovation. All the very best suppliers and providers combine with expert-delivered content in one unmissable event. Learn about the As-A-Service Model, Digital Acceleration, Emerging Tech, Hybrid and Multi-cloud, Sustainable Cloud and more. Come and see your digital transformation soar.

Register for your FREE ticket today: www.cloudexpoeurope.com/BTC-CEE


Cloud Expo Europe: 2-3 March 2022, ExCeL London
www.cloudexpoeurope.com

Part of Tech Show London (techshowlondon.co.uk), incorporating DevOps Live and Big Data & AI World.



In January the UK Government unveiled its 'Cyber Security Strategy: 2022 to 2030', the first governmental strategy of its kind, backed by £37.8 million in investment "to help local authorities boost their cyber resilience." This eight-year strategy represents a much-needed boost to public sector security, which has been subject to increasing attacks since the start of the pandemic.

According to Calvin Gan, Senior Manager, Tactical Defence at F-Secure, these attacks "revealed the weakness in security implementation, while the impact has been devastating for some institutions (leak of public health data, high ransom payments, or systems used for scam activities). Since the increase in attacks, it has become apparent that public sector systems need further security strengthening, while staff cybersecurity awareness has to be elevated further." Calvin continued, "With the call for better security practices, controls, and management in these institutions, the new strategy is a welcome move, especially when dedicated budgets are being allocated to improve the cybersecurity posture. The hope is that a lack of resources will no longer be the main blocker to better security improvements. Perhaps a first step is to look again at the entire estate of public sector systems and identify the current risks posed to them."

The Government strategy was also welcomed by Zeki Turedi, CTO EMEA at CrowdStrike, who commented: "The UK, in common with every democracy in the world, faces significantly increasing cybersecurity threats. The government is right to take action as these threats from state-sponsored adversaries and criminal groups continue to grow annually. The Cyber Security Strategy is a step in the right direction, especially with its emphasis on collecting events and identifying them before they become more serious incidents or breaches. Hand-in-hand with this is enhancing the UK's detection abilities, which the strategy also identifies. This is critical, as the faster there is visibility into the initial stages of an attack, the better chance there is to stop breaches."

You'll find more on the security outlook for 2022 in our predictions feature this issue, where a panel of industry experts share their cybersecurity forecasts for the next twelve months. We also have an article from Phil Dunlop at Progress on the need for proactive network monitoring for potential security breaches. The parameters and perimeters of our networks are evolving, with the need for hybrid working and edge networks becoming increasingly apparent as a result of the pandemic. It's safe to predict that the threat landscape will evolve alongside them too. NC

REVIEWS: Dave Mitchell
DEPUTY EDITOR: Mark Lyward (netcomputing@btc.co.uk)
PRODUCTION: Abby Penn (abby.penn@btc.co.uk)
DESIGN: Ian Collis (ian.collis@btc.co.uk)
SALES: David Bonner (david.bonner@btc.co.uk)
Julie Cornish (julie.cornish@btc.co.uk)
SUBSCRIPTIONS: Christina Willis (christina.willis@btc.co.uk)
PUBLISHER: John Jageurs (john.jageurs@btc.co.uk)

Published by Barrow & Thompkins Connexion Ltd (BTC)
35 Station Square, Petts Wood, Kent, BR5 1LZ
Tel: +44 (0)1689 616 000
Fax: +44 (0)1689 82 66 22


UK: £35/year, £60/two years, £80/three years
Europe: £48/year, £85/two years, £127/three years
ROW: £62/year, £115/two years, £168/three years

Subscribers get SPECIAL OFFERS — see subscriptions advertisement. Single copies of Network Computing can be bought for £8 (including postage & packing).

© 2022 Barrow & Thompkins Connexion Ltd. All rights reserved. No part of the magazine may be reproduced without prior consent, in writing, from the publisher.




WWW.NETWORKCOMPUTING.CO.UK @NCMagAndAwards JANUARY/FEBRUARY 2022 NETWORKcomputing 03



JANUARY/FEBRUARY 2022

SOFTWARE QUALITY.............20
Dr. Gareth Smith at Keysight Technologies discusses why software quality now determines business success, and how organisations can take steps to improve theirs

CYBERSECURITY IN 2022.....08
We asked a panel of industry experts for their cybersecurity predictions for the year ahead, while Phil Dunlop at Progress explains how proactive network monitoring tools can block breaches before they can occur

THE NETWORK EDGE...........22
Our feature on edge computing looks at the benefits of Smart Edge Monitoring and application-aware networks, and offers a guide to uniting business and IT at the edge

COMMENT.....................................3
Time for a security booster

INDUSTRY NEWS.............................6
The latest networking news


DEMYSTIFYING 5G..........................16
By David Fraser at Intel

YOUR I.T.?......................................17
By Erik Sonnerskog at zsah

A CLEARER VIEW OF VDIs................19
By Robert Belgrave at Pax8 UK

GAINING THE SMART EDGE............22
By Sanjay Radia at NETSCOUT

LIVING ON THE EDGE.....................23
By Russ Kennedy at Nasuni

UNIFIED AT THE EDGE....................24
By Reggie Best at NS1

NETWORKS....................................25
By Daniel Blackwell at Pulsant

ASSESSING THE RISK OF IoT...........27
By Matthew Margetts at Smarter Technologies

TALES OF THE UNEXPECTED............32
By David Higgins at CyberArk

DATA ARCHITECTURE............30
Is it time for UK businesses to step away from legacy systems and migrate to a new, fit-for-purpose data architecture? Toby Balfre at Databricks shares his thoughts

DOWNTIME........................34
Alan Stewart-Brown at Opengear considers the impact of network outages on staff wellbeing and explains how Smart Out-of-Band management can help ease the outage load




One of the largest cloud-managed network installations in Sweden enables secure, reliable public Wi-Fi and simplified network management

Superfast IT protects profitability for Wilkes Tranter with Arcserve ShadowProtect

VERITAS BACKUP EXEC.......................14
SOLARWINDS SQL SENTRY...............26








Progress introduces WhatsUp Gold Free Edition

Progress has announced a free edition of Progress WhatsUp Gold, its award-winning IT infrastructure monitoring software. WhatsUp Gold empowers operations teams to monitor and manage their business applications and the resources that support them to ensure high levels of performance and availability. The Free Edition includes network discovery, mapping, alerting, reporting and virtual monitoring for up to 20 devices simultaneously, or 10 devices when advanced add-on features such as Network Traffic Analysis, Application Performance Monitoring or Log Management are in use.

"The volume of connected endpoints for the typical enterprise network has grown exponentially over the past several years," said Jason Dover, VP, Product Strategy, Enterprise Application Experience, Progress. "By providing the Free Edition of WhatsUp Gold, Progress enables smaller organisations and Dev/Test teams to gain control and visibility that was previously unattainable."

Perle expands support for device access

Perle Systems, a global manufacturer of secure device networking hardware, has announced IOLAN SCG Console Servers that incorporate EIA-232, EIA-422, and EIA-485 signals on a single RJ45 interface, with support for up to 48 ports. The ability to individually configure each interface type using software commands enables organisations to access and support a variety of serial-based devices in their network.

IOLAN SCG Console Servers help organisations increase the value of their serial-based equipment by enabling secure serial data transmission across existing Ethernet networks without costly or complex infrastructure changes. Easy to set up and manage, the IOLAN SCG's software-selectable RS232/422/485 interfaces simplify configuration and eliminate the mechanical tampering associated with DIP switches. And because a standard unit can be shipped across multiple sites, regardless of the specific serial devices deployed at each location, last-minute hardware configurations are minimised and the total cost of deployment ownership is reduced.
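Console servers of this kind conventionally expose each serial port as a raw TCP service on the unit, which is what lets serial traffic ride an existing Ethernet network. As a rough illustration of that pattern (the host name, port number and command below are invented, and a production deployment would wrap the session in TLS or SSH), a client can talk to an attached serial device with nothing more than a socket:

```python
import socket

def send_serial_command(host: str, port: int, command: bytes,
                        timeout: float = 5.0) -> bytes:
    """Send one command to a serial device exposed over TCP by a
    console server, and return whatever the device writes back."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(command + b"\r\n")   # most serial CLIs expect CR/LF
        sock.settimeout(timeout)
        return sock.recv(4096)            # single read; loop for long replies

# Hypothetical usage: TCP port 10001 mapped to the first serial interface
# reply = send_serial_command("console-server.example.net", 10001, b"show status")
```

Mapping each physical serial port to its own TCP port is a common device-server convention; the actual numbering and security settings are configured on the unit itself.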

A united network solution for Old Trafford

Extreme Networks has entered a multi-year partnership with Manchester United to become the Club's Official Wi-Fi Network Solutions Provider and Official Wi-Fi Analytics Provider. The installation of Extreme Wi-Fi 6 access points at Old Trafford will begin later this year to transform the fan experience with fast, reliable Wi-Fi connectivity and increase the Club's capability to deliver high-performance, low-latency and secure digital services. Additionally, Extreme will help Manchester United access real-time network analytics to drive more personalised and informed decisions around both the fan experience and overall venue operations.

The deployment of Extreme Wi-Fi 6 access points at Old Trafford Stadium will power faster wireless speeds and low latency, providing the highest quality connection and a performance boost for secure fan-facing technology such as mobile ticketing and other digital offerings. ExtremeAnalytics will provide Manchester United with rich data sets and insights around performance and usage, dwell time and location-based services, in real time. As a result, the Club can continuously review and optimise venue management by identifying stadium bottlenecks, overcrowded concessions and other consumer traffic patterns, while gaining insights into fan activity to better customise experiences and pinpoint sponsorship opportunities.

Juniper Networks expands its SASE architecture

Juniper Secure Edge is the newest addition to Juniper Networks' Secure Access Service Edge (SASE) architecture. This new solution delivers Firewall-as-a-Service (FWaaS) as a single-stack software architecture, managed by Security Director Cloud, to empower organisations to secure workforces wherever they are. Secure Edge provides unified policy management from a single UI for all security use cases, meaning that policies can be created once and applied anywhere and everywhere, including user- and application-based access, IPS, anti-malware and secure web access within a single policy.
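The "create once, apply anywhere and everywhere" idea can be pictured as a single rule set that every enforcement point consults, rather than a separate ruleset copied to each device. The sketch below is a generic illustration of that pattern, not Juniper's API; the rule fields and values are invented:

```python
# One central rule table, consulted identically by any enforcement point.
# First matching rule wins; the final wildcard rule is a default deny.
POLICY = [
    {"app": "crm", "group": "sales", "action": "allow"},
    {"app": "*",   "group": "*",     "action": "deny"},
]

def evaluate(app: str, group: str) -> str:
    """Decide access for a (user group, application) pair. The same
    function can back a branch firewall, a cloud gateway or an agent."""
    for rule in POLICY:
        if rule["app"] in ("*", app) and rule["group"] in ("*", group):
            return rule["action"]
    return "deny"

print(evaluate("crm", "sales"))      # allow
print(evaluate("payroll", "sales"))  # deny
```

Because every enforcement point evaluates the same table, updating a rule in one place changes behaviour everywhere, which is the operational point of unified policy management.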




Juniper Secure Edge supports the remote workforce whether employees are in the office, at home or on the road, with secure user access to the applications and resources needed to do their job effectively. Security policies go with the user, protecting them, their device and their applications without having to copy over or recreate rulesets. Organisations can leverage existing investments and seamlessly transition to a full SASE architecture at their own pace.

Neos to deliver new dark fibre network for North-West UK

Neos Networks has been chosen by Jisc, supplier of a digital network and supporting services for the UK's higher education and research sector, to deliver a new dark fibre network spanning the North-West of the UK. The new network will replace Jisc's existing Janet North network, which currently serves the region. It will provide gigabit capability to all the sites using the new network, with some seeing a ten-fold speed increase compared to their current offering and all achieving high-capacity speeds of up to 100Gbps.

The contract was awarded following a competitive tendering process and is the latest instalment in Jisc's ongoing overhaul and rationalisation of 15 regional networks connected into the organisation's national backbone infrastructure. As well as this latest North-West network contract, Neos has previously secured the contracts to upgrade and merge two Midlands networks into one new high-speed, high-capacity network, and also the contract covering Jisc's South of England network.

CMS Distribution launches Cloud Powered by Flexiscale

CMS Cloud Powered by Flexiscale is a fully-featured cloud platform enabling the channel to provide a full range of cloud services under their own brand, without the time, cost and complexity of running their own infrastructure. Resellers can now provide their own cloud services portfolio to support digital transformation, including pre-packaged virtual appliances, IaaS, PaaS, Hosted Desktop and Hybrid Working Solutions, with full predictability of costs, whilst maintaining full commercial and contractual control.

"Flexiscale bringing CMS Cloud to our reseller base is a really exciting proposition," said Nick Bailey, Director of Vendor Alliances at CMS Distribution. "CMS is a real trusted source in IT and now we have the perfect vehicle to help VARs and resellers of all sizes to address their customers' digital transformation needs, whilst keeping the full relationship with that customer. CMS and Flexiscale can fully support an opportunity and provide local expertise quickly and reliably. Offering this service means that our partner base can more easily move to consumptive IT and all the benefits that go with that."

Cloud Expo Europe London 2022 is on the horizon

Cloud Expo Europe London 2022 is approaching fast. The leading global gathering of cloud specialists, service providers, innovators and business leaders takes place at ExCeL London from 2-3 March. Attendees will be able to discover the latest products and services from all the leading suppliers, immerse themselves in over one hundred hours of expert-delivered conference sessions, and meet with their peers.

Visitors to Cloud Expo Europe will meet hundreds of leading service providers, including IBM, Wasabi Technologies, MariaDB, Fujitsu, IONOS Cloud, OVH Cloud and Tencent Cloud, to name a few. Learn from hundreds of hours of conference sessions, delivered by industry-leading speakers sharing their challenges, successes and latest hardware. Don't miss Tim Berners-Lee, inventor of the World Wide Web, alongside speakers from Zoom, WHO, Lloyds Banking Group, McDonald's and Marks & Spencer.

Cloud Expo Europe 2022 sits at the heart of the most complete and exciting technology event gathering on the planet, Tech Show London, encompassing DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World and Data Centre World, creating an unmissable technology event. Join your peers, partners and friends face-to-face at Cloud Expo Europe, for the first time in two years, by registering for your free ticket at: www.datacentreworld.com/BTC









2022 CYBERSECURITY

Mike Sentonas, CrowdStrike CTO

2021 was a challenging year for security teams. Ransomware remains one of the most lucrative forms of cybercrime around and unfortunately, with cybercriminals becoming more sophisticated and advancing their intrusion techniques, it is expected to continue in 2022.

Along with gaining control of company systems and exfiltrating sensitive data, in the past year we have seen threat actors increase their use of the double extortion ransomware model. Here, threat actors threaten to leak this sensitive data, increasing the pain threshold for the victim, because companies don't want sensitive data leaked to the internet for competitors and journalists to see. According to CrowdStrike's 2021 Global Security Attitude Survey, 88% of UK businesses who paid an initial ransom were extorted for more money, paying out an additional $497,826 USD on average, when the initial ransom payment was already, on average, a hefty $1.22 million to $1.45 million USD. We anticipate that this double extortion ransom model will continue to grow in sophistication in 2022.

Another trend we have observed is the growth of the underground economy built around data exfiltration and extortion. The stress of a cyber breach very rarely stops once the organisation has or has not paid up. In addition to sensitive company data ending up on a public data leak site, some criminals have been known to sell files to each other or even to a competitor in a foreign market. This means that even if a company has paid one criminal gang, another could emerge from the shadows and demand precisely the same thing.

Ransomware is a pervasive problem, and for ill-equipped businesses there doesn't seem to be an end in sight. Cybercriminals are revamping their entire infrastructure of tactics, techniques and procedures (TTPs) and will continue to feast on under-resourced and unprepared security teams. It is vital for security teams in 2022 to better position themselves by patching any gaps in their cybersecurity posture to combat these persistent attacks.




John Morrison, Senior Vice President EMEA, Extreme Networks

Traditionally, most businesses have considered networks to consist of only two separate layers: software and hardware. In 2022, organisations will begin to discover the value of viewing their networks holistically and will come to appreciate how their networks are in fact multi-layered.

Going forward, networks will only continue to become more intricate and complex, with many more parts now comprising the whole. Thus, companies must reflect on their infrastructure in the same way: as a whole. They can do this by finding ways to combine the power of cloud management with next-generation switches and access points, utilising the likes of AI and ML, and deciding whether public cloud, private cloud, and/or on-premises solutions best cater to them. This approach allows them to achieve both diverse business connectivity and their commercial needs.

These actions are vital for firms to future-proof themselves and become what we call 'infinite enterprises': enterprises which are capable of scaling, meeting users wherever they are and delivering a consumer-centric experience where technology revolves around the user's needs. Making possible networks that can meet these goals reliably and securely will keep people connected, engaged and productive in the more distributed environment that is coming to shape our reality. Breaking out of this binary perspective and realising that networking technology is much more powerful and nuanced will be the key to success for firms in 2022 and beyond.

Guy Podjarny, Co-Founder & President, Snyk

2021 proved that supply chains are more susceptible than ever to cyberattacks. The risk is growing largely because of the increasing reliance on proprietary and open source code, and is compounded by the speed and complexity of modern apps, as well as the increasing sophistication of potential intruders. In 2022 we'd expect to see this trend continue, with geopolitical tensions still high and COVID continuing to drive businesses to become digital and embrace cloud faster.

However, there are things developers can do to mitigate further risk. They need to identify and fix weaknesses in the components they use, and invest in strong security hygiene practices. Security teams should embrace a DevSecOps approach, focusing on helping the people doing the work make secure decisions, and investing in breaking silos and increasing automation. While developers can't stop people from attempting to hack and exploit their systems, they can stop them from succeeding. Putting security at the heart of the development process is the only way to achieve that at scale.
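In practice, identifying weaknesses in the components you use starts with comparing each pinned dependency version against a database of known advisories. The sketch below is a toy illustration of that idea, not Snyk's tooling; real scanners pull advisories from curated vulnerability databases, and the package names and advisory data here are invented:

```python
# Hypothetical advisory data: package name -> versions with known flaws.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "widgetkit": {"2.3.0"},
}

def audit(dependencies: dict) -> list:
    """Return the dependencies pinned to a version with a known advisory."""
    return [
        f"{name}=={version}"
        for name, version in sorted(dependencies.items())
        if version in ADVISORIES.get(name, set())
    ]

flagged = audit({"examplelib": "1.0.1", "widgetkit": "2.4.0"})
print(flagged)  # only examplelib's pinned version appears in the advisory data
```

Running a check like this in CI on every build, rather than as an occasional manual review, is the automation-first habit the DevSecOps approach argues for.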

David Maidment, Senior Director Secure Device Ecosystem, Arm (a PSA Certified co-founder)

As the number of IoT devices has soared, the ecosystem has uncovered a number of security challenges in the bid to make devices more secure while adhering to the maturing regulatory landscape. In the last three years, an ecosystem of over 50 partners has collaborated around PSA Certified in order to provide a common framework for IoT security, which is critical to our connected future. Having a program that encourages broad adherence to regulations and that drives a common language in the growing ecosystem is vital.

In 2022, we expect perceptions of IoT security to shift from it being a cost to a necessary value. With laws, regulations and baseline requirements changing the way we see security, there's a growing recognition of the importance of best-practice security and the risks of inaction. Third-party evaluation and certification frameworks will continue to play an increasingly central role in driving consistency across markets and in building trust and assurance in connected devices.

This year we anticipate that the ecosystem will take proactive IoT measures to protect devices based on the Root of Trust. Moving away from siloed approaches to hardware security, leveraging cross-industry collaboration and embracing a secure-by-design culture will act as a catalyst for trusted IoT deployment at scale.



risk exposure is unacceptable. This means<br />

beginning to evaluate and implement the<br />

principles of a secure enterprise, starting first<br />

and foremost with understanding security<br />

compromises will happen as cyber hackers<br />

deploy more sophisticated attacks. Tech<br />

pros should also implement detection,<br />

monitoring, alerts, and response along the<br />

kill chain and engage in red team/tabletop<br />

exercises to measure effectiveness.<br />

implementation. Additionally, the third-party<br />

cloud providers used by companies must be<br />

scrutinised for their data protection<br />

methodology and overall security culture.<br />

Thomas LaRock, Head Geek, SolarWinds<br />

Cybercrime has reached a new peak with<br />

the onslaught of ransomware attacks and<br />

data breaches in the last several months.<br />

The 2021 SolarWinds IT Trends Report<br />

details how organisations experienced<br />

medium exposure to enterprise IT risk over<br />

the past year. Although the survey<br />

respondents felt their existing risk mitigation<br />

and management policies/procedures were<br />

sufficient, it's absolutely critical for<br />

organisations and tech pros to adopt a<br />

mentality where even "medium" risk<br />

exposure is unacceptable.<br />

We expect to see two trends emerge in<br />

<strong>2022</strong> in response to the evolving threat<br />

landscape. As the rate of attacks continues<br />

to accelerate in lockstep with hackers' attack<br />

methodologies and schemes developing at<br />

scale, more tech professionals and<br />

organisations will look to cloud service<br />

providers, managed service providers<br />

(MSPs) and managed security service<br />

providers (MSSPs), and other third-party<br />

security tools (like those offered by Microsoft<br />

365® subscriptions) to supplement their<br />

own IT policies and keep pace with the new,<br />

more effective security measures.<br />

Tech pros and the IT community at large<br />

will better secure the enterprise by<br />

normalising a sense of risk aversion - that is,<br />

moving from simply accepting the current<br />

exposure to a mindset where any level of<br />

Craig Lurey, Co-Founder & CTO,<br />

Keeper Security<br />

2021 saw a record number of cyberattacks<br />

and data breaches. We expect this to<br />

escalate in <strong>2022</strong> with the permanent shift to<br />

a remote workforce for many organisations.<br />

There are growing concerns around data<br />

leaks as employees remotely access<br />

corporate data and infrastructure from<br />

company-issued and personal devices like<br />

laptops and mobile phones. These devices<br />

and employees are, unfortunately, prime<br />

targets for data leakage and device<br />

infection. Additionally, the expanded usage<br />

of cloud-based services and data storage<br />

also expands the footprint and potential<br />

sources of data leaks, whether accidental or<br />

through 3rd party breaches.<br />

The most important thing business leaders<br />

can do when it comes to remote work<br />

vulnerabilities is to develop strong access<br />

management protocols. This means<br />

establishing a zero-trust framework as a<br />

non-negotiable component of any security<br />

Daniel dos Santos, Research Manager at Forescout Research Labs

Ransomware will continue to dominate the cybersecurity space in 2022. As it is a relatively simple form of attack that can be highly effective and profitable, bad actors will start broadening the devices and technologies they go after.

We will see more attacks on vulnerable IoT devices that will be used as a gateway to gain access to company systems. Third-party software and devices with known vulnerabilities that are hard to fix will also increasingly move into the spotlight, as they allow cybercriminals to cause huge disruptions. And compromised Operational Technology systems will be the Holy Grail for many bad actors, giving them an iron grip on the organisation they want to extort.

As ransomware attacks evolve in 2022 and beyond, so will the cyber defences that companies need to invest in to adequately protect themselves. These must include full device visibility and control tools, ongoing cybersecurity audits and maintenance, stringent policies that are regularly updated, as well as powerful network segmentation solutions that, in the worst-case scenario, can limit the fallout of an attack. NC



MARCH 9TH & 10TH 2022







It's not enough to know the likelihood of a cyber attack, its impact or average cost. It's not even enough to have security alerts set up on all systems, with false positives distracting teams. The only way to protect an organisation's network from security breaches, internal or external, is with the most proactive network monitoring. Enterprises need to take their investment in network monitoring seriously to secure their tech assets.

Cybercriminals are not choosy about industry or organisation size, and while large enterprises might be able to afford to take the hit of an attack, a smaller company can be wiped out by one. It's fair to say that organisations don't truly value their networks as a critical tech asset - but they hold the key to minimising the risk of cyber attacks.

As businesses expanded their digital ecosystems during the pandemic, adding in apps, technology and remote-based users, this increase in touch points revealed more attack surfaces for cybercriminals to target. Critical data is at risk, with hackers gaining intelligence about networks, connections and vulnerabilities to break through any chink in your security armour. To add to this, users have become more complacent, doubling the risk.

It's vital to have the most proactive network monitoring tools, which can detect suspicious activity and thwart any potential breaches before they happen. Using the right security tools, the network can offer a heads-up that a breach is occurring, and clues to conduct forensics to learn the details and block further attacks. The main considerations for staying a step ahead of attack agents are to obtain deeper visibility and holistic mapping of your network infrastructure and attached applications, services, and devices. Knowing your vulnerabilities, the potential threats and the earliest ways to detect network breaches is vital.


Data breaches have been around ever<br />

since the existence of data, but the facts<br />

remain that they are getting bigger, and on<br />

the rise. The UK Government's Cyber<br />

Breaches Survey 2021 reported that four in<br />

ten businesses (39%) have experienced<br />

cyber security breaches or attacks in the<br />

last 12 months.<br />

No industry sector is safe from cyber<br />

threats, with even 26% of charities having<br />




a breach in the last 12 months, and hardly<br />

a day goes by without yet another breach<br />

notification by an organisation or cyber<br />

attack alert against a country by a nation-state<br />

actor.<br />



According to the 2021 IBM Cost of a Data<br />

Breach Report, the cost of data breaches<br />

has risen 10% in the last year, the biggest<br />

increase in the last seven years. The cost of<br />

a breach rose from $3.86 million to $4.24<br />

million, the highest recorded.<br />

Moreover, costs were even higher when<br />

remote working was presumed to be a<br />

factor in causing the breach, increasing to<br />

$4.96 million. According to IBM's report,<br />

the average cost was $1.07 million higher<br />

in breaches where remote work was a<br />

factor in causing the breach, compared to<br />

those where remote work was not a factor.<br />

The percentage of companies where<br />

remote work was a factor in the breach<br />

was 17.5%.<br />

It's also taking longer for tech teams to<br />

diagnose breaches. The IBM Report found<br />

that it takes an average of 287 days to<br />

identify and contain a data breach, with the<br />

healthcare and financial industries spending<br />

the longest in the data breach lifecycle.<br />



You know your own network better than<br />

anyone else - or should do. There are<br />

some instrumental ways to deflect attacks<br />

by using this network knowledge to your<br />

advantage. An effective network traffic<br />

analysis solution will help spot the hackers<br />

and avoid business disruption, a six or<br />

seven figure bill, or even worse.<br />

1. Set and enforce network security policies<br />

If you don't already, you should have good<br />

network hygiene in place to ensure that<br />

everything is disciplined and set up as it<br />

should be. This means ensuring that the<br />

network is configured and running through<br />

policies, thresholds, and alerts.<br />

Establishing important policies - such as<br />

how bandwidth is allocated, how the network<br />

is segregated, and which websites are<br />

blocked - is critical. Your network<br />

monitoring tool should make regular<br />

checks that compliance is met, and flag to<br />

IT administrators if any requirements are<br />

outstanding.<br />
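A compliance sweep of this kind can be sketched in a few lines. The policy names, thresholds and device records below are entirely hypothetical, not drawn from any particular monitoring product:

```python
# Hypothetical compliance check: compare each device's settings
# against policies like those described above (bandwidth caps,
# segmentation, web filtering). All names here are illustrative.

POLICY = {
    "max_bandwidth_mbps": 500,   # per-device bandwidth allocation
    "segment_required": True,    # device must sit in a named VLAN
    "blocklist_enabled": True,   # web filtering must be on
}

def compliance_gaps(device: dict) -> list[str]:
    """Return the policy requirements a device fails to meet."""
    gaps = []
    if device.get("bandwidth_mbps", 0) > POLICY["max_bandwidth_mbps"]:
        gaps.append("bandwidth over allocation")
    if POLICY["segment_required"] and not device.get("vlan"):
        gaps.append("not segmented")
    if POLICY["blocklist_enabled"] and not device.get("blocklist"):
        gaps.append("web filtering disabled")
    return gaps

devices = [
    {"name": "core-sw1", "bandwidth_mbps": 200, "vlan": "mgmt", "blocklist": True},
    {"name": "guest-ap3", "bandwidth_mbps": 800, "vlan": None, "blocklist": False},
]

for d in devices:
    gaps = compliance_gaps(d)
    print(f"{d['name']}: {'OK' if not gaps else 'FLAG: ' + ', '.join(gaps)}")
```

A real monitoring tool would run checks like this on a schedule and raise the flagged items to IT administrators.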

2. Find rogue devices<br />

Through the process of discovery,<br />

automated network monitoring can find<br />

new devices such as Wi-Fi access points<br />

and secure these entry points. New wireless<br />

routers are a well-known goldmine for<br />

hackers, so it's important to identify and<br />

secure them, or take them offline.<br />
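The logic behind such a discovery sweep is simple to sketch: anything seen on the wire that the asset inventory cannot account for is a rogue candidate. The MAC addresses below are illustrative only:

```python
# Hypothetical discovery sweep: any MAC address seen on the
# network that is absent from the asset inventory is flagged as a
# possible rogue device (e.g. an unauthorised Wi-Fi access point).

known_inventory = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def find_rogues(discovered: set[str]) -> set[str]:
    """MACs on the network that no inventory record accounts for."""
    return discovered - known_inventory

seen = {"aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"}
for mac in sorted(find_rogues(seen)):
    print(f"rogue candidate: {mac} - verify, secure or take offline")
```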

3. Spot Distributed Denial of Service<br />

(DDoS) attacks early<br />

Distributed Denial of Service (DDoS)<br />

attacks are among the most common and<br />

devastating forms of attack. These<br />

malicious attempts to disrupt the normal<br />

traffic of a targeted server, service or<br />

network by overwhelming the target with a<br />

flood of Internet traffic, prevent traffic from<br />

moving. Computers as well as other<br />

networked resources such as IoT devices<br />

can be affected. Acting on early signs is<br />

vital to mitigating their impact. Since network<br />

monitoring continually tracks all your traffic<br />

flows and alerts IT to any anomalies, you<br />

might notice traffic increasing beyond the<br />

point of your preset baselines. The system<br />

has insight into what constitutes a normal<br />

traffic spike, and what indicates a problem<br />

such as DDoS.<br />

In locating these traffic spikes and what<br />

devices may be flooded, you're<br />

immediately a step ahead. Applications<br />

slow, packets are lost, and the network<br />

suffers from unacceptable latency.<br />

Without this continual network<br />

monitoring, DDoS attacks can easily go<br />

unnoticed.<br />
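A baseline comparison of this kind can be illustrated in a few lines. The traffic figures and the three-sigma threshold below are hypothetical, standing in for whatever baselines a monitoring tool maintains:

```python
# Minimal anomaly check against a preset baseline, as described
# above: alert when traffic exceeds the normal range by a wide
# margin. Thresholds and sample figures are illustrative only.

from statistics import mean, stdev

def is_anomalous(history_mbps: list, current_mbps: float,
                 sigma: float = 3.0) -> bool:
    """Flag traffic more than `sigma` deviations above the baseline."""
    baseline, spread = mean(history_mbps), stdev(history_mbps)
    return current_mbps > baseline + sigma * spread

normal_day = [310, 295, 320, 305, 290, 315]   # Mbps samples
print(is_anomalous(normal_day, 330))   # ordinary spike -> False
print(is_anomalous(normal_day, 2400))  # possible DDoS flood -> True
```

A production system would, of course, maintain rolling baselines per interface and per time of day rather than a single static sample set.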

4. Spot data exfiltration and dark web use<br />

Bandwidth is a key indicator for many<br />

security and performance issues, so<br />

Network Traffic Analysis is vital. In<br />

analysing NetFlow, NSEL, sFlow, J-Flow,<br />

and IPFIX you can see details of<br />

resources, departments, groups or even<br />

individuals using the bandwidth. In<br />

tracking these trends, any suspicious<br />

behaviour shows up, such as botnet<br />

attacks and network takeovers, exfiltration<br />

of data by cybercriminals, DDoS attacks,<br />

and data mining.<br />

Network Traffic Analysis is invaluable for<br />

security forensics, discovering unauthorised<br />

applications, tracking traffic volumes<br />

between specific pairs of source and<br />

destinations, and finding high traffic flows<br />

to unmonitored ports. It can monitor all<br />

network sources for known Tor ports and<br />

spot or block access to the Dark Web.<br />
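The kind of flow analysis described above can be sketched as follows. The flow records are simplified stand-ins for real NetFlow/IPFIX exports, and the port list is illustrative rather than an authoritative Tor signature:

```python
# Illustrative pass over flow records (fields simplified from what
# NetFlow/IPFIX exporters provide): total bandwidth per source,
# plus a flag for flows to well-known Tor ports.

TOR_PORTS = {9001, 9030, 9050, 9051}  # commonly associated ports

flows = [
    {"src": "10.0.1.5", "dst_port": 443,  "bytes": 120_000},
    {"src": "10.0.1.5", "dst_port": 9001, "bytes": 80_000},
    {"src": "10.0.2.9", "dst_port": 80,   "bytes": 45_000},
]

usage: dict = {}
suspects = []
for f in flows:
    # accumulate bandwidth per source host
    usage[f["src"]] = usage.get(f["src"], 0) + f["bytes"]
    # flag hosts touching Tor-associated ports
    if f["dst_port"] in TOR_PORTS:
        suspects.append(f["src"])

print(usage)     # bytes per source host
print(suspects)  # hosts touching Tor ports
```

The same accumulation pattern extends naturally to per-department or per-application rollups once flows are tagged with that metadata.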


GUARD<br />

Early detection is vital to preventing or<br />

minimising attack impact. An organisation's<br />

network can be the key indicator for a<br />

potential breach, spotting signs and<br />

patterns that can be flagged for forensic<br />

analysis, to learn the details and prevent<br />

further attacks.<br />

The network is the main IT highway<br />

along which attacks travel, and it is a prime<br />

attack surface for cybercriminals. It's<br />

therefore critical to be a step ahead of<br />

hackers through using deep visibility and<br />

holistic mapping of your network<br />

infrastructure and attached applications,<br />

services, and devices. Only by gaining this<br />

visibility through advanced network<br />

monitoring can you safeguard vulnerable<br />

areas and head off breaches as they<br />

attempt to gain access. <strong>NC</strong><br />



Veritas Backup Exec<br />




The most valuable asset that an<br />

organisation possesses is<br />

information. Its stock in trade -<br />

physical assets, products and cash in the<br />

bank - can be counted, but without<br />

access to the information it needs to run<br />

its business, it is powerless and<br />

incapacitated. It cannot run its accounts,<br />

progress work, complete its projects or<br />

pay its employees.<br />

Consequently that data has to be<br />

protected from all threats to its integrity,<br />

from computer and software<br />

malfunctions, operator negligence,<br />

communication and procedure problems<br />

to criminal activity. It also has to be easy<br />

to implement and mandatory in its<br />

processes. Each and every employee in<br />

an organisation, whether remote or on<br />

site, has to be part of the solution, and<br />

every bit of data stored needs to be<br />

backed up so that, in the event of a<br />

failure anywhere, the most recent<br />

information is immediately available to<br />

restore the system to full operating<br />

efficiency.<br />

This is the focus of a full, holistic, data<br />

management and security system, Backup<br />

Exec, developed by Veritas, who have just<br />

released their latest software revision.<br />

Holistic implies a total solution with all<br />

elements interconnected and made<br />

effective by reference to the whole.<br />

Whether a company is running all types<br />

of physical systems, operating within<br />

Windows, Linux, UNIX and AIX, and using<br />

any kind of data storage device from<br />

disk, tape, VTL, Cloud, HCI, OST and<br />

others, every format and function is<br />

subordinate to the process of saving and<br />

securing data.<br />

As valuable as data is, however,<br />

different organisations have different<br />

requirements, and Veritas offers its unified<br />

data security solution at different levels,<br />

allowing companies to choose what level<br />

of protection they need, what to back up,<br />

where to store it and how to pay for it.<br />

This is probably particularly relevant in<br />

the current working environment, with<br />

increasing numbers of people opting to<br />

work from home, using insufficiently<br />

secured private computer systems.<br />


One of the main features of the latest<br />

release is a focus on ransomware.<br />

Criminal organisations like to take the<br />

easy way out. They used to mount<br />

physical assaults on a company's assets<br />

but became exposed by having to turn<br />

those assets into cash. Hence the<br />

exponential increase in the use of<br />

ransomware, which describes exactly<br />

what it does - the freezing of an<br />

organisation's information by accessing<br />

and encrypting its data files, pending the<br />

payment of a large ransom. Gaining<br />

access to a company's information is<br />

quick and easy if it resides in an<br />

unprotected environment, and little<br />

further action is required but to wait for<br />

the ransom to be paid - or otherwise.<br />

To counter this, Veritas has introduced<br />

Ransomware Resilience - a feature of<br />

Veritas Backup Exec, the leading data<br />

management and security solution, which<br />

has been providing organisations with<br />

simple and secure data protection for<br />

some time. Backup Exec's Ransomware<br />

Resilience prevents data files on a wide<br />

range of media servers from being<br />

modified by unauthorised processes. It<br />

uses AI processes to monitor and actively<br />

inform administrators about data attacks.<br />

Ransomware Resilience is just one of a<br />

number of vital data management tools<br />

available in the unified backup and<br />

recovery solution. Information is<br />

perpetually in a fluid state and can be<br />

held on private, public or hybrid clouds in<br />

Microsoft, Linux, UNIX or virtual<br />

workloads. Integrated with VMware,<br />

Microsoft and Linux platforms Backup<br />

Exec can protect one to thousands of<br />

servers and virtual machines from one<br />

user console.<br />






The speed of recovery is also crucial.<br />

Backup Exec provides Instant Recovery<br />

and Recovery Ready capabilities for<br />

VMware and Hyper-V virtual machines,<br />

and Instant Cloud Recovery with<br />

seamless failover for Microsoft's Azure<br />

Cloud in case of disaster. Support is also<br />

available for other generic S3<br />

compatible cloud storage solutions like<br />

AWS and Google.<br />

That's, potentially, a lot of data flying<br />

about at some speed, and to cut down on<br />

disk space and data transmission rates,<br />

Backup Exec uses deduplication wherever<br />

it can, which eliminates unnecessary<br />

duplication and, with it, disk space and<br />

data processing requirements.<br />
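The principle behind deduplication can be illustrated with a toy example. This is a generic block-level sketch, not Veritas's actual implementation; the chunk size and data are arbitrary:

```python
# Sketch of block-level deduplication in the spirit described
# above: identical chunks are stored once and referenced by hash
# thereafter, saving disk space and transmission volume.

import hashlib

def dedup_store(data: bytes, chunk_size: int = 4):
    """Split data into chunks; store each unique chunk only once."""
    store, refs = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # unique chunks only
        refs.append(digest)              # order preserved via refs
    return store, refs

store, refs = dedup_store(b"ABCDABCDEFGH")
print(len(refs), "chunks referenced,", len(store), "stored")
# The original data is recoverable by following refs into the store.
```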


Veritas, and for a time its former<br />

parent Symantec, have been<br />

providing structured data protection for<br />

the last 30 years. During that time they<br />

have developed their solutions to be<br />

simple and manageable, enabling users<br />

to quickly spot, track and monitor every<br />

backup and recovery.<br />

Dashboards and wizards are used<br />

throughout Backup Exec to provide job<br />

and backup status and to progress them<br />

with a few simple clicks. Users can be<br />

assured that their data is fully protected<br />

against any system or physical threat,<br />

and, just as important, that they have<br />

met regulatory requirements - a benefit<br />

often overlooked that might enable<br />

organisations to reduce their corporate<br />

insurance costs. As a unified solution,<br />

implementation of Backup Exec also<br />

eliminates the need to source niche or<br />

other third party applications to plug any<br />

gaps in the protection offered.<br />


PLANS<br />

Not every organisation needs the same<br />

level of functionality or license terms. The<br />

former comes at Bronze, Silver and Gold<br />

levels, which determine the level of<br />

support required, and organisations can<br />

opt to buy a perpetual license, which will<br />

be upgraded automatically as new<br />

versions are released, or fixed term<br />

subscription licenses which come with the<br />

same level of support but only for the<br />

term of the license. Upgrades are, of<br />

course, always available to suit a<br />

customer's growing needs.<br />

Veritas Backup Exec is available in 190<br />

countries, offering unified data protection<br />

to many organisations within each. It goes<br />

beyond this, however, as compliance<br />

regulations are not the same in each of<br />

these countries. Veritas suggests that<br />

organisations in countries where this may<br />

be of some concern check with Veritas to<br />

ensure that their chosen solution conforms<br />

to local regulations - for example, cloud<br />

deduplication capabilities are now<br />

supported for Google Cloud in Delhi,<br />

Melbourne and Toronto, and Microsoft<br />

Azure in East Asia (Hong Kong), Korea<br />

Central (Seoul), Norway East (Oslo) and<br />

Switzerland North (Zurich).<br />

There are probably three key<br />

parameters that you need to remember<br />

when you consider Veritas Backup Exec -<br />

multi-cloud, virtualisation and security.<br />

Multi-cloud relates to a unified solution<br />

that covers the vast majority of temporary<br />

and permanent systems that you would<br />

find in any organisation. Virtualisation<br />

reflects the fluid state of information as it<br />

progresses through an organisation, and<br />

security, quite simply, is the adoption of<br />

an effective backup and recovery solution<br />

that guarantees complete data<br />

protection. Veritas Backup Exec 21.4 is<br />

the enhanced 'go-to' application to<br />

protect your data with fast and effective<br />

protection or recovery. <strong>NC</strong><br />

Product: Veritas Backup Exec<br />

Supplier: Veritas<br />

Web site: www.veritas.com<br />


OPINION: 5G<br />






The 5G network paves the way to major<br />

innovation in the field of<br />

telecommunications, from industrial<br />

automations, telemedicine, self-driving cars<br />

to augmented reality. It also enables the<br />

implementation of a connected network<br />

between Internet of Things (IoT), Edge and<br />

the Cloud to meet demand and enable real-time<br />

optimisation.<br />

In the UK, the 5G network was established in<br />

2019 and was initially introduced by two<br />

network providers - EE and Vodafone. Since<br />

then, all four major communication services<br />

providers (now including Three and O2) are<br />

rolling out 5G across the UK. Ofcom's annual<br />

Connected Nations Report published last<br />

December reveals a significant increase in the<br />

use of 5G devices over the past twelve months<br />

in the UK. The report also highlights the<br />

majority of UK homes are located in an area<br />

with outdoor 5G coverage. 1<br />

However, for 5G to reach its full potential,<br />

there are a number of misconceptions<br />

surrounding the technology which need to<br />

be clarified.<br />

1: Previous network generations have similar<br />

capabilities to 5G<br />

In reality, each network generation is classified<br />

based on a set of telephone network standards.<br />

Compared with older cellular and wireless<br />

standards, 5G is designed to deliver better<br />

connectivity between people and businesses.<br />

For example, while earlier generations<br />

operate mainly in bands below 3GHz, 5G<br />

adds higher-frequency spectrum, including<br />

millimetre-wave bands, giving it far greater<br />

capacity and enhanced connectivity speeds.<br />

Additionally,<br />

older network generations are limited to a<br />

smaller number of channels compared to 5G,<br />

which means that users operating within the<br />

5G range can have multiple devices<br />

accessing the same network without the fear<br />

of overcrowding. With 5G allowing many<br />

more devices to connect to the network, it has<br />

enabled businesses to gather and act on a<br />

greater amount of previously untapped data.<br />

5G also delivers up to 1,000x more capacity<br />

than 4G, creating a favourable environment<br />

for IoT deployment. This benefits a number of<br />

industries on a global scale from<br />

manufacturing, agriculture, retail to<br />

healthcare and smart city infrastructure.<br />

2: The deployment of the 5G network<br />

increases security risks<br />

This is in fact true: adopting any new<br />

technology is likely to increase the risk of<br />

security breaches. Perimeter-based security is<br />

no longer sufficient to secure the core 5G<br />

network, due to its extended surface and the<br />

number of entry points that can be exploited.<br />

However, security professionals have<br />

developed integrated solutions to help<br />

prevent potential security risks at an early<br />

stage. Planning and investment early on are<br />

crucial for any new technology as it helps to<br />

create a sustainable cybersecurity strategy.<br />

3: The 5G roll-out is too slow to ever reach<br />

industry expectations<br />

The truth is that industries are in a constant<br />

state of evolution and 5G is just at the<br />

beginning of its implementation. 5G has huge<br />

revenue-generating potential for businesses<br />

who can develop quality personalised services<br />

in hours, rather than weeks or months.<br />

However, the speed of the roll-out is strongly<br />

influenced by the global network infrastructure.<br />

We must switch to a cloud-native, software-defined<br />

infrastructure to be able to reach the<br />

full potential of 5G.<br />

4: The capabilities of 5G are limited to<br />

phone devices<br />

5G goes far beyond the use of a cell phone.<br />

Society today is constantly evolving and with<br />

the introduction of technology such as 5G,<br />

we're striving to become more connected than<br />

ever. In a world of connected devices, smart<br />

homes and cities will eventually be powered by<br />

the 5G network. From houses that give<br />

personalised energy-saving suggestions which<br />

reduce environmental impact to traffic lights<br />

that change their patterns based on traffic flow,<br />

5G applications relying on added capacity will<br />

be available on all network connected devices.<br />

Undoubtedly, the future of technology will be<br />

shaped by the mass roll-out of 5G. For it<br />

to be successful, however, fundamental<br />

changes must be made to the Cloud, the network<br />

and our devices. <strong>NC</strong><br />

1 Connected Nations 2021: UK report<br />

(ofcom.org.uk)<br />









Kubernetes is a standardised, open-source<br />

program for managing<br />

containerised workloads and<br />

programs. It is claimed to be many things<br />

and has a great many fans in the tech<br />

space - and with good reason.<br />

Kubernetes has a whole raft of benefits. It is<br />

incredibly efficient, improving workloads and<br />

response times across broad IT infrastructures,<br />

and ultimately resulting in more portability<br />

(although not a cure-all), shortened software<br />

development cycles, and reduced cloud-data<br />

consumption. Naturally, this in turn leads to<br />

quicker and cheaper IT projects.<br />

According to a 2021 study by VMware,<br />

95% of respondents reported clear benefits<br />

from using Kubernetes, with 56% choosing<br />

resource utilisation as the main<br />

advantage, and 53% pointing to shortened<br />

software development cycles.<br />

Despite this clear show of confidence in this<br />

open-source tool, not every business will find<br />

it the cure-all solution it is hyped up to be.<br />

Kubernetes architecture is a good fit for web-scale<br />

organisations, and there is no doubt<br />

about that - trust me, I am certainly an<br />

advocate. However, to recommend it to all<br />

businesses is not fair or accurate - and may<br />

simply be overkill. After all, there is a reason<br />

that Google, Spotify, Airbnb, Tinder, Reddit,<br />

and many other giant web-based<br />

organisations use it.<br />


Kubernetes is not easy to configure manually,<br />

monitor or optimise. Its preferred and default<br />

settings are automated, and doing things any<br />

other way is not easy. These settings favour<br />

larger scale deployments, the kind that<br />

wouldn't configure individual workloads<br />

manually anyway. This again becomes an<br />

issue when it comes to monitoring. For<br />

example, if servers in a cluster were running<br />

at 25% capacity, it wouldn't tell you -<br />

meaning you are wasting money on an over-provisioned<br />

infrastructure. Again, as this is an<br />

issue that gets more problematic the smaller<br />

your organisation and IT system is, it favours<br />

the bigger players.<br />
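The utilisation gap described above is easy to illustrate. The node names and figures below are invented, standing in for what a metrics API or `kubectl top nodes` would report:

```python
# Hypothetical utilisation report of the kind Kubernetes won't
# volunteer on its own: flag nodes running well below capacity,
# i.e. money spent on over-provisioned infrastructure.

nodes = {
    "worker-1": {"cpu_used": 2.0, "cpu_total": 8.0},  # 25% utilised
    "worker-2": {"cpu_used": 6.5, "cpu_total": 8.0},  # ~81% utilised
}

def underutilised(nodes: dict, threshold: float = 0.30) -> list:
    """Nodes running below `threshold` CPU utilisation."""
    return [name for name, n in nodes.items()
            if n["cpu_used"] / n["cpu_total"] < threshold]

print(underutilised(nodes))  # candidates for consolidation
```

In practice this kind of report would be wired up to real cluster metrics and run continuously, rather than against a static snapshot.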


Firstly, Kubernetes is fractured, and comes<br />

with many parts that aren't considered simple<br />

to put together and utilise. Although a slightly<br />

chaotic development may be characteristic of<br />

open-source projects, this one, unlike others,<br />

lacks a streamlined way to control it. Your<br />

typical Linux distribution consists of many<br />

different pieces of software too, but unlike<br />

Kubernetes, you can install and manage all<br />

of them in a more centralised way.<br />

Furthermore, these different parts in other<br />

open-source software are more similar than<br />

they are different. For example, Red Hat and<br />

Ubuntu are different, but similar, whereas if<br />

you wanted to go from OpenShift to VMware<br />

Tanzu (both within Kubernetes) you'd face a<br />

significant amount of learning to do. Not<br />

ideal for a one-man-band IT team.<br />

Secondly, it runs purely on code. Every input<br />

needs unique code written, specially<br />

augmented for each purpose. Again, this isn't<br />

an impossibility, it is just a lot of effort, skill<br />

and time spent - so naturally benefits large-scale<br />

uses and users that have time, expertise<br />

(and ideally a whole team) dedicated to it.<br />

Do you see a pattern here? Ultimately,<br />

Kubernetes requires a certain level of<br />

dedication that is just not worth it unless the<br />

pay-off is significant, or the infrastructure<br />

impacted expansive. This means that its<br />

complex nature can cause problems<br />

concerning one's local development<br />

environment, which could then impact<br />

productivity throughout the business.<br />


However, by no means are these limitations a<br />

criticism of its brilliance. In fact, Kubernetes,<br />

when used in the appropriate context, and<br />

with realistic and relative aims, can work<br />

wonders for businesses looking for cost-effectiveness.<br />

If you were to look back three<br />

or five years ago, it would have been a bold<br />

move to thrust Kubernetes into operations.<br />

Back in those times, it was an unknown with a<br />

lot to prove, but today, this could not be<br />

further from the case - with hundreds of<br />

thousands of IT teams using Kubernetes daily<br />

within their operations. <strong>NC</strong><br />






In partnership with NetNordic, Extreme<br />

Networks has established one of the largest<br />

cloud-managed network infrastructures in<br />

Borås Stad, Sweden, transforming the<br />

municipality into a smart city. The new<br />

infrastructure delivers faster and more<br />

advanced connectivity, extending secure public<br />

Wi-Fi for its citizens, local government,<br />

schools, and services, while automating and<br />

simplifying network management for the IT<br />

team. The transition to smart cities is designed<br />

to provide more sustainable resources to<br />

residents, while improving quality of life and<br />

fueling business innovation.<br />

Municipalities in Sweden are required by law<br />

to provide critical welfare services such as<br />

schools, childcare, social services, and elderly<br />

care, among others. The departments and<br />

institutions that power these services require a<br />

robust, secure network infrastructure to share<br />

information seamlessly and securely. As a<br />

result of the global pandemic, Borås Stad has<br />

also worked to roll out new services, such as<br />

Wi-Fi connected medical wristbands which<br />

allow immediate contact with doctors, real-time<br />

heartbeat monitoring, and user location<br />

information - making reliable, high-speed<br />

Wi-Fi critical for proper care.<br />

The ExtremeCloud IQ platform reduces the<br />

complexity of network management,<br />

streamlines operations, lowers maintenance<br />

costs, and provides visibility into actionable<br />

data and insights from network usage to<br />

performance.<br />

Borås Stad as a smart city is another example<br />

of how Extreme Networks is helping to lay the<br />

groundwork to integrate 5G and Wi-Fi and<br />

deliver cloud-based networking services to<br />

gain more visibility and better manage<br />

networks across the city. Extreme helps provide<br />

a seamless connectivity and authentication<br />

experience between 5G and Wi-Fi networks,<br />

ensuring uninterrupted connectivity for users.<br />

Key Benefits of the new infrastructure include:<br />

Advanced public Wi-Fi connectivity: Borås<br />

Stad has deployed approximately 3,500<br />

ExtremeWireless Wi-Fi 6 Access Points to<br />

deliver reliable coverage, improved<br />

network capacity, and faster data speeds<br />

across the city's services. As a result, Borås<br />

Stad can deliver seamless digital<br />

experiences in sectors such as education<br />

and healthcare, attracting businesses to the<br />

city and supporting rapid economic growth<br />

in the region.<br />

Streamlined network management and<br />

insightful data: ExtremeCloud IQ has given<br />

the city's IT staff full oversight of its network<br />

infrastructure, enabling nearly all<br />

operations to be accessed and viewed in a<br />

single cloud network management<br />

solution. By reducing the complexity of<br />

managing the networking infrastructure,<br />

the 3-person engineering team can scale,<br />

manage, and maintain over 3,500 access<br />

points and millions of connected devices<br />

with ease. Additionally, ExtremeAnalytics<br />

enables the team to optimise the network<br />

and leverage insightful usage trends to<br />

improve consumer experiences.<br />

Andrzej Kardas, Chief Technology Officer,<br />

Borås Stad, said: "Extreme Networks has been a key<br />

partner in helping us to build a smart city that<br />

meets the current and future demands of our<br />

citizens. Leveraging Extreme's solutions, we've<br />

created an advanced cloud-managed network<br />

that helps us roll out new initiatives through<br />

seamless, world-class public Wi-Fi - with<br />

minimal overhead, management, and<br />

maintenance required on our end. We're<br />

proud to have established Borås Stad as a<br />

modern and dynamic smart city."<br />

Boris Germashev, Senior Regional Director<br />

Northern & Eastern Europe at Extreme<br />

Networks added, "Borås Stad was struggling<br />

to provide its growing population with fast and<br />

reliable public Wi-Fi, resulting in connectivity<br />

issues that absorbed a lot of the IT teams'<br />

time. But with the deployment of<br />

ExtremeCloud IQ, Borås Stad has proven that<br />

with the right cloud-managed network and<br />

solutions, a city can cost-efficiently streamline<br />

all networking operations, increase<br />

connectivity, and enhance services without<br />

complicating its fundamental infrastructure.<br />

Borås Stad has now established a strong and<br />

functional network to provide the best<br />

possible public service benefits<br />

for its users."<br />









Working from home provided many<br />

benefits for both employers and their<br />

employees. The ability to work from<br />

the comfort of their own homes and skip the<br />

time-consuming commutes has enabled<br />

people to cut costs and simultaneously increase<br />

comfort. Employers could focus on internal<br />

operational efficiency while reducing overhead<br />

costs of their companies, and employees could<br />

concentrate on working comfortably without<br />

fear of work being disrupted.<br />

However, with these shifts and the ever-growing<br />

societal drive towards efficiency,<br />

companies are being propelled towards new<br />

technologies to facilitate these changes. There<br />

is a need to implement virtual desktop<br />

infrastructures (VDI), creating an accessible<br />

workspace anytime, anywhere, that can drive<br />

the productivity of workforces in a simpler,<br />

cheaper and easier way that also integrates<br />

with cloud-based resources. Cloud technology<br />

is one of the innovative technologies<br />

pioneering this adaptation to virtual working.<br />



While it has many benefits, the difficulties<br />

surrounding remote working landscapes cannot<br />

be ignored, and it is the duty of enterprises to<br />

find ways to combat them for a productive,<br />

proactive and protected workforce to exist.<br />

A number of these problems are centred<br />

around the provision of<br />

adequate portable devices to a company's<br />

workforce to enable them to work from home<br />

efficiently. This creates financial spikes which<br />

many businesses cannot afford or sustain.<br />

There is a vital need for in-house software and<br />

devices connected to work servers through<br />

cloud integration so that day-to-day actions<br />

can be carried out seamlessly, without the<br />

provision of multiple devices and software.<br />

With industries becoming increasingly competitive, companies must find innovative ways to engage the global market successfully while their employees work away from the office. Businesses are finding that investing in VDI removes the dependency on localised environments and moves desktop management into a virtual environment instead.



Access to VDI aids not only big business but also smaller organisations with less funding. Cloud technology is pivotal in letting companies offer their employees the ability to work from home seamlessly for long periods, from any location globally. The cloud allows the virtual desktop to exist without companies, big or small, having to provide laptops and other devices to each employee during remote working: with an internet connection, staff have constant access to the cloud, and therefore to their work servers, from a single device.

As the ramifications of remote working were weighed up last year, studies showed that 59% of workers believed that in-office working felt substantially more cyber secure than working remotely from home. However, connecting to a specific desktop environment from one secure server makes online traffic far easier to monitor, and security patches simpler to apply. Cyber attacks on cloud-hosted systems are also far harder to achieve, as cloud platforms use more robust cybersecurity measures than a single household PC relying on internal hard drives. In addition, a centralised system ensures regular back-ups, so employees cannot lose data by forgetting to save and back up their files.
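The centralised back-up idea can be pictured with a small sketch. Everything here is hypothetical (the function name, folder layout and retention count are invented for illustration, not any vendor's product), but it shows the principle of timestamped snapshots with automatic pruning:

```python
import shutil
import time
from pathlib import Path

def take_snapshot(profile_dir: Path, backup_root: Path, keep: int = 5) -> Path:
    """Copy a user's profile into a timestamped snapshot folder, then
    prune the oldest snapshots so that at most `keep` remain."""
    target = backup_root / str(time.time_ns())  # sortable, collision-safe name
    shutil.copytree(profile_dir, target)
    for old in sorted(backup_root.iterdir())[:-keep]:
        shutil.rmtree(old)
    return target
```

Run on a schedule by the central system rather than by the user, even a scheme this simple means a forgotten save never costs more than one snapshot interval of work.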

As well as being safer and more accessible, VDIs also provide much more flexibility. Troubleshooting and installations can be carried out by system administrators without detriment to employees, and any upgrades that are needed can be actioned with no effect on the end user's productivity.


Overall, the remote working landscape is becoming commonplace and, accelerated by the recent pandemic, is gaining popularity globally. Contractors and mobile employees are better able to access the files and resources they require in order to do their jobs effectively and efficiently. Distance is no longer an issue, nor is the type of device available to them to undertake their tasks.

VDIs increase simplicity, allowing tasks to be completed faster and with a smoother user experience while keeping sensitive data and personal information secure. The future of enterprise is sure to find success in hybrid cloud ecosystems and virtual desktops that do not break the bank.

Cloud computing has certainly made the VDI landscape more inviting, and with the level of scalability it provides, the possibilities for work infrastructure keep expanding. VDI is now an integral part of many organisations, making IT strategies across the globe more accessible and more secure. NC






The pandemic has rapidly accelerated the pace of digital transformation, and software has become vital to how we work, live, and learn. As the world becomes more digitised and dependent on digital products, the quality of software has been put in the spotlight.

With rapid digitalisation showing no sign of slowing, software-based innovation and development will continue. And with poor software quality estimated to have cost the US economy a staggering $2 trillion in 2020, organisations must find a way to balance speed of release with software quality.

To understand more about software quality, we asked Dr. Gareth Smith, Keysight's General Manager of Software Test Automation, to explain why software quality now determines business success and how organisations can take steps to improve it.

1. Why is software quality important?

For the last decade, organisations have focused on releasing new apps and services as quickly as possible to keep up with rapidly changing demands and support digital transformation. With the push for speed of delivery, however, software quality has often lagged behind.

The quality of software is critical in a digital-first world. An undetected flaw can trigger system outages, and a misconfigured cloud platform can result in a data breach or data loss. Software defects drastically increase the cost of development, and once software is released, the cost of finding and fixing a defect is significantly higher than during the design and development phase.

2. How can organisations improve the quality of their software?

With rapid software development, testing and monitoring must be prioritised to provide a frictionless, high-quality omnichannel digital experience that results in successful user outcomes. Next-generation software testing platforms support this by incorporating the latest AI techniques, which learn from real application usage, historical bug patterns, and which application behaviours yield the most critical business outcomes.

These platforms can automatically generate tests that focus on the user journeys in the application that matter most to business success. This end-to-end intelligent test automation within a DevOps framework allows companies to deliver better quality software faster while freeing up teams to increase their productivity.

3. How is DevOps impacting testing strategies?

DevOps is about breaking down silos so that different teams coordinate and collaborate to produce better, more reliable products faster. By adopting a DevOps philosophy, teams gain confidence in the applications they build, are better able to meet customer needs, and achieve business goals faster.

The success of DevOps is intrinsically linked to test automation, as manual testing cannot address an ever-expanding test surface at increasing release frequencies. However, it's not enough to automate a handful of tests or administrative processes. To succeed in the digital age, development and test automation engineers must collaborate with the operations team to ensure software and applications deliver on their ultimate goal of delighting users.

4. How is AI changing test automation strategies?

AI enables test automation to move beyond simple rule-based automation, using algorithms to efficiently train systems on large data sets. Through reasoning, problem-solving, and machine learning, an AI-powered test automation tool can mimic human behaviour and reduce testers' direct involvement in mundane tasks.

Intelligent test automation evaluates the functionality, performance, and usability of digital products rather than simply verifying code. It incorporates AI, ML, and analytics to test and monitor the user experience (UX), analysing apps and real data to auto-generate and execute user journeys. The result is a smarter way to continuously test software and apps, whatever they are running on.

AI-based tools eliminate overlapping test coverage, optimise existing testing efforts with more predictable testing, and accelerate the progress from defect detection to defect prevention. This, in turn, improves software quality.
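As a rough sketch of this kind of prioritisation (the journey names, counts and scoring rule below are invented for illustration, not Keysight's algorithm), observed usage and defect history can be combined to decide which user journeys an automated test generator should cover first:

```python
def rank_journeys(usage_counts, bug_counts, top_k=3):
    """Score each user journey by how often real users take it, weighted
    up when it has a history of defects, and return the top_k journeys
    to automate first."""
    journeys = set(usage_counts) | set(bug_counts)
    scores = {j: usage_counts.get(j, 0) * (1 + bug_counts.get(j, 0))
              for j in journeys}
    return sorted(scores, key=lambda j: (-scores[j], j))[:top_k]

usage = {"login": 1200, "checkout": 900, "search": 700, "profile_edit": 50}
bugs = {"checkout": 4, "profile_edit": 2}
priority = rank_journeys(usage, bugs)  # checkout first: busy and bug-prone
```

Real platforms learn far richer signals than two counters, but the principle is the same: test effort follows where users actually go and where things have actually broken.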




5. Why is there a shift towards continuous quality?

With the reliance on digital, testing must shift from a verification-driven activity to a continuous quality process. Teams must build quality into every phase of software development and automate the process. Continuous quality means adopting a systematic approach to finding and fixing software defects throughout the entire software development lifecycle (SDLC). It reduces the risk of security vulnerabilities and bugs by helping to find and fix problems as early as possible.

6. To improve the quality of software, do you need to add more technical resources?

No. AI is making the process of designing, developing, and deploying software faster, better, and cheaper. It's not that robots are replacing programmers; rather, AI-powered tools make project managers, business analysts, software coders, and testers more productive and more effective, enabling them to produce higher-quality software faster and at a lower cost.

Here at Keysight, our intelligent automation platform lets citizen developers easily use our no-code solution, which draws on AI and analytics to automate execution across the entire testing process. It empowers domain experts to become automation engineers. The AI and ML take on scriptwriting and maintenance, as a machine can create and execute thousands of tests in minutes, unlike a human tester.

7. What are some of the future trends you expect to see related to software quality?

The importance of software quality will continue to grow as the pace of digital adoption accelerates. Every digital organisation must continuously monitor the performance of its digital properties and how users interact with them to ensure it delivers the best possible experience. Here are five trends that we believe will happen in the world of QA in the next three years:

1. Quality Assurance will become a profit centre rather than a compliance function. Unless your software is released first, with an amazing UX, flawless functionality and great responsiveness, your business will likely struggle or fail; if you manage to achieve those goals, you will succeed. Leveraging QA to continuously measure this and predict a hit or a miss makes it a profit centre, not just a compliance function.

2. User experience is the key differentiator for your business. Your UX is your shop window: it draws your customers in and needs to keep them there. It had better be excellent, or you'll be left behind.

3. Performance. If you have performance delays of greater than three seconds at any point, your business will fail. Millennials have little patience, and Generation Z has even less; three seconds is the amount of time your customers will tolerate a delay before heading to a competitor. Better, continuous load and performance testing is needed to ensure scale and responsiveness.

4. The digital nemesis. Testing must become even smarter. A digital nemesis can intelligently find the weak spots in any system using AI-powered "chaos engineering", highlight them, and allow them to be fixed before anyone ever knows. This applies to functionality, performance, UX and security.

5. End-to-end fusion testing, from hardware to UX. Gone are the days of testing one layer of your stack or one type of testing: testing the 5G handset, the 5G base station, the network load, the application's ability to handle load, functional testing, API testing, performance testing, security testing, testing on iOS, on Android, in the cloud, on Windows, and so on.

But what about testing the entire end-to-end system, with all its layers, end-to-end workflows and interaction points? Without doing so, we never truly test the system as it runs in production, and we can never truly isolate a problem, because it might not happen without the interaction between different layers or under different interacting test conditions. So now we need to take testing to the next level, with multi-layer fusion testing: bringing together the skills of the hardware, network, software and UX testers into one end-to-end framework. NC
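The three-second budget in trend 3 can be expressed as a simple gate in a load-test harness. The transaction names and timings below are invented; real tools measure percentiles and much more, but the budget check itself is this simple:

```python
def over_budget(samples_ms, budget_ms=3000):
    """Given observed response times (ms) per transaction, return,
    sorted, the transactions whose worst case breaches the budget."""
    return sorted(name for name, times in samples_ms.items()
                  if max(times) > budget_ms)

observed = {
    "login":        [420, 610, 550],
    "search":       [180, 240, 2900],
    "join-meeting": [850, 3400, 1200],  # one sample blows the budget
}
slow = over_budget(observed)  # ["join-meeting"]
```

Run continuously, a check like this turns the three-second rule from a slogan into a release gate.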

About the author

Dr. Gareth Smith leads Keysight's software test automation group. Previously Gareth was CTO at Eggplant, the pioneer in intelligent test automation, acquired by Keysight in June 2020. Gareth has a rich history of innovation in software, serving in leadership roles at Apama, Software AG and Progress Software. NC






With pandemic-driven remote and hybrid work becoming a long-term fixture for many businesses, mass migration to cloud services and the network edge has put a large strain on corporate network services. In addition, large-scale performance interferences make it harder for ITOps teams to identify and resolve problems. Smart Edge Monitoring (SEM) solutions are the most effective way to fix these disruptions.


Numerous IT organisations have moved past the initial events of the COVID-19 pandemic with new ways to keep up with the changing needs of business technology infrastructure, such as SaaS adoption, digital transformation, hybrid work models, and cloud migrations. These events have sparked the wide-scale adoption of cloud services and rapid migration to the network edge. The repercussions, however, have been poor network performance and a lack of visibility, which slow the rate at which problems within networks can be resolved.

Many IT teams are struggling to keep up with new challenges that include increased maintenance costs, inconsistencies in network access permissions, low-quality service performance, lack of controls, and poor visibility. When new cloud services are integrated into an established environment, using these capabilities becomes much less straightforward. To meet the needs of today's dynamic infrastructure, IT teams need full visibility from all angles.

The cyberthreats, network outages and service delays that IT teams must mitigate can have widespread implications for business continuity. Additionally, an IT team's credibility is put at risk when network services seriously fail. When new endpoints, remote devices, providers, and applications are introduced to an established network, it only becomes more difficult for IT teams to manage everything at once. As if this wasn't complicated enough, the cost of sustaining and overseeing networks also increases.


Solutions like Smart Edge Monitoring (SEM) give full visibility throughout multi-cloud environments to mitigate poor performance. SEM supports general ITOps management by monitoring and identifying issues within a digital network environment across organisational and technological boundaries, ensuring a high-quality end-user experience that is accessible from any location, service, or network for users within an organisation.

Multi-vendor environments are complex and pose one of IT's most difficult challenges in detecting and resolving related problems. SEM tackles this through integrated analysis, which can quickly recognise what the end user is experiencing and detect exactly what issue is occurring and why. With this capability, the solution delivers considerable reductions in the time to resolve application issues across video, data, VoIP, UCaaS, and SaaS.
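The kind of end-to-end correlation described here can be pictured with a toy example (the segment names and figures are invented for illustration, not a vendor API): given the latency measured across each hop of a transaction, find the segment responsible for most of the delay.

```python
def slowest_segment(path_latency_ms):
    """Given per-segment latencies (ms) for one user transaction,
    return the worst segment and its share of the total delay."""
    total = sum(path_latency_ms.values())
    worst = max(path_latency_ms, key=path_latency_ms.get)
    return worst, path_latency_ms[worst] / total

hops = {"home_wifi": 12, "isp_access": 20, "wan": 35, "saas_provider": 310}
segment, share = slowest_segment(hops)  # the SaaS side dominates the delay
```

A real SEM platform correlates far more than latency, but localising the fault to a single segment of the path is the core of what cuts resolution time.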



Organisations can gain far-reaching visibility throughout various application domains with SEM solutions, as they remove the obstacles preventing optimum network performance. Additionally, SEM can run network performance analyses and troubleshoot errors across dense multi-cloud environments.

As global companies continue with their hybrid workforce models, SEM can detect performance disruptions at an early stage within a transaction ecosystem. Its reach extends anywhere, from a user's home network to the WAN, data centre, database servers, and SaaS and cloud providers, to identify the cause of an issue.

Cloud migrations create inconsistencies in visibility which can result in post-launch performance disputes. SEM can examine the user experience before, during, and after migration, and draw on cloud-based applications to support and resolve any issues. Users working from home, in remote offices, in regional branches, or on the main campus often experience poor network performance when using programmes such as Microsoft Teams; SEM can detect the cause of problems ranging from poor call quality and audio interference to join-meeting delays.


Information collected across multiple clouds and the network edge offers IT teams a more vivid picture of cloud services at multiple levels. More accurate cloud service surveillance and faster detection of cloud-related problems effectively speed up repairs. This data is also helpful during thorough analyses, resulting in shorter problem-solving times, earlier threat detection and active network optimisation. To achieve this complete picture, IT teams should consider investing in high-quality solutions and prioritise bandwidth deployment, device status, traffic movement, and the end-user experience. NC









Pre-pandemic, the enterprise edge was typically a regional office. But as work habits change, we are becoming a remote office of one: working from multiple locations, with a day or two in the office or at a customer site, and the rest of the time working from home.

But such a wholesale shift to very fluid, hybrid and remote working demands that IT teams ensure instant access to resources, files, and applications, wherever the employee is working. However, these dynamic hybrid working models are often hamstrung by unseen costs coming from different directions.

Take business-critical applications: their performance can be degraded if massive amounts of data are recalled from a public cloud service. And if IT has to maintain multiple file copies and backups across different offices, this quickly consumes capacity licences. Meanwhile, globally distributed teams working on the same datasets simultaneously often face latency and file version control issues, which could be calamitous during a complex design project or a big tender.

In a recovering economy, organisations must ensure they have a practical hybrid working model and enough staff to handle increased workloads. But they also need more effective strategies for managing their data and optimising remote working at scale. Enterprises adapted skilfully to the pandemic's early phases; now, with so many resources deployed at the network edge, they need smart thinking to make these operations more profitable in the longer term.

This shift to working optimally at the edge while controlling costs is being accelerated by new cloud-native storage and file system infrastructures, which enable organisations to store, protect, synchronise, and collaborate on files at any scale, across any number of global offices. Companies can now consolidate data across multiple public clouds, modernise their apps without hurried rewrites, and deliver faster application performance without being tripped up by cost and capacity hurdles as demand returns.

This advance is being achieved in three main ways. First, these platforms enable IT teams to cut business and storage process costs by eliminating unnecessary copies of data, without compromising their ability to recover from outages and ransomware. For example, a UK charity migrated to a single, cloud-native global file system at the start of the pandemic and standardised information for 1,500 employees.

Staff still have fast access to files, whether they worked from home during restrictions or flexibly afterwards, because the new platform's edge appliances cache files locally. In addition, the IT team has been able to do away with time-consuming processes to replicate workflow data or provide backups to a secondary data centre for disaster recovery, freeing IT resources for more productive work.

Second, cloud-native storage standardises information across the globe so that teams in scientific research or engineering, as well as organisations adopting new machine learning and IoT applications, can collaborate on the same datasets or enterprise applications, irrespective of their location and as operations are scaled up.

In a striking example, a global engineering firm that had previously suffered project delays, as end users struggled to access big design files over the WAN, replaced its data silos with a global storage and file system. Company engineers at locations worldwide now enjoy LAN-speed access to the same files: they're working at the edge, but doing so more productively across what is now a virtual, global office.

Thirdly, cloud file storage means that organisations can deal with ransomware attacks without operations being disrupted or any ransom being paid. New cloud-based file storage systems allow business-critical files to be 'rolled back' to the exact point before an incident and quickly restored locally. In our emerging world, where distributed workforces are the norm, organisations know their critical knowledge assets are safe from an attack or outage, and employees can resume productive work much sooner, sometimes in minutes.
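The 'roll back' behaviour can be sketched as choosing, for each file, the newest version saved before the incident. This is a deliberately simplified model (the timestamps and contents are invented, and real global file systems do far more), but the selection logic looks like this:

```python
def restore_point(versions, incident_time):
    """versions maps each file path to a history of (timestamp, content)
    pairs. Return, for every file, the newest version saved strictly
    before the incident, skipping files with no clean version."""
    restored = {}
    for path, history in versions.items():
        clean = [(ts, data) for ts, data in history if ts < incident_time]
        if clean:
            restored[path] = max(clean)[1]
    return restored

versions = {
    "tender/plan.doc":  [(100, "v1"), (205, "v2"), (990, "<encrypted>")],
    "tender/costs.xls": [(150, "v1"), (995, "<encrypted>")],
}
clean_files = restore_point(versions, incident_time=990)
```

Because every version is immutable in the cloud, the encrypted copies written by the attacker simply never get selected.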

Cloud-native file storage is enabling a new economy where we can work at the edge and do so more dynamically and profitably. NC






The business interests and technology objectives of organisations can no longer operate in silos. They must come together with the shared goal of delivering the best possible digital experiences to users, regardless of where they are located. This can only happen if all stakeholders work together to modernise application delivery and connectivity, but achieving this can be a challenge to coordinate.

Let's look at why business and technology executives are working towards the same goal, what barriers they are encountering, and how they can push past those barriers to build new business value, enable innovation and meet customer requirements.

OPTIMISING APP PERFORMANCE

Applications are essential to customer and employee experiences. Customers rely on apps to stream content, make purchases, manage their accounts, seek support and engage with brands. Likewise, employees have never relied more on apps to communicate with each other and remain productive.

Optimal app performance is therefore business-critical, directly impacting employee efficiency as well as customer satisfaction, brand loyalty, and revenue. Achieving it requires resilience, performance, agility, and scale. These are recognisable IT objectives, but they are now just as relevant to business leaders, and if they want to deliver superior user experiences, business and technology executives must collaborate.



To ensure NetOps and DevOps teams have the resources they need to innovate, business and technology executives should consider three important factors: modern customers, modern applications, and the need for a modern "connectivity fabric". Customers today are highly distributed, using applications on multiple devices; what they share is a demand for superior app performance. By delivering this, companies improve loyalty and give themselves a foundation to grow their customer base.

Such a decentralised audience demands a distributed application delivery infrastructure with minimal latency. The best approach is to use a network of multiple cloud and CDN providers that can bring content and resources close to users, with redundancy built in should one cloud or CDN provider go down. Bringing all this together is the "connectivity fabric": the underlying foundational technologies that support application infrastructures so that the expectations of every audience are served, regardless of location or device.
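The redundancy idea can be sketched as preference-ordered failover across providers. The names and health table below are invented; real multi-CDN steering also weighs latency, cost and geography, but the fallback principle is this:

```python
def pick_provider(preference_order, health):
    """Return the first provider in preference order that reports
    healthy, falling back down the list if the favourite is down."""
    for name in preference_order:
        if health.get(name, False):
            return name
    raise RuntimeError("no healthy provider available")

providers = ["cdn_a", "cdn_b", "cloud_c"]
status = {"cdn_a": False, "cdn_b": True, "cloud_c": True}
serving = pick_provider(providers, status)  # cdn_a is down, so cdn_b serves
```

In practice this decision runs continuously against live health checks, so a provider outage is routed around before most users notice.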



The main barrier to achieving this is existing hybrid, non-cloud-native architecture split between on-site and cloud. This might mean organisations have to carry the cost of maintaining legacy infrastructure, including the core network, while seeking a way to deliver apps optimally in a world that demands a more flexible, distributed approach.

Some businesses use technologies that solve specific problems but fail to integrate. Others are overwhelmed by the massive amounts of data their infrastructure generates. There can also be a misconception that these issues can be managed by employing a larger workforce; the truth is that only automation can make effective millisecond-to-millisecond decisions based on massive amounts of data.


Companies must change how they think and push their strategies out to the edge if they want to remain competitive by providing the best digital experiences. Edge connectivity allows resources to be delivered as close as possible to the people who need them, across multiple platforms. When considering infrastructure, IT and business teams must also shift to a "cloud-native" mentality, even as part of a hybrid environment; the cloud allows for rapid scaling, both up and down, based on real-time demand.

To ensure that DevOps and NetOps teams can quickly and reliably access resources, it is essential to decentralise access to core network services. This introduces agility, avoids the bottleneck of waiting for a central authority to coordinate access to infrastructure and network resources, and boosts autonomous innovation while still upholding security best practices.

Business and technology executives want to deliver applications to customers in a resilient, distributed fashion and keep their development and network teams innovating. Moving away from a siloed approach, aligning objectives, and working together will create a strong foundation for the future of their organisations. NC

At NS1, Reggie Best spearheads efforts supporting product strategy, messaging, and positioning, while building relationships with key stakeholders across all business units to ensure success.










PERFORMANCE ISSUES

The shift towards hybrid working has led to a rise in the adoption of SaaS, cloud services and distributed applications. As a result, it's now more important than ever that organisations know where applications are moving throughout their infrastructure, and how best to manage and control them to deliver optimal performance.

No longer limited to a central location, enterprise applications are growing in volume, complexity, and distribution, all of which increases the pressure to ensure performance, reliability, and security. Today's modern application stack requires a modern network. For network operators, this means a shift towards application-aware networks, with detailed reporting and intelligence to route applications down the best path.


In today's digital economy, application experience can make or break a business. Yet achieving visibility over applications isn't easy, and the time taken to troubleshoot, identify the root cause of a latency or performance issue, and develop a resolution can eat up valuable hours. Greater visibility from an application-aware network allows businesses to understand and fix application issues faster, saving the time and cost of traditionally complex troubleshooting.


A fundamental building block for creating an<br />

application-aware network is the<br />

implementation of SD-WAN (software-defined<br />

networking in a wide area network) that gives<br />

visibility over applications and enables<br />

organisations to control and direct traffic<br />

intelligently and securely from a central<br />

location across the WAN.<br />

Unlike traditional WAN architectures which<br />

lack the central visibility and control required<br />

for distributed IT environments, SD-WAN<br />

delivers a step change for businesses,<br />

providing the agility for changes to be made<br />

to multiple devices simultaneously at the push<br />

of a button, saving time and increasing efficiency.<br />

Organisations can enforce their own policy,<br />

based on user experience, with priority given to<br />

the most business-critical applications so they<br />

avoid problems such as jitter, lag or brownouts.<br />

And because they are able to reduce the time<br />

required for configuration and troubleshooting,<br />

businesses employing SD-WAN<br />

benefit from significant operational cost<br />

savings. As more organisations adopt SaaS<br />

and cloud-based services, SD-WAN and<br />

application-aware networking are therefore<br />

becoming business-critical necessities.<br />
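To make this concrete, here is a minimal sketch, in generic Python, of how application-aware path selection can work; the application classes, thresholds and path metrics are hypothetical, not any vendor's API:<br />

```python
# Illustrative sketch (not any vendor's API): how an application-aware
# SD-WAN controller might classify traffic and steer it down the best
# path. Policy names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    latency_ms: float
    jitter_ms: float

# Per-application thresholds, most latency-sensitive first.
POLICY = {
    "voip": {"max_latency_ms": 150, "max_jitter_ms": 30},
    "saas": {"max_latency_ms": 250, "max_jitter_ms": 80},
    "bulk": {"max_latency_ms": 1000, "max_jitter_ms": 500},
}

def best_path(app, paths):
    """Prefer the lowest-latency path that satisfies the app's policy;
    fall back to the lowest-latency path overall."""
    rules = POLICY.get(app, POLICY["bulk"])
    ok = [p for p in paths
          if p.latency_ms <= rules["max_latency_ms"]
          and p.jitter_ms <= rules["max_jitter_ms"]]
    return min(ok or paths, key=lambda p: p.latency_ms)

paths = [Path("mpls", 40, 5), Path("broadband", 25, 60)]
print(best_path("voip", paths).name)   # mpls: broadband fails the jitter rule
print(best_path("bulk", paths).name)   # broadband: lowest latency, policy met
```

In a real SD-WAN, classification and path measurement run continuously and policy is pushed from a central controller; the sketch shows only the selection step.<br />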


In fact, the application-aware network requires<br />

the use of tools such as SD-WAN. By<br />

understanding what applications are used<br />

across the network, organisations can classify<br />

and apply appropriate application tuning to<br />

ensure optimum performance for each user.<br />

However, application-aware networking can<br />

also work alongside an edge computing<br />

strategy to drive further efficiencies.<br />

Edge computing is the confluence of cloud<br />

and physical data, which exists wherever the<br />

digital and physical world intersect, and<br />

enables data to be collected, generated, and<br />

processed to create new value. It either works<br />

independently from SD-WAN and application-aware<br />

networking, or in conjunction to enable<br />

organisations to identify and prioritise<br />

application traffic.<br />


Spurred on by the pandemic, the need for<br />

application-aware networks has never been<br />

more prominent. As networks become<br />

increasingly software-defined, businesses have<br />

an opportunity to use greater levels of<br />

application intelligence to improve business<br />

connectivity, efficiency, and performance.<br />

By combining the operational and visibility<br />

benefits of application-aware networks and<br />

SD-WAN, with the latency benefits of edge<br />

computing, organisations can deliver a<br />

more reliable and consistent experience to<br />

users, regardless of the strength of network<br />

connection. At the same time,<br />

organisations can benefit from quicker<br />

troubleshooting, ultimately reducing strain<br />

on workloads and freeing teams to focus<br />

on more strategic priorities. NC<br />



SolarWinds SQL<br />

Sentry<br />




Vast numbers of businesses rely on<br />

Microsoft SQL Server to deliver essential<br />

services, but performance issues and<br />

downtime will result in a poor customer<br />

experience and loss of revenue. Performance<br />

monitoring in larger organisations is a primary<br />

role of database administrators but to achieve<br />

smooth operations, they cannot rely on<br />

manual diagnostics.<br />

There are plenty of database monitoring tools<br />

on the market and SolarWinds SQL Sentry<br />

stands out as not only does it provide full<br />

visibility across SQL Server, Azure SQL<br />

Database and SQL Server Analysis Services,<br />

but also monitors and reports on Windows<br />

Hyper-V and VMware hosts. This allows it to<br />

provide a complete picture of all your<br />

databases and associated host systems,<br />

making it easy to troubleshoot, fix problems<br />

and optimise performance.<br />

It's simple to deploy: we loaded the SQL<br />

Sentry client and portal on a Windows Server<br />

2019 Hyper-V VM in thirty minutes. It will<br />

require a separately purchased SQL Server<br />

database to store all diagnostics, performance<br />

and reporting data, but for our lab testing<br />

environment we used the free SQL Server 2019<br />

Express which worked fine.<br />

Further configuration is undemanding: after<br />

a brief onboarding process, we started<br />

declaring monitored targets to the SQL Sentry<br />

client. Once we'd added details of our local<br />

SQL Server hosts, the client checked for<br />

availability of Windows metrics such as CPU,<br />

memory, processes and storage activity and<br />

then added them to the explorer menu to the<br />

left for easy selection.<br />

SQL Sentry started monitoring our databases<br />

immediately and revealed a wealth of valuable<br />

information in its central pane. Graphs are<br />

provided for database activity, waits, memory<br />

usage and I/O plus host system network, CPU,<br />

memory and disk activity.<br />

You can see all the action in real time or<br />

choose a specific time period by selecting start<br />

and end dates and times from the upper ribbon<br />

menu. We particularly liked the colour coded<br />

graphs as we could easily see whether specific<br />

activities were being caused by a database<br />

instance or other non-related host processes.<br />

You can zoom in and out of the panel, and<br />

if you drag the mouse across an area of<br />

interest in one graph, SQL Sentry<br />

automatically highlights the relevant areas in<br />

all the others. Under each database instance<br />

in the left pane are options to view top SQL,<br />

blocking SQL or deadlock metrics in an<br />

Outlook-style calendar. You can drill down<br />

deeper for more information.<br />

SQL Sentry neatly solves the knotty problem of<br />

deadlock diagnosis as you can view these in<br />

considerable detail. The playback feature is<br />

quite brilliant as this shows you the sequence of<br />

events that caused the deadlock, successful and<br />

unsuccessful lock requests, code rollbacks and<br />

what the victim was.<br />

Proactive alerting features are provided by<br />

linking general, failsafe, audit and advisory<br />

conditions with actions. Choices for the latter are<br />

extensive: if a condition threshold is breached,<br />

SQL Sentry can execute a program, script or<br />

SQL command, log events, issue an SNMP trap<br />

and send emails to multiple recipients.<br />
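The condition-and-action model can be pictured with a small generic sketch (plain Python, not SolarWinds code; the threshold and action here are hypothetical):<br />

```python
# Rough sketch of the condition-to-action pattern described above
# (generic Python, not SolarWinds code): a threshold condition is
# linked to a list of actions that all fire when it is breached.

log = []

def email_action(value):
    log.append(f"email: CPU at {value}%")   # stand-in for a real mailer

def make_alerter(threshold, actions):
    """Return a check(value) callable that runs every linked action
    whenever the measured value breaches the threshold."""
    def check(value):
        if value > threshold:
            for act in actions:
                act(value)
    return check

check_cpu = make_alerter(90, [email_action])
check_cpu(75)   # below threshold: nothing fires
check_cpu(97)   # breach: the email action fires
print(log)      # ['email: CPU at 97%']
```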

The SQL Sentry portal provides a great<br />

overview of your entire monitored environment<br />

with its home page presenting a handy health<br />

score, along with at-a-glance status views of<br />

each monitored database and Windows server.<br />

Alerts can be pulled up for a chosen date<br />

range and custom dashboards created for<br />

database and host activity.<br />

SolarWinds SQL Sentry provides the<br />

information businesses demand to ensure their<br />

databases run smoothly and won't impact on<br />

customer services. It's easy to deploy, provides<br />

an incredible amount of information about<br />

database, OS, virtualisation and cloud<br />

operations, and its proactive alerting ensures<br />

minor database issues won't turn into<br />

productivity-sapping emergencies. NC<br />

Product: SQL Sentry<br />

Supplier: SolarWinds<br />

Web site: www.solarwinds.com<br />

Price: £1,100 per monitored database exc VAT<br />









The number of businesses using<br />

connected technology and the Internet<br />

of Things (IoT) is growing at a fast<br />

pace. These days, most organisations are<br />

using some kind of IoT technology in their<br />

day-to-day operations.<br />

Improved connectivity provides businesses<br />

with almost unlimited benefits, from greater<br />

efficiency and lower overheads to greater<br />

potential for profitability. However, it also<br />

introduces new avenues for cybersecurity<br />

attacks. The cost of connectivity is that attackers<br />

with nefarious intentions are looking to exploit<br />

vulnerabilities in IoT technology. That's why it's<br />

imperative to conduct a risk assessment.<br />

Assessing risk in your organisation is a<br />

continuous process of discovering<br />

vulnerabilities and detecting threats - from the<br />

individual, to individual devices,<br />

applications, sites, data networks and the<br />

organisation as a whole.<br />

In an ever-evolving cybersecurity threat<br />

landscape, security is not a one-time action.<br />

Conducting a holistic risk assessment allows<br />

for current and future-forward risk mitigation.<br />

A good risk assessment includes up-front<br />

technical measures along with ongoing<br />

practices that enable organisations to<br />

evaluate their cybersecurity risks and establish<br />

actions and policies that minimise threats<br />

over time. Along with finding vulnerabilities,<br />

it's equally important to prepare staff and<br />

equip the organisation with processes and<br />

practices to respond quickly and efficiently as<br />

soon as a vulnerability is discovered. Security<br />

should come standard with your data<br />

network, and with a virtually impenetrable IoT<br />

network, you can build in defence by default.<br />
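A simple way to picture continuous risk assessment is a scored register; the sketch below, with hypothetical findings and scores, shows the basic likelihood-times-impact ordering:<br />

```python
# Toy risk-register sketch (illustrative only): score each finding as
# likelihood x impact so the riskiest items are treated first.

def prioritise(risks):
    """risks: (name, likelihood 1-5, impact 1-5) tuples.
    Returns names ordered by descending risk score."""
    return [name for name, likelihood, impact in
            sorted(risks, key=lambda r: r[1] * r[2], reverse=True)]

register = [
    ("unpatched IoT sensor firmware", 4, 4),   # score 16
    ("phishing against finance staff", 5, 3),  # score 15
    ("physical theft of a gateway", 2, 4),     # score 8
]
print(prioritise(register)[0])   # unpatched IoT sensor firmware
```

In practice the register is re-scored continuously as new vulnerabilities and threats are discovered, which is what makes the assessment ongoing rather than one-off.<br />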


Your data network should use a zero trust<br />

model to ensure that unknown entities are not<br />

able to gain any access. By design, devices<br />

and users are not automatically trusted.<br />

Instead, the system constantly checks users<br />

and devices when they try to gain access to<br />

any data, at both a network and device level.<br />
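As a rough illustration of the zero trust principle, the sketch below re-verifies both user and device on every request; the token and device registries are hypothetical stand-ins for a real identity and posture service:<br />

```python
# Minimal zero-trust gate (illustrative sketch): the user credential
# AND the device are re-checked on every single request; nothing is
# trusted because a previous request succeeded.

VALID_TOKENS = {"alice": "tok-a1"}   # short-lived user credentials
TRUSTED_DEVICES = {"laptop-42"}      # devices with verified posture

def authorise(user, token, device):
    """Deny by default; allow only when both checks pass."""
    if VALID_TOKENS.get(user) != token:
        return False
    return device in TRUSTED_DEVICES

print(authorise("alice", "tok-a1", "laptop-42"))   # True
print(authorise("alice", "tok-a1", "tablet-7"))    # False: unknown device
```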

END-TO-END E<strong>NC</strong>RYPTION<br />

End-to-end encryption (E2EE) prevents third<br />

parties from accessing data while it's being<br />

transferred from one end device or system to<br />

another. With E2EE in place, only the intended<br />

recipient can decrypt the data being<br />

transferred. Along the way, it's secured against<br />

any tampering from any entity or service.<br />

IoT technology provider Smarter<br />

Technologies owns the private Orion IoT<br />

Data Network, the world's first fully end-to-end<br />

IoT low-power radio network solution.<br />

This unique and proven system was<br />

developed alongside a long-standing<br />

involvement in the tracking and recovery of<br />

high-value assets such as cash in transit.<br />

Smarter Technologies conducted a cyber<br />

risk assessment for the Financial Conduct<br />

Authority (FCA), a financial regulatory body in<br />

the United Kingdom. The FCA receives<br />

information from many sources, including the<br />

Met Police, City of London Police and HMRC.<br />

These information partners required<br />

significant assurances that sensitive<br />

information shared would be appropriately<br />

secured; otherwise, they would stop sharing<br />

information with the FCA.<br />

The FCA were planning to move their<br />

business intelligence and information storage<br />

to a cloud-based platform. They required<br />

assurances as to the risks associated, as well<br />

as a secure migration strategy. Smarter<br />

Technologies conducted a full vendor-agnostic<br />

IS1&2 Risk Assessment complete<br />

with treatment plan, development of a<br />

security strategy and ethical phishing<br />

roadmap. A specific cloud security<br />

assessment fed into the cloud security<br />

strategy. The key findings from both of these<br />

deliverables were detailed in a report for key<br />

stakeholders to discuss the findings and<br />

implement a risk mitigation strategy.<br />



The world continues to be more connected<br />

than it has ever been. A holistic, and ongoing<br />

focus on cybersecurity is a requirement, not a<br />

nice-to-have. Defending against the IoT<br />

threats of today and tomorrow requires<br />

continual risk assessment to secure your IoT<br />

solutions. As the threat landscape evolves, so<br />

must you. Partnering with an expert in risk<br />

management empowers your organisation to<br />

manage risk so that you can focus on<br />

harnessing the true business value of your IoT<br />

solutions and products. NC<br />







A server failure is a potential nightmare for<br />

any business. For an accountancy<br />

practice such as Wilkes Tranter, if<br />

employees are unable to access the applications<br />

they need, client services and profitability are at<br />

risk. Thanks to Superfast IT and Arcserve<br />

ShadowProtect, when one of Wilkes Tranter's<br />

vital servers failed, the firm experienced less than<br />

30 minutes of downtime. With minimal impact<br />

on employee productivity, Wilkes Tranter was<br />

able to continue to serve its clients, safeguarding<br />

satisfaction and profitability.<br />

From IT to cloud, Superfast IT helps its<br />

customers establish and manage solutions that<br />

are secure, reliable and cost-effective. From its<br />

office in Stourbridge, the company works with<br />

more than 65 clients across the West<br />

Midlands, Worcestershire, Staffordshire and<br />

Shropshire. Superfast specialises in IT support,<br />

managed cyber security, cloud, backup,<br />

Microsoft 365 and connectivity to help<br />

business owners simplify IT.<br />


Superfast's customers include Wilkes Tranter,<br />

which offers accountancy, audit and tax<br />

services to local businesses across multiple<br />

industries including construction,<br />

manufacturing, retail and service. The<br />

company's 36 employees use their specialist<br />

skills to provide clients with first-class<br />

services and advice to help them develop<br />

and grow their businesses. As an<br />

accountancy practice, employees use a wide<br />

variety of business applications that are<br />

accessed via remote desktops to simplify IT<br />

management and updates.<br />

With billing generated from the re-charge of<br />

time, Wilkes Tranter cannot afford for its IT<br />

systems to be unavailable. "Even an hour's<br />

downtime could impact profitability for the<br />

accountancy firm," confirms James Cash,<br />

Managing Director at Superfast IT. "An IT<br />

outage would also be hugely frustrating for<br />

employees and could affect client satisfaction."<br />

So when the company's remote desktop server<br />

suffered a hardware failure and employees<br />

were unable to access their critical applications,<br />

Superfast IT had to act fast to get them back up<br />

and running as quickly as possible.<br />


Superfast IT has worked with StorageCraft for<br />

more than 10 years to ensure all its clients have<br />

reliable backup capabilities. "Arcserve<br />

ShadowProtect has been a crucial tool for us for<br />

the last 10 years," explains Cash. "We've<br />

standardised all our clients on ShadowProtect as<br />

it's so flexible. We can virtualise backup images<br />

on the fly, restore to alternative hardware and<br />

even use it to resolve issues with laptops, as we<br />

can use ShadowProtect to port the operating<br />

system to another device while we fix the<br />

original." In total, Superfast IT protects more than 70 TB<br />

of data across all its clients with ShadowProtect,<br />

taking hourly backups as standard. These<br />

backups include 2 TB of data from Wilkes<br />

Tranter's essential accounting applications, tax<br />

processing systems, email and file servers. Using<br />

ShadowProtect, Superfast IT backs up servers to<br />

local storage, then replicates data to its own<br />

datacenter and a secondary offsite location.<br />

Thanks to these measures, when Superfast IT<br />

received an automated alert early one<br />

morning that a Wilkes Tranter server was<br />

down, it was able to respond quickly. Superfast<br />

IT engineer Mark Poulding identified remotely<br />

that the RAID card had failed and promptly<br />

went onsite to restore services. "We needed a<br />

replacement part before we could repair the<br />

server, which would take some time," he<br />

explains. "Using the ShadowProtect backup,<br />

however, I was able to virtually boot the server<br />

from my laptop and run all the essential<br />

systems from there with immediate effect." The<br />

replacement server part was delivered that<br />

afternoon and the server was back up and<br />

running later that day.<br />


Thanks to ShadowProtect and Superfast IT,<br />

Wilkes Tranter experienced just 30 minutes of<br />

downtime instead of eight hours. Employees<br />

were able to continue working without loss of<br />

billing hours, and clients were completely<br />

unaware of any issues. "Working with Superfast<br />

IT and ShadowProtect provides us with<br />

confidence that when an IT incident occurs, we<br />

can keep serving our clients," comments James<br />

Ellwood, Director at Wilkes Tranter. "It is vital to<br />

business continuity." The ability to help its clients<br />

restore business continuity so quickly is key to<br />

Superfast IT's reputation and competitive<br />

advantage. With ShadowProtect, the team has<br />

great visibility and control over backups and the<br />

replication process across all its clients and can<br />

rapidly recover their systems and data should<br />

they need to. It is also extremely scalable.<br />

"Arcserve's MSP subscription model means we<br />

can quickly provision and terminate servers<br />

when we update client infrastructure," adds<br />

Cash. "We have a great relationship with<br />

Arcserve, which we see continuing into the<br />

future so we can continue to provide our clients<br />

with peace of mind that their critical systems<br />

and data are protected." NC<br />





We believe that you - the readers of Network Computing - have always been splendid judges.<br />

Very shortly we will be asking you to put forward the products, companies and people that<br />

have most impressed you for the Awards of 2022.<br />

Look out for further announcements from us.<br />



We invite you to put forward your customer success stories for the NETWORK PROJECT OF<br />

THE YEAR. Also, independent product reviews must be completed by the end of May in order<br />

for a solution to be a contender for the BENCH TESTED PRODUCT OF THE YEAR. The<br />

review process itself can take around a month so you should make a booking sooner rather<br />

than later. Contact dave.bonner@btc.co.uk<br />

A BIG THANK YOU.........<br />

For becoming the Chief Event Sponsor for the Awards of 2022










It is no secret that many businesses find it<br />

difficult to break away from legacy IT<br />

systems in favour of more modern data<br />

architectures. But like many relationships on<br />

the rocks, getting the confidence to actually<br />

initiate the breakup is the biggest challenge.<br />

So, for any UK businesses needing that<br />

extra push or final reason to break up with<br />

legacy systems, such as Apache Hadoop, let<br />

me explain why you should rip the plaster<br />

off. Simply put, these systems can be hard to<br />

manage and costly. They are incredibly<br />

resource-intensive, requiring highly<br />

skilled people to manage and operate the<br />

environment. With exponential data growth<br />

across many businesses, and the need for<br />

more advanced analytics like machine<br />

learning (ML) and artificial intelligence (AI),<br />

there will be fewer advanced analytics<br />

projects deployed in production on older<br />

software such as Hadoop.<br />




So, how can UK businesses build a new<br />

fit-for-purpose data architecture and where<br />

can they start?<br />


If you speak to many CIOs, the<br />

shortcomings of Hadoop are<br />

acknowledged and understood. According<br />

to a global Databricks and MIT study<br />

surveying chief data officers, chief analytics<br />

officers and chief information officers, 50%<br />

said they are currently evaluating or<br />

implementing a new data platform to<br />

address their current data challenges. A<br />

common problem lies in presenting the<br />

alternative data architectures that are<br />

available, and how CIOs can migrate<br />

seamlessly. Making that first jump away<br />

from on-premise can be a daunting task<br />

and if a new migration is deemed<br />

unsuccessful, too slow or too costly, this<br />

could have serious ramifications.<br />

The future lies in a modern data and AI<br />

architecture that can seamlessly scale and<br />

go hand in hand with the cloud. It also<br />

needs to be straightforward to administer<br />

so that data teams can focus on building<br />

out use cases, not managing infrastructure.<br />

Crucially, the architecture needs to provide<br />

a reliable way to deal with all kinds of data<br />

to enable predictive and real-time analytics<br />

use cases.<br />

A lakehouse platform is increasingly<br />

becoming the architecture of choice. It<br />

provides a structured transactional layer to<br />

a data lake to add data warehouse-like<br />

performance, reliability, quality, and scale<br />

but for all data. It allows many of the use<br />

cases that would traditionally have required<br />

legacy data warehouses to be<br />

accomplished with a data lake alone.<br />

So, migrating to a lakehouse platform<br />

sounds ideal, but many CIOs will be asking<br />

how easy it is to get to this point. Let us<br />

look at some simple steps to take when<br />

migrating off legacy systems.<br />

1. Get talking<br />

Before any successful migration happens,<br />

data teams, CIOs and CDOs need to talk<br />

about it. Some logical questions to start<br />

with are 'Where are we now?' and 'Where<br />

should we be?' Teams can then go away<br />

and assess the state of the current<br />

infrastructure and plan for a new one.<br />

There will undoubtedly be a lot of<br />

experimentation and new learnings at this<br />

early stage. Organisations that want to<br />

undertake a successful migration need to<br />

have the right conversations internally to<br />

understand why their business wants to<br />

migrate, who needs to be involved and<br />

how the migration fits into an overall cloud<br />

strategy, to name but a few things.<br />

2. Run a migration assessment<br />

It's not going to be an 'instant spark' with<br />

any migration project. The most realistic<br />

approach for most will be to migrate<br />

project by project. Organisations will have<br />

to understand what jobs are running and<br />

what the code looks like. In many<br />

scenarios, organisations will also have<br />

to build a business case for any<br />

migration, including calculating the<br />

cost for a new lakehouse platform,<br />

for example.<br />

3. Get the technical building<br />

blocks right and evaluate<br />

At the technical phase, businesses<br />

need to think through their target<br />

architecture and ensure it will support<br />

business needs for the long term.<br />

Typically, the process entails mapping older<br />

technologies to new ones or simply optimising<br />

them. Organisations must also take stock of<br />

how to move their data to the cloud with the<br />

workloads. Finally,<br />

organisations need to carry out some form<br />

of evaluation, target demos and<br />

production pilots to approve an approach<br />

for the new data architecture.<br />

4. Execute<br />

The final thing to consider is the actual<br />

execution phase. Migration is not easy;<br />

however, getting it done right the first time<br />

is essential to how quickly the organisation<br />

can start to scale its analytical practices,<br />

cut costs and increase data team<br />

productivity. To ensure continuity,<br />

businesses should consider running<br />

workloads on both their old system and the<br />

new data architecture, ensuring everything<br />

is identical. Over time, the decision can be<br />

made to completely cut over to the new<br />

data architecture and decommission the<br />

use case from the older one completely.<br />
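The dual-run step above can be sketched as follows; the two job functions are hypothetical stand-ins for real workloads:<br />

```python
# Sketch of the dual-run check described above: execute the same job
# on the legacy system and the new platform, and only cut over once
# the outputs agree. Both job functions are hypothetical stand-ins.

def legacy_job(rows):
    return sorted(r["amount"] for r in rows)

def lakehouse_job(rows):
    return sorted(r["amount"] for r in rows)

def dual_run_matches(rows):
    """True only when both pipelines produce identical results."""
    return legacy_job(rows) == lakehouse_job(rows)

sample = [{"amount": 10}, {"amount": 3}]
print(dual_run_matches(sample))   # True -> safe to widen the cut-over
```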

As organisations look to empower their<br />

teams to do more with data and AI, it is time<br />

for UK businesses to stop just thinking about<br />

adopting a new data architecture and have the<br />

confidence to finally take this step. The longer<br />

organisations wait to 'rip the plaster off' and<br />

make this move, the more painful it will feel<br />

when they cannot keep up with growing customer<br />

expectations and competitive pressures. NC<br />







Unsurprisingly, 2021 saw no<br />

shortage of cybersecurity<br />

moments. Attacks that affected<br />

millions of people made headlines<br />

recurringly, as companies grappled with<br />

the aftermath of disruption and breaches.<br />

The Colonial Pipeline cyber attack,<br />

alongside a series of other high-profile<br />

ransomware attacks, dominated the<br />

media and conversations. However, there<br />

were countless other significant incidents<br />

- with the potential for far-reaching<br />

privacy, regulatory and even human<br />

safety implications - that didn't make<br />

headlines. While most were<br />

overshadowed by competing news, or<br />

simply brushed aside, it's now time to<br />

take another look at these attacks, as<br />

many could provide lessons still waiting<br />

to be learnt. Here are the three most<br />

significant cyber attacks that merit<br />

reflection:<br />


ATTACK<br />

Beware of widespread vulnerabilities in<br />

industrial control systems.<br />

In February 2021, a threat actor<br />

attempted to poison the water supply of a<br />

Florida city. Reminiscent of a Hollywood<br />

movie scene, the cursor on a local water<br />

plant operator's computer screen began<br />

moving independently and accessing<br />

applications that controlled water<br />

treatments. The attacker behind this<br />

allegedly boosted the concentration of<br />

sodium hydroxide in the water by a factor<br />

of 100.<br />

No one was injured as a result of the<br />

operator's prompt discovery and<br />

immediate steps to stabilise the water<br />

levels. However, the "might haves"<br />

loomed large, and the attack underlined<br />

how serious cybersecurity issues within<br />

critical infrastructure remain.<br />

For a variety of reasons, the public<br />

utilities industry is particularly vulnerable<br />

to threat actors. For one thing, much of<br />

the infrastructure that controls industrial<br />

control systems - the systems supporting<br />

key services - was developed in the<br />

1980s or 1990s. Because of the crucial<br />

nature of utility operations, the creators<br />

of these systems had to prioritise system<br />

availability and interoperability over<br />

security. As these systems got more<br />

integrated with internet-connected IT over<br />

time, they became more appealing<br />

targets for hackers.<br />

Despite increased spending on<br />

cybersecurity operations and<br />

maintenance by both the government and<br />

private sector, many utility firms are still<br />

struggling to keep up with increasingly<br />

sophisticated and highly targeted attacks.<br />

And the stakes are high; public safety is<br />

potentially in danger, as proven by this<br />

episode, in addition to negative publicity,<br />

brand harm, and hefty regulatory fines.<br />

"Unfortunately, that water treatment<br />

facility is the rule rather than the<br />

exception," wrote Christopher Krebs,<br />

former director of the US Cybersecurity<br />

and Infrastructure Security Agency (CISA),<br />

following the attack. "Even the basics in<br />

cybersecurity are often out of reach when<br />

a business is battling to make payroll and<br />

keep systems running on a generation of<br />

technology produced in the last decade."<br />





Don't underestimate the dark side of IoT.<br />

The Internet of Things (IoT) provides<br />

threat actors a large attack surface and<br />

continues to pose a daunting cybersecurity<br />

problem for businesses, with billions of<br />

connected devices (and counting).<br />

Attackers infiltrated Verkada, a cloud-based<br />

video security firm, in March 2021,<br />

demonstrating how IoT devices, like other<br />

sensitive network assets, pose a danger.<br />

The attackers were able to traverse through<br />

live feeds of over 150,000 cameras<br />

stationed in factories, hospitals,<br />

classrooms, jails, and more, while also<br />

obtaining sensitive footage belonging to<br />

Verkada software clients, using authentic<br />

admin account credentials found on the<br />

internet. It was later confirmed more than<br />

100 people within the organisation had<br />

"super admin" access, each of whom could<br />

access thousands of customer cameras -<br />

demonstrating the potential dangers of<br />

overprivileged users.<br />

Fortunately, the incident caused only minor<br />

damage, but things could have been much<br />

worse. The breach was only the tip of the<br />

iceberg, demonstrating how dangerous<br />

unsecure IoT may be. This has raised new<br />

questions and fuelled ongoing privacy<br />

debates about how surveillance technology<br />

should be used, how sensitive data - such as<br />

bedside footage of a hospital patient or<br />

proprietary manufacturing processes in action<br />

- should be stored, and how access to this<br />

data should be managed.<br />

While the incident did not receive much<br />

attention, it should not be overlooked. As<br />

daily life becomes more networked, the<br />

subject of "who watches the watchmen" will<br />

undoubtedly resurface.<br />


Understand the importance of least<br />

privilege access.<br />

Twitch, a popular video game streaming<br />

network, was the subject of a potentially<br />

catastrophic data breach in October 2021.<br />

Threat actors allegedly took the platform's full<br />

source code, as well as 125GB of sensitive<br />

data, including top user payout information,<br />

and leaked it online in order to "promote<br />

further disruption and competition in the online<br />

video streaming industry."<br />

The problem was prompted by a "server configuration change that permitted improper access by an unauthorised third party," according to a corporate statement. Such misconfigurations, particularly in cloud-based environments, are very common and can open a path to sensitive assets such as source code and other intellectual property. Traditional change control procedures for maintaining correct configuration are exceedingly problematic in the cloud because of its dynamic nature.
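Misconfigurations of this kind can often be caught before a change reaches production with a simple automated audit. The sketch below is hedged: the configuration keys are invented for illustration and bear no relation to Twitch's actual settings, but the pattern - flag any server config that grants anonymous or world-open access - applies broadly:

```python
# Toy configuration audit: flag risky key/value pairs before a server
# configuration change is deployed. The keys here are illustrative only.
RISKY_SETTINGS = [
    ("allow_anonymous", True),
    ("public_read", True),
]

def audit_config(config: dict) -> list:
    """Return a list of warnings for settings that widen access."""
    warnings = []
    for key, value in config.items():
        if (key, value) in RISKY_SETTINGS:
            warnings.append(f"risky setting: {key}={value}")
        # An allow-list of 0.0.0.0/0 is effectively no allow-list at all.
        if key == "allowed_cidrs" and "0.0.0.0/0" in value:
            warnings.append("risky setting: allowed_cidrs is world-open")
    return warnings
```

Running a check like this in the deployment pipeline turns "improper access by an unauthorised third party" from a post-incident discovery into a blocked change request.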

While Twitch later stated that user passwords and bank account information were not accessed or disclosed as a result of the incident, privacy-conscious users were not waiting to find out. On the day the news broke, global web searches for "how to delete Twitch" increased by 733 per cent, implying the platform's popularity could suffer as a result of the hack. The attack highlighted the numerous issues businesses face when it comes to safeguarding cloud environments, as well as the importance of least privilege access in reducing risk and defending against internal and external attacks.

As it is often said, "history doesn't repeat itself, but it sure does rhyme". These 2021 cyber attacks faded from view fast, but the battle on the cyber front continues. As cyber tactics evolve and threat vectors multiply, a look at the past gives us valuable lessons that are critical to future wins. NC

WWW.NETWORKCOMPUTING.CO.UK @NCMagAndAwards JANUARY/FEBRUARY 2022 NETWORKcomputing 33






Much has been written about the financial costs of network downtime. Many businesses have lost millions of pounds to outages and the resulting fines, and depending on the industry and the length of downtime incurred, the reputation of the business can also take a hit, leading to further financial damage. What has been less extensively reported, however, is that outages also have a significant impact on every organisation's most valuable asset - its staff.

Through the pandemic, the wellbeing of staff has, or at least should have, become a top priority for every organisation. The global safety assurance specialist Lloyd's Register surveyed 5,500 individuals across 11 countries to understand the impact of the changing working conditions caused by COVID-19. Its report found that 69% of employees reported higher levels of work-related stress while working from home, driven by increased workloads and changes to working patterns.

Against that backdrop of strained mental health, the stress of coping with an outage and its aftermath, including having to deal with unhappy or angry customers, can prove all but unbearable for service staff. Many IT support teams, for example, have had a great deal to cope with through the pandemic. From the outset they had to provide remote access and IT support for other remotely-located employees, often while themselves adapting to homeworking and the isolation it can bring.

Many organisations rushed to implement cloud computing in the early days of COVID. For all the manifest benefits of cloud, mistakes and misconfigurations were inevitable, potentially giving hackers opportunities to exploit. At the same time, with many networks under growing strain from increased traffic and surges in demand as digitalisation accelerated, the potential for outages also grew. Just keeping networks operational has been an ever-present concern for these staff.

Cybersecurity threats have been on the rise since COVID-19, and the resulting outages and downtime can take their toll on engineers facing long journeys to investigate them, followed by a battle against time to get systems up and running again. With travel still restricted in parts of the world, sending engineers out to remote sites to address downtime issues may still risk compromising their health and safety, or entail the need for quarantine.

Over and above the financial drivers, organisations also need to consider the human cost of outages. That in turn highlights just how important it is that, when disruption occurs, companies have an IT business continuity plan that enables them to recover quickly. Above all, they need to ensure their network is resilient.

One priority must be ensuring businesses have visibility, and the agility to pivot as problems occur. Many are not proactively notified if something goes offline. Even when they are aware, it may be difficult to understand which piece of equipment at which location has a problem. To resolve errors, an organisation might first attempt a quick remote system reboot. If that does not work, there may be a problem with a software update or some other serious issue. That is where the concept of Out-of-Band management comes into play.
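The triage flow just described - confirm the device is offline, try a quick remote reboot, and escalate only if that fails - can be sketched as a short loop. The device interface here is hypothetical; a real deployment would call a vendor's management API rather than the stand-in class below:

```python
# Hypothetical triage loop: reboot an unresponsive device remotely,
# escalating to an engineer only if the reboot does not restore it.
def triage(device, max_reboots: int = 1) -> str:
    if device.is_online():
        return "healthy"
    for _ in range(max_reboots):
        device.remote_reboot()
        if device.is_online():
            return "recovered by remote reboot"
    return "escalate: dispatch engineer or use out-of-band access"

class FakeDevice:
    """Stand-in for a managed device; comes back online after one reboot
    when `recovers` is True, otherwise stays down."""
    def __init__(self, online=False, recovers=True):
        self.online = online
        self.recovers = recovers
    def is_online(self):
        return self.online
    def remote_reboot(self):
        if self.recovers:
            self.online = True
```

The value of automating this flow is that the cheap remedy is always tried first, and an engineer is only dispatched when remote recovery has genuinely been exhausted.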

In this context, when outages occur, organisations can use Smart Out-of-Band (OOB) management to establish an alternative path into the network and start working on resolving the problem, without having to send engineers to the relevant site to fix affected devices in person. The OOB management network is separate from the main production network, so even if the business is infected internally, it will still have a healthy OOB management network.

OOB allows network admins to provision, maintain and manage components such as servers, WAN and security devices, and to resolve malfunctions via secure remote access. If there is an issue with connectivity, out-of-band solutions offer failover, with cellular often providing an alternative to wired connectivity.
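The failover logic is conceptually simple: prefer the wired production path, and drop to the cellular out-of-band link only when the primary is unreachable. A minimal sketch, with illustrative path names and a probe function injected so the selection logic can be exercised without real interfaces:

```python
# Choose a management path: wired link first, cellular out-of-band link
# as failover. `reachable` is a caller-supplied probe function, so this
# selection logic can be tested without touching real network interfaces.
PATHS = ["wired-primary", "cellular-oob"]

def select_path(reachable) -> str:
    """Return the first reachable management path, in preference order."""
    for path in PATHS:
        if reachable(path):
            return path
    raise ConnectionError("no management path available")
```

In practice the probe would be a ping or TCP health check, and the final error branch is exactly the situation Smart OOB is designed to make rare.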

In short, having an effective Smart OOB management network in place will enable the business to securely access the affected network and devices, resolve problems and support business continuity. In addition, a network automation or NetOps approach can help automate responses to specific malicious occurrences, while also providing real-time visibility of events regardless of the production network's state.

Such an approach to delivering network resilience is critical if businesses are to drive network uptime, ensure business continuity and significantly reduce the impact downtime can have on employee health and wellbeing. NC





