STORAGE
MAGAZINE
The UK’s number one in IT Storage
September/October 2025
Vol 25, Issue 5
CASE STUDY:
Silverstone Racetrack revs up
THE RANSOMWARE
PAYMENTS BAN:
Will it work?
WHY AI FACES NEW
GROWING PAINS:
Or the AI pipeline fails
ROLE OF DATA IN
ENTERPRISE AI:
Storage ecosystem needs to
be a solution
COMMENT - RESEARCH - INTERVIEWS - CASE STUDIES - OPINIONS - PRODUCT REVIEWS
Breakthrough
Areal Density
At Your Doorstep
HAMR-POWERED INNOVATION
Reduce TCO and future-proof storage infrastructure.
SUPERIOR POWER EFFICIENCY
3× the power efficiency per TB compared to typical drives.
BREAKTHROUGH AREAL DENSITY
First drive with 3 TB per platter, perfect for AI and big data.
BUILT FOR SUSTAINABILITY
Made with more renewable energy and recycled materials than
any previous Seagate product.
Data
Done Right
Discover data storage
solutions built on trust,
affordability and ease.
+ Storage you can count on
+ Best-in-class value and service
+ Seamless end-to-end integration
+ Operational simplicity
CONTENTS
COMMENT .......................................................................... 4
OPINION: THE RANSOMWARE PAYMENT BAN ..................... 6
Will it work? Commvault offers researched insight
MANAGEMENT: SIX ESSENTIAL CYBER STACK INGREDIENTS ... 8
Infinidat runs through core components of cyber resilience
OPINION: RANSOMWARE HAS EVOLVED, SO MUST OUR DEFENCES ... 10
CTERA cautions on the next frontlines
CASE STUDY: REVVING UP RELIABILITY ............................. 12
Silverstone Racetrack overhauls server infrastructure with Synology
OPINION: LEAP TO QUANTUM COMPUTING ........................ 14
Innovec highlights why you don’t have to be a quantum physicist to understand quantum computing
TECHNICAL: OPEN-E AND WESTERN DIGITAL TEAM UP ....... 18
Brings NVMe-based data storage boost
OPINION: WHY AI FACES NEW GROWING PAINS ................. 20
How PEAK:AIO is overcoming AI growing pains at the Zoological Society of London
MANAGEMENT STRATEGY: THE FUTURE IS STILL SPINNING ... 22
Toshiba strategises on how hard drives still remain indispensable in data centres
ROUNDTABLE: EMERGING MEMORY STRATEGIES ................. 24
Expert perspectives gathered in one roundtable
TECHNICAL: STORAGE IS TRANSFORMING AI ..................... 32
Supermicro considers the role of data in enterprise AI
CASE STUDY: NATIONAL LIBRARY OF SCOTLAND ................ 34
Trusts cultural treasures to Scality
COMMENT
EDITOR: Sharon Munday
editor@storagemagazine.co.uk
SUB EDITOR: Mark Lyward
mark.lyward@btc.co.uk
REVIEWS: Dave Mitchell
PUBLISHER: John Jageurs
john.jageurs@btc.co.uk
LAYOUT/DESIGN: Ian Collis
ian.collis@btc.co.uk
SALES/COMMERCIAL ENQUIRIES:
Lucy Gambazza
lucy.gambazza@btc.co.uk
Stuart Leigh
stuart.leigh@btc.co.uk
MANAGING DIRECTOR: John Jageurs
john.jageurs@btc.co.uk
DISTRIBUTION/SUBSCRIPTIONS:
Christina Willis
christina.willis@btc.co.uk
PUBLISHED BY: Barrow & Thompkins
Connexions Ltd. (BTC)
Suite 2, 157 Station Road East
Oxted, RH8 0QE
Tel: +44 (0)1689 616 000
Fax: +44 (0)1689 82 66 22
SUBSCRIPTIONS:
UK £35/year, £60/two years,
£80/three years;
Europe: £48/year, £85/two years,
£127/three years;
Rest of World: £62/year
£115/two years, £168/three years.
Single copies can be bought for £8.50
(includes postage & packaging).
Published 6 times a year.
No part of this magazine may be
reproduced without prior consent, in
writing, from the publisher.
©Copyright 2025
Barrow & Thompkins Connexions Ltd
Articles published reflect the opinions
of the authors and are not necessarily those
of the publisher or of BTC employees. While
every reasonable effort is made to ensure
that the contents of articles, editorial and
advertising are accurate, no responsibility
can be accepted by the publisher or BTC for
errors, misrepresentations or any
resulting effects.
While most of the country was watching the spectacle of President Trump's
second state visit, those of us in data storage should have had our eyes
elsewhere: the announcement of a UK/US Tech Prosperity Deal bringing £31
billion in fresh US investment into UK data and cloud infrastructure.
For our industry, the detail is what really matters. Microsoft has pledged £22 billion
over the next four years for AI and cloud projects, including a new supercomputer hub
in Essex with Nscale. Nvidia will deploy 120,000 GPUs across UK data centres - one
of its largest European rollouts. OpenAI's "Stargate UK" initiative promises expanded
capacity and local GPU deployments, while Google is set to invest around £5 billion in
a new Waltham Cross data centre and scaling DeepMind's AI research. Importantly,
this isn't all confined to London and the South East: an AI Growth Zone in
Northumberland is on the cards, bringing 5,000 jobs and new energy-backed
infrastructure to Blyth.
This surge in US investment is more than political theatre. It could mark a pivotal
moment in the UK's bid to anchor sovereign compute and storage on home soil -
though "could" is the key word, as it comes with all the familiar challenges of power,
cooling, and resilience that we often grapple with in Storage Magazine.
Against that backdrop, this issue continues our focus on forward-thinking articles and
commentary. The standout research comes from Commvault, with a timely UK survey
that throws the spotlight on government plans to enforce ransomware non-payment
across both public and private sectors under the forthcoming Cyber Security and
Resilience Bill. Supermicro explores the role of customised large language models in
enterprise AI, showing how techniques like RAG can enable rapid, reliable data
retrieval. And still on the AI theme, PEAK:AIO explains how to deliver petabytes of raw
information to GPU cores without bottlenecks - and why trust in the resulting models is
vital when lives may depend on them.
This Autumn issue is stacked with insights - turn the page and see for yourself.
Sharon Munday
Editor, Storage Magazine
EDITOR'S COMMENT:
SHARON MUNDAY, EDITOR,
STORAGE MAGAZINE
CAN YOUR NAS support on-prem, public, private, hybrid, or even air-gapped environments to fit your business needs?
Modernize Your NAS
OPINION: COMMVAULT
THE RANSOMWARE PAYMENT BAN: WILL IT WORK?
DARREN THOMSON, FIELD CTO EMEAI AT COMMVAULT, CONSIDERS THE IMPACT ON UK PUBLIC AND
PRIVATE SECTORS OF MANDATORY RANSOMWARE REPORTING AND A BAN ON THE PAYMENT OF
RANSOM DEMANDS.
Following a public consultation, the UK
Government has confirmed it intends to
press ahead with introducing a ban on
ransomware payments for the public sector
and critical national infrastructure providers. It
is also considering a new 'payment prevention
regime' that will require organisations not
covered by the ban to notify the government
of their intention to pay the ransom demands
of cyber criminals.
Designed to "target the business model that
fuels cyber criminals' activities", the proposed
policy aims to dismantle the ransomware
model by reducing the financial incentive for
ransomware attacks; a move the Government
hopes will improve operational resilience
across the public sector by making the vital
services the public rely on a "less attractive
target for ransomware groups". If enacted, the
new legislation would make the UK the first
country to legally ban ransomware payments
by the public sector and regulated
organisations that oversee critical national
infrastructure (CNI).
While primarily targeted at public bodies and
CNI operators, the ban has significant
implications for the private sector too. The
introduction of a new mandatory reporting
regime for ransomware incidents will make
private sector organisations subject to an
oversight mechanism that, while it doesn't ban
payments, is likely to discourage payment by
increasing accountability. Meanwhile, suppliers
to the public sector must wait to see if they will be
included in the scope of the ransomware ban.
TAKING A STAND
The UK Government's proposal is being widely
viewed as a world-leading approach to
deterring ransomware attacks by cutting off the
flow of payments to criminals.
In an ideal world, prohibiting ransomware
payments should help curb the effort of the
ransomware gangs targeting schools, local
authorities, and hospitals. Since ransomware
only works if victims pay, making ransomware
payments 'off limits' would reduce the
attraction of public sector targets for financially
motivated attackers.
It's a viewpoint that is widely supported by the
UK private sector looking to break the
ransomware cycle and deprive the
ransomware ecosystem of 'fuel'. According to
our recent research, 94% of UK business
leaders supported the introduction of a
payments ban for public entities. An impressive
99% were also in favour of extending the ban
to private organisations too.
Of those supporting the Government's
proposed payments ban for the public sector, a
third (33%) believe this would decrease the
prevalence of attacks by reducing the incentive
for attackers. A further third (34%) believed it
would lead to increased government support
and intervention to safeguard cyber resilience.
However, the research also reveals that the
private sector's likely compliance with a
ransomware ban is uncertain. Three-quarters
(75%) of business leaders admitted that if the
ban was extended to the private sector, they
would still pay a ransom if it were the only way
to save their organisation, regardless of
whether civil or criminal penalties applied.
Only 10% were able to say with conviction that
they would comply in the event of an attack.
This ambivalence highlights the conundrum
confronting private sector firms. While no
organisation wants to pay a ransom, the
research highlights many feel they will have
little choice but to pay and risk criminal
charges if the company's survival is at stake.
THE LAW OF UNINTENDED
CONSEQUENCES
The UK Government's proposed payment
prohibition measures for the public sector
could have unintended consequences. Since
public sector entities will no longer offer rich
pickings, the risk is that attackers will double
down on targeting the private sector. The fear
here is that they will focus their efforts on
pursuing organisations deemed the least
equipped to protect themselves, such as SMBs
and non-profits.
The recent and devastating experience of KNP, a 158-year-old UK logistics company that employed around 700 people, underlines the impact an attack can have on an SMB. Unable to pay a ransom estimated to be in the millions, the company had no other option but to close.
Faced with declaring bankruptcy, some organisations may decide to make payments in secret. Alongside forcing ransomware payments 'underground', this could potentially expose organisations to further extortion should cybercriminals subsequently threaten to publicise these payments.
Given that ransomware attacks are all but inevitable and the decision to pay a ransom is fraught with ethical, legal, and practical challenges, the only way forward for private sector organisations is to increase their investment in prevention, detection, and rapid recovery technologies that strengthen their cyber resilience.
Maintaining data integrity and availability will also be crucial for restoring operations and reducing the risk of reinfection. In this respect, immutable, air-gapped backups and regular recovery point testing and validation are vital for ensuring the availability of clean data for restoration.
A BUSINESS-FIRST APPROACH: THE
MINIMUM VIABILITY MODEL
Given that recovery from a cyberattack takes
24 days on average and that 43% of UK
businesses report they have experienced a
cyber breach or attack in the past 12 months,
maintaining essential services during a
cyberattack has become an absolute must-have.
That means having a clear and
actionable plan in place to restore critical
systems, data and processes.
Minimum viability, or a 'minimum viable
company', is a top-down business-led
approach that enables organisations to
prioritise the recovery of core operations until
full recovery can be achieved. By protecting
the key systems, assets, processes, and people
needed to maintain essential services during a
cyberattack, organisations can continue to
operate when disruption hits and protect their
long-term viability.
Developing a plan for minimum viability starts
well before an attack and involves the
identification of what truly matters for
continuous business. In other words, the
fundamental applications and services that
must stay secure and operational at all times.
Typically, this will include communication
platforms such as email and collaboration
tools, financial and customer-facing systems,
and core operational workflows.
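By way of illustration only - this is not Commvault's methodology, and the system names and recovery targets below are hypothetical - a minimum viability plan can be captured as a small, reviewable inventory that ranks systems by recovery priority:

```python
# Hypothetical sketch of a "minimum viable company" recovery inventory.
# System names, tiers and recovery targets are illustrative only.

from dataclasses import dataclass

@dataclass
class CriticalSystem:
    name: str
    tier: int            # 1 = must be restored first
    rto_hours: int       # target time to restore the service
    rpo_hours: int       # maximum tolerable data loss

MINIMUM_VIABLE_COMPANY = [
    CriticalSystem("email-and-collaboration", tier=1, rto_hours=4, rpo_hours=1),
    CriticalSystem("customer-facing-portal", tier=1, rto_hours=8, rpo_hours=1),
    CriticalSystem("finance-ledger", tier=2, rto_hours=24, rpo_hours=4),
    CriticalSystem("core-operations-workflow", tier=2, rto_hours=24, rpo_hours=4),
]

def recovery_order(systems):
    """Restore tier 1 first, then by the tightest recovery time objective."""
    return sorted(systems, key=lambda s: (s.tier, s.rto_hours))

if __name__ == "__main__":
    for s in recovery_order(MINIMUM_VIABLE_COMPANY):
        print(f"tier {s.tier}: {s.name} (RTO {s.rto_hours}h, RPO {s.rpo_hours}h)")
```

Keeping such an inventory versioned and reviewed is what turns "minimum viability" from a slogan into something a recovery team can actually execute against.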
Finally, since resilience depends on people as
much as technology, clearly defined incident
procedures and regular scenario-based drills
are essential for evaluating team response
times and continually improving processes.
The goal here is to test the organisation's
readiness and ability to recover from an attack
and identify areas that need strengthening.
PREPARING FOR THE FUTURE
Implemented effectively, the Government's
ransomware payment ban could indeed reduce
the financial incentive for attackers as intended,
but for those organisations without a solid cyber
recovery plan or the time and resources to
recover from an attack, banning payments
brings the potential for existential risk.
Irrespective of where organisations stand on
the value or practicalities of a payment ban,
the importance of improving protection and
recovery capabilities is irrefutable. Especially
when paying a ransom rarely guarantees
recovery and often increases the likelihood of
being targeted again.
By incorporating minimum viability principles
into their resilience planning and recovery
strategies, organisations can significantly
reduce the likelihood of catastrophic
outcomes when an attack occurs. They will
also be able to validate that their core services
can be recovered quickly and safely should
the worst happen.
More Info: www.commvault.com
MANAGEMENT: CYBER RESILIENCE
SIX ESSENTIAL 'CYBER
STACK' INGREDIENTS
ERIC HERZOG, CMO AT INFINIDAT, RUNS THROUGH THE
CORE COMPONENTS OF NEXT-GENERATION CYBER
RESILIENCE: FROM ATTACK MAPPING TO RAPID RECOVERY
Every single second of the day, a cyberattack is taking place somewhere in the world - often several at once. That's according to estimates from a range of professional cyber security experts and tech vendors. A well-known brand or public sector organisation is almost always in the news and facing serious disruption because of a cyber incident. So far this year we've had retailers, hotels, hospitals, airports, schools, and manufacturers - notably in the automotive sector - all forced to shut down their operations to minimise the effects of a cyberattack.
This heightened risk and frequency of
cyberattacks means that finding solutions
to deliver proactive, next-generation
data protection, by employing sophisticated AI-enhanced technology and deep content analysis, is imperative.
This is what we call your 'Cyber Stack'. It
comprises all the essential capabilities
needed by today's enterprises if they
want to ensure their storage systems are
truly cyber resilient. And cyber resilient
storage, that proactively protects data to
withstand an attack and that offers the
ability to rapidly detect a breach and
then restore operations to normal
functioning, is now critical.
This article discusses the six essential
ingredients of cyber resilient enterprise
storage to have in your Cyber Stack and
why. Let's consider each of these in turn to
appreciate what they mean for the
enterprise:
"A well-defined Cyber Stack, proactive detection, and tested response plans are what
separate disruption from resilience."
1. Ability to map the attack's impact and
progress
Understanding how an attack unfolds is
fundamental to being able to control it.
Using tools like Security Operations Centre
(SOC) systems or dedicated storage
cybersecurity software (SIEM or SOAR),
enterprises can track suspicious activity as
it moves through the network, servers and,
crucially, storage. This visibility acts as an
early warning system, provided storage
scanning is also included.
2. Identification of known ransomware
varieties and outcomes:
By leveraging AI-based detection
technologies, supported by regular updates
and global intelligence feeds, it's possible to
ensure rapid recognition of any
ransomware and malware that is already
known to the cyber security community. By
keeping cyber risk databases current,
detection levels continuously improve and
the storage system is able to adapt to
thwart new cyberattack types.
3. Mapping an attack timeline and data
changes with precision
Technology such as the SIEM and SOAR
installed in the data centre, when
combined with storage scanning tools,
enables cybersecurity teams to pinpoint
exactly if, when and how data was
modified. This precision supports both a
secure forensic investigation and timely
damage limitation during
ongoing cyberattacks.
4. Maintaining a full audit trail for
compliance and examination purposes
Comprehensive cyber security solutions
must log every event across servers,
storage, and snapshots. By maintaining an
unbroken audit trail, it becomes possible to
drill down to the individual asset level and
observe what was affected during a cyber
incident. This level of granularity is crucial
for both regulatory compliance
requirements and when performing a root
cause analysis.
5. Dashboard reporting showing key event
information
Dashboards provide clear, actionable views
into key event details. They can verify the
time, type and scope of each attack, plus
provide more detailed information about
immutable snapshots taken that contain
uncompromised data. A best practice
approach is to adopt automated policies
and fully integrate enterprise storage with
other cybersecurity tools. This helps to
ensure quick, visible action is possible, with
policy adaptation as and when needed.
6. Sourcing the latest clean versions of
damaged data for rapid recovery
Rapid recovery after a cyberattack depends
on secure forensic fencing and the ability
to scan immutable data snapshots until a
clean (uninfected) backup copy can be
found. Scanning can either be automated
or completed manually and checked by
application teams. Whatever the approach
taken, it should always prioritise the most
recent data, unless a compromise is
detected, in which case earlier snapshot
versions must be reviewed.
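As a simple illustration of that newest-first scanning logic - a sketch only, not Infinidat's implementation, with the snapshot records and the 'clean' check entirely hypothetical:

```python
# Illustrative sketch: walk immutable snapshots from newest to oldest and
# return the first one whose scan finds no indicators of compromise.
# Snapshot records and the scan check are hypothetical stand-ins.

from datetime import datetime, timedelta

def scan_is_clean(snapshot):
    """Stand-in for deep content analysis / known-ransomware scanning."""
    return not snapshot["compromised"]

def latest_clean_snapshot(snapshots):
    """Prefer the most recent data; fall back to earlier versions on compromise."""
    for snap in sorted(snapshots, key=lambda s: s["taken_at"], reverse=True):
        if scan_is_clean(snap):
            return snap
    return None  # no clean recovery point found - escalate to forensics

if __name__ == "__main__":
    now = datetime(2025, 9, 1)
    snapshots = [
        {"name": f"snap-{i}", "taken_at": now - timedelta(hours=i), "compromised": i < 3}
        for i in range(6)  # the three most recent snapshots contain tampered data
    ]
    clean = latest_clean_snapshot(snapshots)
    print("restore from:", clean["name"] if clean else "no clean copy found")
```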
NOTHING BEATS PROACTIVE, EARLY
DETECTION
These six ingredients are vitally important
to mitigate the effects of a cyber incident,
but nothing can replace the importance of
early detection of an impending attack.
This is always more important than
detection once an attack is underway. In
spite of this, many enterprise storage
companies persist in offering reactive
ransomware/cyberattack detection
capabilities based on "Anomaly Detection"
technology.
These approaches are far less effective
than proactive threat detection - because
once the attack has taken place, it is
most likely too late. The data
compromised has probably already been
written to storage and may even have
been captured as snapshots. And once
compromised data has been backed up,
it becomes a very lengthy process to
identify clean recovery points.
Best practice advice is always to invest in
proactive detection of your data to ensure
that in the event of an attack, you're well
equipped to recover quickly, with
comprehensive forensic reporting. You can
then perform a deep content analysis to
accurately determine "known good" copies
of data and restore them quickly. These
capabilities are one of the most important
elements of a "total storage cyber resilience
solution" that's designed to deliver a "cyberfocused
and recovery-first strategy" - nextgeneration
data protection.
Experience has shown that enterprises
with a well-defined Cyber Stack, proactive
detection, and well-tested response plans
always recover quickly. In contrast, those
lacking these essential ingredients will
struggle to recover and, inevitably,
some never do.
More Info: www.infinidat.com
OPINION: CTERA
RANSOMWARE HAS EVOLVED.
SO MUST OUR DEFENCES
ARON BRAND, CTERA'S CHIEF TECHNOLOGY OFFICER, TELLS US
WHY AI AND HUMAN DECEPTION ARE THE NEXT FRONTLINES IN
THE FIGHT AGAINST RANSOMWARE
Coveware's Q2 2025 ransomware
report landed like a hammer:
average payments have doubled
since last quarter, now sitting at $1.13
million. The ransomware economy has
shifted up many gears.
And ransomware isn't about encryption
anymore. It's about data theft and social
engineering.
For years, the ransomware playbook
was simple: encrypt files, demand
payment for the keys. That model is
fading. Coveware's data shows that in
74% of cases, attackers never bother
encrypting at all. Instead, they quietly
steal sensitive data and use the threat of
exposure as leverage.
Why? Because it works. Exfiltration
requires less effort than deploying full-scale
encryption campaigns, and
reputational pressure on victims to pay
is immense.
Technical defenses aren't what attackers
are working hardest to beat. Instead,
they're working harder to trick people.
Coveware points to groups like Scattered
Spider, Silent Ransom, and Shiny Hunters
using impersonation tactics against help
desks, employees, and service providers.
Phishing, credential theft, and fake
"security prompts" are replacing exploits
and malware as the initial breach vector.
In short: ransomware has gone
human-first.
Backups are still necessary. So are
firewalls, patching, and MFA. But in an
era where attackers impersonate trusted
colleagues over Teams or poison search
results to deliver malware, these measures
are not working. You can't firewall human
trust. You can't patch human behavior.
"Ransomware has gone human-first, so
10 STORAGE Sept/Oct 2025
@STMagAndAwards
www.storagemagazine.co.uk
MAGAZINE
OPINION: CTERA
"Ransomware has gone human-first, so our defence has to
live at the data layer." - Aron Brand, CTO of CTERA
our defence has to live at the data layer.
By combining behaviour-based AI with
built-in deception, we spot suspicious
access early and shut it down before it
becomes a breach."
That's why Gartner introduced the category of Cyberstorage in 2022. Cyberstorage is about extending storage systems beyond resilience and backup into active defense, embedding capabilities like AI-based behavioural ransomware prevention, anomaly detection, and data immutability right where the data lives. The goal isn't just to recover after the fact but to detect, contain, and respond in real time.
FROM WALLS TO TRAPS
To keep pace, defenses need to evolve in
two ways:
AI-driven detection. When credentials
are stolen, only machine learning can
reliably flag the unusual data access,
strange download patterns, or
anomalous file changes that follow.
Humans can't watch that much
telemetry at once.
Deception and honeypots. Instead of
only hardening the castle walls,
organizations need traps inside the
walls. Fake data, decoy servers, and
honeypots waste attacker resources
while alerting organizations before real
damage occurs.
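To make the deception idea concrete, here is a minimal, hypothetical sketch - not CTERA Ransom Protect or CTERA Honeypots - that seeds decoy files on a share and raises an alert the moment any of them is modified or removed; the file names, location, and polling interval are made up.

```python
# Hypothetical sketch of data-layer deception - not CTERA's implementation.
# Decoy names, location and polling interval are illustrative only.

import hashlib
import os
import time

DECOY_DIR = os.path.join(os.getcwd(), "decoy_share")
DECOY_FILES = ["passwords_backup.xlsx", "salary_export_2025.csv"]

def fingerprint(path):
    """Hash a decoy so any change (e.g. encryption by ransomware) is detectable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def seed_decoys():
    """Plant decoy files that no legitimate user or process should ever touch."""
    os.makedirs(DECOY_DIR, exist_ok=True)
    baseline = {}
    for name in DECOY_FILES:
        path = os.path.join(DECOY_DIR, name)
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.write(b"decoy content - any access to this file is suspicious\n")
        baseline[path] = fingerprint(path)
    return baseline

def watch(baseline, interval_seconds=10, cycles=6):
    """Poll the decoys; a missing or modified decoy is an immediate alert."""
    for _ in range(cycles):
        for path, digest in baseline.items():
            if not os.path.exists(path) or fingerprint(path) != digest:
                print(f"ALERT: decoy touched: {path} - isolate the offending session")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watch(seed_decoys())
```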
Attackers are innovating. As defenders we must innovate faster.
This is exactly the philosophy behind CTERA's approach to organizational resiliency. CTERA Ransom Protect, our AI-based ransomware detection, continuously monitors file activity to spot anomalies that look like ransomware or insider abuse. It's about identifying abnormal behavior in real time, before an incident becomes a crisis.
And with CTERA Honeypots, we've built deception directly into the platform. Attackers who probe your environment may think they've found something valuable, but they've actually stumbled into a trap that alerts you instantly. It's defense by manipulation, not just prevention.
CLOSING THOUGHTS
Ransomware has gone human-first, and that means we, as defenders, need to think differently. If the bad guys are playing people, maybe the right move is to start playing them back.
More Info: www.ctera.com
CASE STUDY: SILVERSTONE
REVVING UP RELIABILITY:
SILVERSTONE OVERHAULS SERVER INFRASTRUCTURE WITH SYNOLOGY
The Challenge: Outdated servers and
unreliable disaster recovery
Silverstone Racetrack had inherited a
server platform that had been outgrown by
the organisation. Their Disaster Recovery
solution was particularly difficult to manage
and needed human intervention to run
properly. Silverstone started looking for high
performance in a solution that was
economical, streamlined and efficient,
while also providing Disaster Recovery
that was reliable and easy to manage. A
Synology solution seemed to address these
requirements.
The request was for scalability that would
meet the requirements of the organisation
now and for several years to come.
Minimising downtime in the event of a
problem was also a key requirement.
Silverstone also needed a Disaster Recovery
solution that was dependable and effective.
The Solution: Overhauling server
infrastructure with Synology RackStation for
high-performance storage
Analysis of the existing infrastructure showed
the limitations of the network as well as
identifying disk performance as a key
performance bottleneck. Based on the results
of this analysis, a solution was proposed by
Serveline, a Synology premium partner, to
provide a virtualised server and storage
platform with high availability in place.
The new storage platform was built on
the Synology RS2416RP+, in conjunction with
an all-flash SAN to power the
virtualisation environment.
This solution offered to Silverstone Circuits
provided many advantages. Synology's
RackStations are fully VMware certified and
come with regular software updates to ensure
users can leverage the latest technology
offered by VMware (if one physical server
needs repairing or upgrading, the integration
between the Synology and VMware solutions
allows for the live migration of a Virtual server
without any downtime). Virtualisation reduces
the number of physical servers required by the
organisation, reducing maintenance and
energy costs. Virtual servers are managed,
modified and multiplied much more easily
than physical servers, so the new
infrastructure can easily continue to support
the organisation as it grows and changes.
The Benefits: Real-time backups, effortless
failover, and simpler server management
The new Disaster Recovery solution is a
rapid and fully automated backup solution
protecting the organisation's data. Utilising
the multiple server room locations at the
800-acre Silverstone site, a remote,
automated backup target with real-time file
replication was configured. The data is now
stored in the main building, and replicated to
the secondary server location to provide rapid
recovery in the event of fire, flood or theft.
Silverstone Circuits are now enjoying the
benefits of a robust, high-performing, high-availability
server platform with room for
expansion as the organisation grows, and an
improved, reliable Disaster Recovery solution
which is hassle-free.
About the Customer:
Silverstone, a motor racing circuit in England,
is built on the site of a World War II Royal Air
Force bomber station, RAF Silverstone, which
opened in 1943. The airfield's three runways
- in classic WWII triangle format - lie within
the outline of the present track. Silverstone is
the current home of the British Grand Prix,
which it first hosted in 1948. The 1950 British
Grand Prix at Silverstone was the first race in
the newly created World Championship of
Drivers. The race rotated between Silverstone,
Aintree and Brands Hatch from 1955 to
1986, but relocated permanently to
Silverstone in 1987. The circuit also hosts the
British round of the MotoGP series.
More Info: www.synology.com
Workload Protection Made Easy
Safeguard a wide variety of enterprise workloads via a single console
18M+
SaaS accounts
protected
2.1M+
Endpoints & VMs
protected
OPINION: INNOVEC
OPINION PIECE: QUANTUM LEAP
TO QUANTUM COMPUTING
IAIN WHAM, MANAGING DIRECTOR OF SCOTTISH IT SUPPORT
PROVIDER, INNOVEC, ASKS WHO IS PREPARED TO TAKE THE
QUANTUM LEAP?
The world of information technology is in
constant flux, presenting new and
ongoing challenges for fintech
businesses. In recent times we have seen the
rise of cloud computing, the growth of data
management and the arrival of artificial
intelligence (AI) into every workplace.
Just keeping pace with changes in IT is a
daily challenge. Now, another transformative
wave is on the horizon: quantum computing.
While it might sound like something out of
science fiction, the implications of quantum
computing for even the smallest fintech
business are profound and far-reaching.
Within the next couple of years, quantum
computing will be as relevant to your business
as AI has become, and understanding its
potential and preparing for its arrival is no
longer optional; it's a strategic imperative.
DEMYSTIFYING THE QUANTUM REALM
Quantum computers aren't just faster than
conventional computers, they are designed to
tackle specific, complex problems that are
currently impossible, even for the world's most
powerful supercomputers.
While most quantum computers currently
remain in research labs, outside the budget
of all but the biggest and richest
companies, another type of quantum
computer, called a 'quantum annealer', is
already commercially viable.
More akin to a traditional computer
processor, the quantum annealer is particularly
adept at solving optimisation problems: a vast category of challenges, faced by businesses of all sizes, that involve finding the best possible solution from an enormous number of options.
THE QUANTUM ADVANTAGE
While the most sophisticated quantum
computers are still a decade or more away
from widespread use, the implications for
businesses, including small and local ones, are
already emerging.
Think about the challenges faced by many
small, local businesses. For instance, an
artisan bakery might struggle with optimising its
ingredient sourcing, to minimise waste and
cost while ensuring product freshness.
A courier business might use quantum
computing to organise route planning by
considering real-time traffic, weather,
vehicle capacity, delivery windows, and
even staff availability.
Quantum algorithms can identify the most
efficient routes, saving time, fuel, and
reducing wear and tear on vehicles. A
boutique clothing store could use quantum
insights to ensure they have the right stock at
the right time, maximising sales and
minimising dead inventory.
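As a purely classical illustration of the kind of optimisation problem an annealer targets - no quantum hardware or vendor SDK is involved, and the distance figures are invented - the sketch below uses simulated annealing to pick a short delivery route for the courier example above.

```python
# Classical illustration of the class of problem a quantum annealer targets:
# choose a delivery route that minimises total distance travelled.
# Plain simulated annealing; the distance matrix is made up.

import math
import random

# Hypothetical distances between a depot (0) and four delivery stops.
DIST = [
    [0, 12, 19, 31, 22],
    [12, 0, 11, 25, 30],
    [19, 11, 0, 14, 21],
    [31, 25, 14, 0, 9],
    [22, 30, 21, 9, 0],
]

def route_length(route):
    """Total distance of depot -> stops -> depot."""
    path = [0] + route + [0]
    return sum(DIST[a][b] for a, b in zip(path, path[1:]))

def anneal(stops, steps=20000, temp=50.0, cooling=0.9995):
    current = stops[:]
    random.shuffle(current)
    best = current[:]
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = route_length(candidate) - route_length(current)
        # Always accept improvements; accept worse moves with decaying probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if route_length(current) < route_length(best):
                best = current[:]
        temp *= cooling
    return best, route_length(best)

if __name__ == "__main__":
    route, dist = anneal([1, 2, 3, 4])
    print("best route found:", [0] + route + [0], "distance:", dist)
```

A quantum annealer attacks the same shape of problem (usually expressed as a QUBO), but can explore far larger solution spaces than this toy example.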
For businesses relying on customer loyalty,
understanding individual preferences is crucial.
This could enable a small independent
bookshop to offer highly personalised
recommendations, or a local brewery to craft
targeted marketing campaigns that resonate
with specific customer segments.
Even small businesses deal with financial
transactions and the associated risks.
Quantum computers can perform more
complex financial modelling, leading to
better investment strategies, more accurate
risk assessments, and improved fraud
detection. A small financial advisory firm
could leverage quantum insights to provide
clients with more robust portfolio
optimisation.
CHANGES TO CYBERSECURITY
While the opportunities of quantum
computing are immense, it also presents a
challenge, particularly in the realm of
cybersecurity. One of the most talked-about
implications of quantum computing is its
potential to break current encryption
standards.
Algorithms which can be run on sufficiently
powerful quantum computers could render
much of today's online security obsolete. This
means that sensitive data, financial
transactions, and confidential
communications which are secure today,
could become vulnerable in the future.
This transition is referred to as the "post-quantum cryptography" (PQC) era and it is
crucial to understand that you don't need a
quantum computer to implement PQC.
PQC algorithms are designed to run on
conventional PCs and they offer protection
against quantum threats. Businesses of all sizes
can start to prepare for PQC now, with a
target deadline of 2030 for transitioning
critical systems. For a small business, this might
seem daunting, but it could be a vital step to
protect your digital assets.
WHY QUANTUM IS THE NEXT BIG
THING FOR FINTECH BUSINESSES
As cloud-based quantum computing-as-a-service
(QCaaS) becomes more accessible, the
barrier to entry will decrease. Companies like
IBM, Google, and Microsoft are already
offering access to quantum hardware and
development tools, democratising this
powerful technology.
Ignoring quantum computing now would be
akin to ignoring the internet in the late 1990s
or AI a few years ago.
Those who understand and begin to integrate
quantum thinking into their business strategy
will gain a significant competitive advantage
over their rivals.
PREPARING FOR THE QUANTUM
FUTURE
So, what concrete steps can your business take
today to prepare for the quantum revolution?
Conduct a feasibility analysis: Understand
what quantum computing is and what it can
(and cannot) do for your business. You don't
need to become a quantum physicist; focus on
understanding the types of problems quantum
computers are good at solving, particularly
optimisation and simulation. Consider the
computational requirements and data
complexity of the problems you face.
Start small and experiment: You don't need to
buy a quantum computer - many providers
offer QCaaS, allowing you to access quantum
computing power via the cloud. This is the
most practical way for small
businesses to experiment.
Internal training: Invest in "quantum literacy"
for your core team. This doesn't mean
everyone needs to be a programmer. It means
fostering an understanding of the concepts
and potential applications. Workshops, online
courses, and internal knowledge-sharing
sessions can be highly effective.
External partnerships: Explore collaborations
with universities, research institutions, or
quantum computing companies. Many
universities have departments focused on
quantum technologies and may be open to partnerships that benefit your business by providing access to expertise and cutting-edge research.
Have a transition plan: Start planning how
you will migrate to quantum-resistant
cryptography. This might involve software
upgrades, hardware changes, or working
with your IT provider. Aim to have a plan in
place by 2025-2026 to be ready for the
2030 deadline.
Form a project team or assign a champion:
Designate an individual to stay informed about
quantum developments. This "quantum
champion" can research new applications,
monitor trends, and identify potential
opportunities or threats.
Re-evaluate your business plan: As quantum
computing matures, it will unlock new
possibilities. Consider how these
advancements might impact your business
model. Could you offer new services based on
quantum-enhanced analytics?
SKILLS, TRAINING, AND EQUIPMENT
The primary investment for most small
businesses will be in human capital. This
means allocating budget for training existing
staff, or hiring individuals with an aptitude for
analytical thinking and technology.
Initially, investment in equipment will be
minimal and focused on ensuring your
existing IT infrastructure is robust enough to
access QCaaS platforms and implement
PQC solutions. This might involve
upgrading network capabilities or ensuring
your current systems can support new
cryptographic standards.
Consider investing in partnerships with
quantum computing service providers or
consultants. This can provide access to
specialised expertise without the need for
extensive in-house training initially.
THE QUANTUM FUTURE IS NOW
The growth of quantum computing is not a
distant theoretical concept; it's an evolving
reality that will impact businesses of all sizes.
By understanding its fundamental principles,
identifying potential use cases, and taking
proactive steps to prepare, fintech businesses
can not only weather this technological shift
but also harness its power to innovate,
optimise, and thrive in the coming years.
More Info: https://innovec.co.uk/
THE DOUBLE-LAYER
IMMUTABILITY BACKUP
POWERED BY OPEN-E AND VEEAM ®
Is your data storage really immutable? Discover a new, effortless way to get
a truly immutable, high-performance backup data storage solution you can
count on at all times. Open-E JovianVHR is a Veeam-certified software solution
that empowers your Hardened Repository with an extra layer of security and
performance. Now, you have an extra shield that protects you 24/7.
Discover a brand new solution built to take
your data protection to the next level.
CLOSE THE RANSOMWARE GAP TO ENSURE
YOUR PEACE OF MIND.
www.open-e.com
TECHNICAL:
OPEN-E & WESTERN DIGITAL TEAM UP TO BRING
NVME-BASED DATA STORAGE BOOST
KRISTOF FRANEK, OPEN-E CEO, DELIBERATES ON THE NEED FOR ULTRA-LOW LATENCY, HIGH
THROUGHPUT, AND UNWAVERING RELIABILITY FOR INTENSIVE WORKLOADS
How do you ensure that a data storage
system will keep up with the most
challenging demands? For IT system
administrators pushing the limits of AI, data
analytics, and cloud infrastructure, a fast and
safe storage system remains the number one
essential. At the end of the day, organisations'
success is built on a proper data storage
foundation. So what can be used to enhance
enterprise IT infrastructures and give them a proper boost?
THE GROWING NEED FOR NVME
OVER FABRICS
A key technology at the forefront of this
demand is NVMe over Fabrics (NVMe-oF).
Why is it so essential for modern data
centres? NVMe-oF allows high-speed NVMe
data storage to be accessed over a network,
effectively eliminating the distance limitations
of local storage. This is a revolutionary shift,
enabling more flexible and scalable
architecture. However, as we also know,
speed without reliability is a risk. That's where
the Multipath I/O feature becomes crucial. It
provides multiple connection paths to the
data storage, ensuring that if one path fails,
data can still be accessed through another.
This guarantee of high availability is a non-negotiable feature for any enterprise today.
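For orientation only - this is not Open-E's tooling, and the NQN, IP addresses, and port are placeholders - attaching a namespace over two fabric paths with the standard nvme-cli utility might look roughly like the sketch below, with the kernel's native NVMe multipathing failing over between the connections; exact flags vary by environment and nvme-cli version.

```python
# Rough sketch: connect one NVMe-oF namespace over two network paths using
# nvme-cli, so native NVMe multipathing can ride through a path failure.
# The NQN and addresses below are placeholders, not a real configuration.

import subprocess

TARGET_NQN = "nqn.2025-09.com.example:jbof-namespace1"   # placeholder
PATHS = ["192.0.2.10", "192.0.2.11"]                      # one IP per fabric path

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def connect_all_paths():
    # Kernel-native multipath (CONFIG_NVME_MULTIPATH) is reported under
    # /sys/module/nvme_core/parameters/multipath on Linux hosts.
    for addr in PATHS:
        run(["nvme", "connect",
             "--transport=tcp",          # or rdma, depending on the fabric
             f"--traddr={addr}",
             "--trsvcid=4420",
             f"--nqn={TARGET_NQN}"])
    # Show the subsystem with both paths; losing one leaves I/O running
    # on the surviving connection.
    run(["nvme", "list-subsys"])

if __name__ == "__main__":
    connect_all_paths()
```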
OPEN-E DATA STORAGE SOFTWARE
RESPONDS WITH A CERTIFIED
SOLUTION
Open-E listens to its customers' needs and
directly addresses them. That's why the
company implemented the NVMe over
Fabrics (NVMe-oF) Initiator Target with
Multipath I/O in its latest update, Open-E
JovianDSS Up32. But Open-E didn't stop
there. To ensure this feature delivers a truly
reliable and high-performance solution, the
company teamed up with Western Digital.
Open-E's goal was to create a solution that
works flawlessly, saving its customers time,
resources, and the hassle of compatibility
issues.
As Open-E CEO Kristof Franek said, "Our
customers demand ultra-low latency, high
throughput, and unwavering reliability for
intensive workloads. By listening to these
needs, we partnered with Western Digital to
provide a certified, pre-validated solution with
NVMe-oF and Multipath I/O, ensuring our
customers get a high-performance and
dependable JBOF, certified to work
seamlessly with Open-E JovianDSS."
THE POWER OF PARTNERSHIP WITH
WESTERN DIGITAL
Open-E's collaboration with Western Digital is
a prime example of its commitment to
addressing critical customer needs through
strategic partnerships, resulting in the Western
Digital OpenFlex Data24 4200 NVMe-oF
Platform certification with Open-E JovianDSS
Up32 software. The process wasn't simple, as it
involved a rigorous certification regime with
extensive functional and stability tests. The
Open-E team simulated real-world scenarios
like disk failures, hot-plug/hot-swap events,
and power outages to prove the system's ability
to protect and recover data.
The greatest advantage of this partnership for
both customers and channel partners is access
to a pre-validated solution that directly
addresses real-world business needs. Open-E's
collaboration with Western Digital allowed for
the NVMe-oF protocol to be developed and
rigorously tested within the Open-E JovianDSS-based
environment. This collaborative
approach delivers a new level of confidence to
IT professionals. You can learn more about this
specific solution by downloading the Open-E
Certification Report: https://www.open-e.com/r/8kng/
More Info: www.open-e.com
StorPool Disaster Recovery Engine: First-Ever, Built-In DR Solution for KVM Clouds
StorPool Storage Becomes the First Software-Defined Primary Storage Vendor to Offer a Disaster Recovery Engine for KVM-based Clouds.
Disaster Recovery Engine In Action
• Simple to create and enforce data replication policies
• Automatically creates VM recovery points
• Automates failover and failback between remote sites
• Four protection models available
• No additional licenses necessary
StorPool Storage - Trusted by Service Providers
Ultra-Fast
Storage
Instantly accelerate your
applications with extreme
scalability and latency
consistently under 100µs,
even with heavy workloads.
Architected
for Cloud
Designed for KVM-based
clouds, providing affordable,
always-on, flexible, integrated
storage that delivers measured
uptime above 5 nines.
Fully-Managed
Service
StorPool experts design, deploy,
monitor, and maintain your
storage, freeing your staff from
daily repetitive tasks, allowing
more time for strategic projects.
www.storpool.com
Get Started with StorPool
info@storpool.com
OPINION: PEAK:AIO
WHY AI FACES NEW GROWING PAINS
MARK KLARZYNSKI, FOUNDER AND CHIEF STRATEGY OFFICER, EXPLAINS HOW PEAK:AIO IS HELPING
CLIENTS LIKE THE ZOOLOGICAL SOCIETY OF LONDON OVERCOME AI'S GROWING PAINS
For decades, enterprise IT was
measured by outputs: database
performance, system uptime and
operational resilience. The mission was to
keep businesses running smoothly. Artificial
intelligence has rewritten those rules. In AI,
the focus is no longer on the output of
compute but on the input. Success depends
on feeding GPUs with vast, continuous
streams of data, preserving the integrity of
those inputs, and guaranteeing that the
models built upon them can be trusted.
This fundamental shift was the insight
behind PEAK:AIO's creation. From the
company’s very beginning, PEAK:AIO
recognised that AI was not simply
another workload to be layered onto
traditional IT systems. It represented a new
discipline with different requirements. While
IT teams still concentrate on resilience and
continuity, AI pioneers such as geneticists
identifying new compounds, doctors
detecting early signs of disease,
climatologists simulating extreme weather
and conservationists tracking endangered
species are dealing with data on an
unprecedented scale.
Their challenge is not how to keep a
server alive. It is how to deliver terabytes
and even petabytes of raw information to
GPU cores without bottlenecks, and how to
ensure that the models trained on this flood
of input can be trusted when lives or
ecosystems depend on them.
The Zoological Society of London (ZSL)
illustrates this point clearly. The organisation
has long been at the forefront of using
technology for conservation, from its Instant
Wild programme that enlists citizen
scientists to classify images, to acoustic
sensors that monitor rainforest biodiversity
and affordable satellite tagging of turtles.
These projects share a common thread:
data. Camera traps alone generate millions
of photos, the majority of them empty
frames. Acoustic monitors and video
streams add terabytes more. Before a single
model can be trained, this mass of
information has to be ingested, filtered and
organised. For ZSL, the need was not more
traditional IT infrastructure but a way to
feed GPUs with clean, usable data at the
pace that conservation now demands.
This is where AI's growing pains show most
clearly. In the IT world, performance is
supported by clusters of systems, teams of
administrators and carefully layered
redundancy. In AI, projects often run outside
the data centre, in the field, where
researchers do not have the staff or the
infrastructure to manage complex storage
estates. They need systems that are self-managing,
compact, energy efficient and
able to deliver extreme throughput. They also
face risks that IT has never had to consider.
One example is model poisoning, where
corrupted or misleading data enters a
training set and alters the model's behaviour.
In conservation that could mean a
recognition system misidentifying species,
undermining an entire programme. Once a
model has been created and validated it must
also remain intact as it is shared around the
world. Traceability, the ability to link a model
back to its dataset and guarantee that it has
not been changed, is no longer optional. It is
as critical as the model itself.
PEAK:AIO's answer was to design storage
around these new realities. For ZSL this
meant systems that could hold more than a
petabyte of data within a small, energy
efficient footprint. The performance was
enough to keep GPUs fully occupied
without specialist intervention. Most
importantly, protection was built in.
PEAK:AIO created a fully automated archive
that is immutable and invisible to users,
controlled entirely by the primary AI data
server. No external servers are involved, no
manual steps are required, and yet within
seconds the archive can be reactivated as a
live NVMe pool. For scientists this means
their irreplaceable images and recordings
are secure without any added complexity.
Security becomes inherent to the system
rather than another task to manage.
The change in results was dramatic. Where
ZSL teams had once been able to process
only a few images per minute, they were
now handling thousands. In London's
Hedgehog Watch project, more than 15
million images were processed at speed,
turning what would once have been weeks
of manual effort into rapid, data-driven
insight that could influence conservation
action in real time. With researchers
uploading imagery and video around the
clock from field sites worldwide, ZSL gained
the assurance that their data would not only
reach GPUs at speed but would
remain protected against ransomware,
corruption or tampering.
The implications extend far beyond wildlife.
By linking trained models directly to their
datasets, with unique identifiers and
cryptographic checks, PEAK:AIO ensures
that models are verifiable. In medicine, this
is essential. A diagnostic model for cancer
cannot be allowed to drift from its training
data or be subtly altered. The cost of a
misdiagnosis is measured in lives. In climate
modelling or urban planning, the reliability
of an AI model can determine multi-billion-pound
decisions. Guaranteeing that a
model is unchanged from the one that was
validated is as important as the accuracy of
the initial training.
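The underlying idea can be sketched in a few lines - this illustrates dataset-to-model binding in general, not PEAK:AIO's actual mechanism, and the file names are hypothetical:

```python
# Illustrative sketch: bind a trained model to its dataset with content
# hashes so any later change to either is detectable. Not PEAK:AIO's
# implementation; file names are hypothetical.

import hashlib
import json
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def write_provenance(model_path, dataset_paths, out="provenance.json"):
    """Record which exact bytes (model and dataset) were validated together."""
    record = {
        "model": {"file": str(model_path), "sha256": sha256_of(model_path)},
        "dataset": [{"file": str(p), "sha256": sha256_of(p)} for p in dataset_paths],
    }
    Path(out).write_text(json.dumps(record, indent=2))
    return record

def verify(model_path, out="provenance.json"):
    """True only if the deployed model is byte-for-byte the validated one."""
    record = json.loads(Path(out).read_text())
    return record["model"]["sha256"] == sha256_of(model_path)
```

In practice the provenance record itself would live on immutable storage, so the chain from dataset to validated model cannot be quietly rewritten.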
Mark Klarzynski, Founder and Chief
Strategy Officer of PEAK:AIO, explains it
simply: "Traditional IT was built around
keeping outputs reliable. AI is built around
keeping inputs trustworthy. If you cannot
feed GPUs with fast, secure data, or if you
cannot prove that a model is the one you
validated, then the whole AI pipeline fails.
We designed PEAK:AIO to remove that risk,
to make sure data fuels discovery rather
than undermines it."
This is the real contrast between IT and AI.
IT continues to frame its priorities around
uptime and continuity. Those remain
important, but AI moves at a pace and on a
scale where data is not simply a by-product
of compute but its very fuel. Protecting the
flow of that data, feeding GPUs without
interruption, securing the models that result,
and maintaining their provenance are now
central tasks. In conservation, healthcare and
climate science, the accuracy of a model can
influence the wellbeing of individuals, the
survival of species and the strategies nations
adopt to face environmental change.
The story of ZSL and PEAK:AIO shows what
is possible when storage is reimagined for this
new reality. It is no longer about keeping
systems online. It is about safeguarding inputs,
accelerating discovery and protecting the
decisions that follow. By staying close to the
market and working directly with innovators,
PEAK:AIO is constantly developing solutions
that go far beyond storing information and
ultra-performance. It ensures trust. It delivers
speed without complexity. And it helps solve
AI's growing pains at the very point where they
matter most.
More Info: www.peakaio.com
MANAGEMENT: HDDs AND THE DATA CENTRE
THE FUTURE IS STILL SPINNING:
HOW HARD DRIVES REMAIN
INDISPENSABLE IN DATA CENTRES
RAINER W. KAESE OF TOSHIBA ELECTRONICS EUROPE GMBH
LOOKS AT WHY HDDS REMAIN THE WORKHORSES OF MODERN
DATA CENTRES
In recent years, there have been
predictions of the hard drive's demise.
Yet, this storage medium remains
indispensable in the data centres of
enterprises and cloud providers. And, for the
foreseeable future, this is unlikely to change.
While hard drives may have
disappeared from most consumer
devices - and with that, from the view of
end users - they remain highly prevalent
in data centres. More than that, they
bear the brunt of data storage demands,
as no other storage medium can provide
the direct access and necessary capacity
for AI, video streaming, and other data-intensive
applications as economically
as hard drives. After all, SSDs are still
about five to eight times more expensive
per unit of capacity. Even if their price
were to match HDDs, it would take many
decades and insurmountable
investments to scale production capacity
to a level where SSDs could possibly
replace hard drives. This is due to the
complex and costly production of flash
memory in cleanrooms.
Therefore, not only does the majority
of installed storage capacity in data
centres consist of hard drives, but newly
added capacity is also predominantly
based on this classic storage medium. In
the year 2024 alone, 56 million
enterprise HDDs were shipped globally,
with a total capacity of 959 Exabytes -
that's 959 million Terabytes and more
than four times the capacity of enterprise
SSDs shipped in the same period (59
million units with a total of 226
Exabytes).
The reason the hard drive is still so in
demand almost 70 years after its debut is
due primarily to its consistent capacity
growth - by 2 terabytes per year - while
maintaining stable costs. Initially,
innovations like helium-filled drives and
thinner disks enabled higher capacities.
Today, new recording technologies such
as Microwave-Assisted Magnetic
Recording (MAMR) and Heat-Assisted
Magnetic Recording (HAMR) are driving
this progress. These technologies utilise
microwaves and laser diodes respectively,
which means less magnetic energy is
required and the write head can be
smaller. Smaller write heads mean denser
data storage and, consequently, higher
capacities. Experts predict that drives with
up to 50 terabytes per unit will be
possible in the coming years.
Additionally, despite their moving parts,
hard drives are remarkably durable and
efficient. The failure rate of enterprise
HDDs is typically around 0.35%, which
translates to just seven failed drives per
year in a data centre with 2,000 hard
drives in operation. Large data centre
operators and cloud providers often
achieve even better reliability rates.
Power consumption for hard drives is
also relatively consistent, regardless of
capacity or workload, as most energy is
used to spin the spindle - typically
around 7 to 8 W. For high-capacity
drives, this makes HDDs very energy-efficient,
consuming only 0.3 to 0.5 W
per terabyte, which is comparable to
SSDs of the same capacity.
The comparatively low performance is often cited against hard drives, but this case only holds true when considering a single drive. In modern storage
architectures, dozens of hard
drives work together in arrays,
enabling parallel read and write
operations. In this way, storage systems
can easily achieve throughput rates of 15
GB/s and over 15,000 IOPS.
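A rough back-of-the-envelope check of those figures - the drive count, per-drive capacity and per-drive throughput below are assumptions for illustration, not Toshiba specifications:

```python
# Back-of-the-envelope check using the figures quoted above; real arrays
# vary with drive model, RAID layout and controller overhead.

DRIVES_IN_ARRAY = 60             # "dozens of hard drives" - assumed count
CAPACITY_TB_PER_DRIVE = 24       # assumed enterprise capacity point
WATTS_PER_DRIVE = 7.5            # ~7-8 W spindle power quoted in the article
THROUGHPUT_MBPS_PER_DRIVE = 270  # assumed sequential rate per drive

capacity_tb = DRIVES_IN_ARRAY * CAPACITY_TB_PER_DRIVE
watts_per_tb = WATTS_PER_DRIVE / CAPACITY_TB_PER_DRIVE
aggregate_gbps = DRIVES_IN_ARRAY * THROUGHPUT_MBPS_PER_DRIVE / 1000

print(f"array capacity: {capacity_tb} TB")
print(f"power per TB:   {watts_per_tb:.2f} W/TB")   # lands in the 0.3-0.5 W/TB range
print(f"throughput:     {aggregate_gbps:.1f} GB/s") # roughly the 15 GB/s class quoted
```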
Ultimately, hard drives offer everything
data centre operators and cloud
providers value: high capacities at low
acquisition and operating costs, high
reliability, and sufficient performance for
most applications. Where performance
falls short, a few SSDs can easily be
added into the mix, but the majority of
data still resides on disks.
Hard drives may not be the star of the
data centre - they've been around too
long for that. Instead, they are the quiet,
indispensable workhorses that reliably
operate in the background. Without
them, it's fair to say, our digital world
would no longer function.
More Info: www.toshiba-storage.com
"No other medium can
match the capacity and
economics of hard
drives for today's data-hungry applications."
ROUNDTABLE: EMERGING MEMORY TECHNOLOGIES
EXPERT PERSPECTIVES:
MEETING FUTURE
WORKLOAD DEMANDS
AN INDUSTRY ROUNDTABLE ON THE RISE OF EMERGING MEMORY
TECHNOLOGIES: FEATURING CONTRIBUTIONS FROM INDUSTRY
EXPERTS INCLUDING BIOMEMORY, BLOOR RESEARCH, CERABYTE,
SCALEFLUX, SNIA AND TARMIN
In this roundtable, vendors, analysts, and
researchers share their perspectives on
the future of memory - and the
challenges of performance, capacity,
bandwidth, energy efficiency, and latency.
To set the stage, we start with an overview of
memory developments in recent years.
It often feels as though memory is an
outlier in the technology world. While we've
seen significant changes in compute power
(both with CPU and GPU) and storage,
memory development has been iterative
rather than revolutionary.
While that approach has worked in the
past, current memory technology is
starting to cause challenges, due to an
issue known as the "memory wall
problem". This occurs when a processor's
speed outpaces memory's bandwidth and,
"Emerging memory technologies are being driven by the explosive growth in AI and
machine learning, big data analytics, scientific computing, and hyperscale cloud
data centres," - JB Baker, ScaleFlux
as a result, the processor has to wait for
data to be transferred from memory,
introducing a bottleneck.
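A rough, roofline-style calculation illustrates the effect; the compute and bandwidth figures below are illustrative, not any specific product's specification.

```python
# Rough roofline-style illustration of the "memory wall": is a kernel limited
# by compute or by memory bandwidth? Hardware numbers are illustrative only.

PEAK_FLOPS = 100e12          # 100 TFLOP/s of compute (illustrative)
MEM_BANDWIDTH = 2e12         # 2 TB/s of memory bandwidth (illustrative)

def attainable_flops(arithmetic_intensity):
    """arithmetic_intensity = floating-point ops per byte moved from memory."""
    return min(PEAK_FLOPS, arithmetic_intensity * MEM_BANDWIDTH)

# Example: y = a*x + y (AXPY) moves 3 values (24 bytes) for every 2 FLOPs.
axpy_intensity = 2 / 24
print(f"AXPY attainable: {attainable_flops(axpy_intensity) / 1e12:.2f} TFLOP/s "
      f"of {PEAK_FLOPS / 1e12:.0f} TFLOP/s peak")
# -> the processor idles waiting on memory; only higher-intensity kernels
#    (e.g. large matrix multiplies) get anywhere near peak compute.
```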
The performance restrictions caused by the
memory wall problem are only getting
worse, as CPU and GPU advancement
continues to outpace improvements in
memory architecture. And it's being
exacerbated by the growth of demanding,
memory-intensive workloads such as high-performance
computing (HPC) and AI, which
didn't exist at the same scales until relatively
recently but are now seeing rapid adoption.
This issue is creating the need for new
memory technologies, designed for
modern workloads.
"Emerging memory technologies are being
driven by the explosive growth in AI and
machine learning, big data analytics,
scientific computing, and hyperscale cloud
data centres," said JB Baker, VP marketing
and product management, ScaleFlux.
"Traditional memory technologies like
DRAM and NAND flash are reaching
scaling, speed, and density limits that
restrict the performance required for next-generation workloads."
With AI demands highlighting the flaws in
existing systems, Baker believes that new
memory technologies are required.
"The needs for higher bandwidth, lower
latency, greater capacity, and energy
efficiency in AI and HPC applications are
exposing the inadequacy of traditional
solutions. We are indeed approaching the
physical and economic end of the road for
conventional memory scaling, making new
architectures and technologies essential,"
he noted.
While traditional memory technology still
has a role to play, other factors are pushing
the demand for emerging technologies.
David Norfolk, Practice leader:
development and governance, Bloor
Research, explained, "AI hype is driving
things - that and the need for vendors to sell
something new with higher margins. I very
much doubt that we are at the end of the
road yet, but people always want more
speed, scaling, and density. "What may be a
driver, is more energy efficiency, less heat
and more reliability - less waste."
DEFINING THE PROBLEM:
High-performance workloads have
numerous requirements, so no single
emerging memory technology works across
the board. For example, AI workloads
require a significant volume of data, both for
fresh processing and longer-term storage.
That applies to both typical Generative AI
(Gen AI) services, such as ChatGPT, and to a
growing number of physical machines that
collect data using sensors for decision-making.
But it isn't always clear what data is
needed, when, and for how long it should be
stored.
Martin Kunze, CMO, Cerabyte, explained,
"It is not yet defined how much raw sensor
data is needed for decision making, and
how long it needs to be retained for when it
comes to machine-human interaction. There
were already legal consequences for
companies that didn't keep enough data to
reconstruct accidents that were caused by
false AI-decisions."
Legal reasons, rather than purely
technological ones, will have their part to
play in how emerging memory technologies
are provisioned and used.
"The 'audit trail data' will be one of many
drivers that lead to the surging demand for
data storage," Kunze continued. "Current
storage technologies are approaching their
limits; analysts are forecasting a scenario
where mainstream technologies can deliver
only 50% of the required demand - a
looming supply gap could put AI-related
investments at risk."
A TIERED APPROACH:
Universal memory, which combines persistent
memory and storage into a single unit,
would seem to be the panacea, providing
fixed storage for vast amounts of data and
high speeds for processing on demand.
However, that is unlikely to be a realistic
proposition for some time, so tiered data
using a variety of technologies will be the
default in the short-to-medium term.
Arthur Sainio and Raghu Kulkarni, Persistent Memory Special Interest Group co-chairs, SNIA (The Storage Networking Industry Association), said, "Universal memory such as PCM and ULTRARAM promises to merge RAM speed with persistence, but faces manufacturing complexity, high costs, and scaling barriers. Tiered architectures will dominate short-to-medium term due to cost efficiency. Universal memory may see niche use (edge AI, aerospace) but requires material breakthroughs to displace tiered systems, likely post-2030. Hybrid solutions like CXL PMEM + DRAM + SSDs remain the pragmatic path."
Ransomware-Proof
Backups with Ootbi
Ootbi (Out-of-the-Box Immutability) delivers secure, simple, and powerful backup storage for Veeam
customers with no security expertise required.
In a world where businesses fall victim to ransomware every 11 seconds and 93% of attacks are targeting
backups, Ootbi by Object First helps make backup data ransomware-proof.
3 Reasons Why Ootbi Is the Best Storage for Veeam
Secure
• S3 out-of-the-box immutability
• A hardened storage target with
zero access to the root
• Separation of the Backup
Software and Backup Storage
layers according to Zero Trust
best practices
Simple
• NO security expertise required
• 15 minutes to deploy and scale
• Updated and optimized
automatically by Object First
Powerful
• Lightning fast backup
up to 8 GB/s
• Supercharged Instant Recovery,
capacity, and performance scale
linearly
• Use of standard Veeam
block size and encryption
Eliminate the need to sacrifice performance
and simplicity to meet budget constraints
with Ootbi by Object First.
Learn More About
the Best Storage
for Veeam
ROUNDTABLE: EMERGING MEMORY TECHNOLOGIES
Faster persistent memory technologies have
their place, although there are still hurdles
that need to be overcome.
"Emerging alternatives like MRAM and
ReRAM provide advantages such as near-
SRAM speed, zero standby power in the
case of MRAM, and analogue compute
capabilities like ReRAM, but face scalability
and manufacturing hurdles. They are
gaining some traction as they promise better
scalability, energy efficiency, and
performance for future HPC demands, but
have hurdles to overcome," said Saino and
Kulkarni. "CXL NV-CMM types of products
offer DRAM-like speed and persistence,
making them valuable for caching and
checkpointing functions in HPC
applications. High density hybrid CXL
solutions are likely as well."
NO NEW ARCHITECTURES:
One thing that seems clear is that there will
not be a new server architecture for HPC
and AI workloads that replaces what we
have today. Advances in CPU and GPU
technology, and large investments in such
platforms, still make general-purpose
computing the best fit for most jobs.
As such, some emerging memory
technologies are likely to be more of niche
interest, for custom jobs that require the
fastest speeds. Computational-RAM (CRAM),
where computations can take place directly
in RAM, is a good example of this.
"Although CRAM offers compelling
advantages for AI inference and acceleration
in theory, it suffers from very limited
programmability and restricted workload
flexibility. As a result, CRAM is unlikely to
replace the traditional server architecture for
general HPC. Instead, it will at most be
deployed selectively for niche applications,"
said Baker.
EFFECTIVE SCALING AND HIGHER DENSITY:
Irrespective of this, AI and HPC are pushing
the requirements for more memory and
require more flexible ways of using it. In that
regard, continuing to push the boundaries of
today's memory technologies makes sense,
as it can help maximise investment in current
computing architecture.
At the core of memory development are
two technologies that can help: 3D DRAM
for increased capacity and Compute
Express Link (CXL) for improved scaling and
memory pooling.
"HPC and AI require both 3D DRAM for
capacity and bandwidth, and CXL for
scalable, cost-effective memory expansion.
3D DRAM, such as HBM3, is ideal for on-package,
high-speed tasks like training large
AI models due to its fast data access and
energy efficiency. CXL will provide pooled
memory for flexibility and persistent
workloads," said Sainio and Kulkarni. "A
hybrid approach that combines these
technologies is essential for efficiently
meeting the growing demands of modern
HPC and AI applications."
Emerging technologies also promise to
maximise the investment in existing storage,
which is particularly important given the
need for a tiered approach to modern
workloads.
Kunze gives an example: "Emerging
technologies such as ceramic storage can
release expensive high performing storage
like HDDs which today is used for storing
cold data, for a better use."
Emerging memory technologies also
promise improved caching and access to
data available on traditional storage
technologies, such as flash and hard disks.
"Advanced caching strategies leveraging
faster memory types - such as HBM or
stacked DRAM - can significantly accelerate
access to hot data, improving the
performance of existing storage systems.
Using persistent memory for metadata
acceleration or tiered caching layers will
continue to enhance storage efficiency
without fundamentally redesigning
architectures," said Baker.
SOFTWARE IS CRITICAL:
While hardware may steal the limelight,
software is essential in provisioning and
managing data tiers. Crucially, software has
to make life easier and work with what's
available, rather than changing how
systems work.
This is a valuable lesson learned from Intel
Optane, as Lynn explained: "A final hurdle
for Optane was the slow adaptation of the
software ecosystem. While, in theory, Optane
DIMMs expanded memory transparently to
the OS, in practice, optimising databases
and file systems to take full advantage of its
unique persistence and performance
characteristics proved to be complex and
time-consuming, further hindering its
widespread and effective use."
Software is critical to the success of any
technology, particularly in a future where
resources must be efficiently combined
across different platforms.
"Software optimises workloads across
CPUs, GPUs, TPUs, and CRAM by
managing resources, scheduling tasks, and
improving memory use. Tools like
Kubernetes and TensorFlow ensure efficient
hardware utilisation, while future
innovations in AI-driven orchestration,
unified APIs, and real-time monitoring will
enhance performance and energy efficiency
across heterogeneous platforms," said
Sainio and Kulkarni.
BARRIERS TO UPTAKE:
While the AI explosion may make adoption
of emerging memory technologies a foregone
conclusion, there are still many risks,
particularly around the investment in existing
memory technologies. Demand for new
technologies can be limited by what's
currently working.
A general resistance to new technology is
something noted by Norfolk, who
highlighted that one of the biggest barriers
to adoption is "The amount of legacy tech
still in use and working 'well enough' in
many applications. Plus, general mistrust of
anything too new unless there is no
alternative."
In a similar vein, new technology has to
be demonstrably better than what's
available now. As Baker said, "New
memory technologies must not only
outperform but also offer acceptable
economics compared to DRAM or NAND
to achieve widespread adoption."
These are factors that we've seen time and
time again, but failure to invest in emerging
technologies poses a risk of its own. As
Kunze explained: "100x more money is
invested in computation than in memory and
storage. But without investment in newly
scalable technologies, billions of investments
in AI could be squandered due to lack of
storage. This looming risk should be
exposed to and explored in the AI and AI-
Investor community."
THE FUTURE IS COMING:
Despite these warnings, the requirements of
demanding computing workflows are only
exacerbating the memory wall problem,
increasing the need for novel solutions.
Emerging memory technologies are required
now more than ever, and wider adoption is
only a matter of time.
"Looking five years ahead, the confluence
of ever-increasing data intensity and the
scaling of datasets suggests we are indeed
on the cusp of a transformative period in
memory technology, arguably the most
significant in a generation. This relentless
growth in data demands will necessitate
radical advancements and new architectural
approaches to overcome the limitations of
current memory systems," said Lynn.
Developments in scalability and density
must be priorities for any new technology
looking to successfully tackle the memory
wall challenge. Thankfully, the building
blocks of these technological advancements
are already available.
"Breakthroughs in CXL-based memory and
Racetrack memory could transform the
industry. CXL will enable scalable, low-latency persistent memory integration, while
Racetrack memory offers ultra-high density,
faster speeds, and energy efficiency. These
advancements can revolutionise AI, HPC,
and edge computing performance," said
Sainio and Kulkarni.
It's important to think about how data will
be used to understand the future of
emerging memory technologies, as Kunze
explained: "There will be 'hot storage' and
'not so hot storage.' The distinction between
hot and cold storage/data will disappear;
rather, data will be classified by the need to
make it immediately accessible or not."
As a result, the future looks set to be based
on multiple technologies, with tiering used
to hit different requirements at different
points in a system. That means emerging
memory technologies, but also continuing to
push the limits of what today's technology
can offer.
"We expect there will be more flavours of
persistent and volatile memory. They will be
based primarily on DDR cell but also NAND
cells," said Baker. "The objective of DDR
based memory will offer lower power, slower
performance vs DRAM and cost vs a
standard DRAM. It will reside between
DRAM and NAND in the compute hierarchy.
The innovation on NAND memory will target
to expand the bandwidth in the overall
compute hierarchy to meet the needs of AI
and in-memory databases."
CONCLUSION
You'll probably be disappointed if you are
expecting a new, emerging memory
technology to become standardised in the
near future. For the time being, the
traditional tiered memory architecture isn't
going anywhere, and will continue to see
iterative improvements to boost speed and
capacity.
But, equally, the ever-growing demands of
AI and HPC workloads mean that there's a
sense of urgency to solve the performance bottlenecks in current memory designs.
Held back by issues such as high costs,
limited software support and a general
resistance to technological change,
emerging technologies have not caught on
quite yet.
That said, there is clearly a sense that
change is inevitable, sooner or later, and
various approaches could be adopted now
to address these bottlenecks. As our
contributors make clear, while universal
memory may still be years away, the
combination of tiered approaches, persistent
memory, and advanced orchestration
software promises a transformative era for
HPC and AI workloads. For the technology
industry, the challenge is not if, but how fast,
we can adapt.
The future is here.
Tiered Backup Storage
• Fastest backups
• Fastest restores
• Scalability for fixed-length backup window
• Comprehensive security with ransomware recovery
• Comprehensive disaster recovery from a site disaster
• Low cost up front and over time
Thank you so much
to all who voted, and
congratulations to our fellow
Storage Awards 2025 winners!
Visit our website to learn more
about ExaGrid’s award-winning
Tiered Backup Storage.
LEARN MORE
TECHNICAL: SUPERMICRO
STORAGE IS TRANSFORMING AI
WENDELL WENJEN, DIRECTOR OF STORAGE MARKET DEVELOPMENT, SUPERMICRO, CONSIDERS THE ROLE OF DATA IN ENTERPRISE AI, INCLUDING DATA LAKES FOR AGGREGATING ENTERPRISE DATA AND USING IT FOR ENTERPRISE-SPECIFIC AI MODEL TRAINING AND FINE-TUNING
It's a cliché to say that "data is the new oil"
but what does that mean for enterprise
AI? How has enterprise AI evolved in the
past 12 months? Let's start by debunking the
"data is the new oil" analogy. While oil is a
fungible and finite commodity, data is mostly
unique and can be infinitely created. What is
true is that data, particularly the enterprise's
proprietary data, is the fundamental source
for the customisation of AI models for
specific companies, industries and use cases.
Most enterprises are in the process of
planning AI-enabled applications, and
many have successfully deployed some into
production. The process of moving proof-of-concept
AI projects into full deployment
continues to be an area where many
projects face significant barriers and fail to
make the transition. These include elements
such as significant project costs, including
the AI development and deployment
infrastructure, project goal alignment with
stakeholders, and scarcity of AI
development talent.
THE ENTERPRISE DATA LAKE
The enterprise data lake, where relevant organisational data is collected from siloed applications, shared drives and log data, is the most common method of utilising such data to build AI learning models. Identifying, aggregating, extracting, normalising and other data ingestion tasks, while often the most time-consuming and labour-intensive part of the AI development process, are essential to create an accurate AI model. Modern
data lakes (as compared with
legacy Hadoop-based data
lakes) are based on object
storage software, which is often
stored on disk-based storage
servers, for cost efficiency.
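As a minimal sketch of that pattern - assuming an S3-compatible object store endpoint and the standard boto3 client, with placeholder bucket, path and credential values - raw documents might be landed in the data lake like this:

import boto3
from pathlib import Path

# Minimal data-lake ingestion sketch: push raw documents into an S3-compatible
# object store bucket. Endpoint, bucket and credentials are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # assumed S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

for path in Path("exports").glob("*.json"):
    # A "raw/" prefix groups un-normalised data for later extraction and normalisation.
    s3.upload_file(str(path), Bucket="enterprise-data-lake", Key=f"raw/{path.name}")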
Once ingested, the data is then
used to fine-tune commercial or
open-source large language models
(LLM), as in the case of generative AI
applications. Rather than creating a completely new LLM from scratch, a pre-built language model is used - one which incorporates commonly available domain knowledge but requires additional training using the enterprise's specific data. This enterprise model fine-tuning stage
requires dedicated GPU and storage
infrastructure and is a continuous process as
new data is created. The result is a
customised large language model which
contains the company's specific information
to generate responses.
It's often not feasible to retrain enterprise
LLMs every time new data is created,
especially in the case of real-time data such
as financial market information, news and
other temporal data. In these cases, retrieval
augmented generation (RAG) has become a
popular technique. It appends contextually relevant information to the input query, augmenting the original prompt. The retrieval phase searches a vector database for similar information, where the relevant information has previously been stored in the form of vector embeddings - numerical representations of the data - and then combines it with the tokenised original query as input into the enterprise LLM. This method produces more
relevant responses and reduces
hallucinations. The vector database used in
RAG is a data store which can be
implemented as file or object storage.
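A minimal, purely illustrative sketch of the retrieval step, using plain NumPy with stand-in embeddings rather than any particular vector database or embedding model, looks like this:

import numpy as np

# Toy RAG retrieval: cosine similarity between a query embedding and stored
# document embeddings, then prepend the top match to the prompt. Illustrative only.
docs = ["Q3 revenue grew 12% year on year.",
        "The support portal moved to a new address in May.",
        "Travel expenses require VP approval above £5,000."]
doc_vecs = np.random.rand(len(docs), 384)                 # stand-in for real embeddings
query_vec = doc_vecs[2] + 0.01 * np.random.rand(384)      # pretend the query concerns expenses

def top_k(query, matrix, k=1):
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
    return np.argsort(sims)[::-1][:k]

context = "\n".join(docs[i] for i in top_k(query_vec, doc_vecs))
prompt = f"Context:\n{context}\n\nQuestion: What is the approval limit for travel expenses?"
# `prompt` is what gets tokenised and passed to the enterprise LLM.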
CHANGES IN THE STORAGE
INFRASTRUCTURE
Storage and data management continue to
be an integral part of enterprise AI
infrastructure. This includes storage servers,
networking, and disk and flash media,
which all build a foundation for retaining
enterprise data in a persistent and protected
environment. Both disk and flash-based
storage are used in enterprise AI
infrastructure with tradeoffs in cost and
performance.
"The process of moving proof-of-concept AI projects into full deployment continues to be
an area where many projects face significant barriers."
Data management refers to the storage
management software used to maintain and
update digital information. This can be
block-based, file-based or object-based, and
whilst each storage access method has a
role in the enterprise AI infrastructure, they
again offer tradeoffs in performance and
flexibility to accommodate fixed or variable
sized data. A new element of data
management is the introduction of data
orchestration, which adds intelligent and
automated workflows to the data
management platforms.
"Data gravity" is a common metaphor referring to the difficulty of moving large data sets to different computing resources. As
enterprise data stores grow, the concept of
data gravity will drive more of the AI
computing workload to be done "in place".
This means that the computational resources
will come to the data or be incorporated into
the data management platforms rather than
moving the data to the compute resources.
This is especially important when executing demanding workloads at scale.
LARGE SCALE INFERENCE
One change from last year is the implementation of large-scale, high-volume inference as part of agentic AI workflows, which combine a series of reasoning or goal-seeking AI agents. This high-volume AI inference workload processes thousands of queries per second, requiring more efficient data processing.
One optimisation which is starting to be adopted is known as disaggregated inference, which separates the two phases of processing an inference query: first, the prefill phase tokenises the input query, and then the decode phase outputs the AI model's response. By dedicating separate GPU resources to each phase, overall inference throughput can be improved.
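The hypothetical Python mock-up below illustrates that split in miniature: one worker performs prefill and hands intermediate state to a separate decode worker via a queue. The functions are simple stand-ins rather than a real serving stack.

import queue, threading

# Toy disaggregated inference pipeline: a prefill worker tokenises incoming
# queries and hands intermediate state to a separate decode worker via a queue.
# Both stages are stand-ins for real GPU work running on dedicated hardware.
prefill_q, decode_q = queue.Queue(), queue.Queue()

def prefill_worker():
    while (query := prefill_q.get()) is not None:
        # Stand-in for tokenisation and prefill; the dict mimics the handed-off state.
        decode_q.put({"query": query, "tokens": query.split()})
    decode_q.put(None)                       # propagate shutdown to the decode stage

def decode_worker():
    while (state := decode_q.get()) is not None:
        print(f"response to {state['query']!r} generated from {len(state['tokens'])} tokens")

workers = [threading.Thread(target=prefill_worker), threading.Thread(target=decode_worker)]
for w in workers:
    w.start()
for q in ("What is our refund policy?", "Summarise the Q3 report"):
    prefill_q.put(q)
prefill_q.put(None)                          # signal end of input
for w in workers:
    w.join()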
In the decode phase, a further optimisation
is to store the results of previously processed
queries to look up the results when the same
token pattern is presented. The Key-Value
(KV) cache stores these previous results in
multiple tiers - from the very fast but small-scale GPU memory, to larger, local in-system
NVMe storage, and then further to shared
large scale network storage.
This KV cache can grow to multiple
petabytes, using shared NVMe-based file or
object storage for these intermediate token
results. Processing bottlenecks from
continuous re-computation of the same
query can be eliminated. By referring to
results stored in the KV cache, the overall
inference performance is greatly increased.
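A heavily simplified sketch of such a multi-tier lookup - with a plain Python dictionary and a local directory standing in for GPU memory and NVMe, and shared network storage omitted - might look like the following:

import hashlib, json
from pathlib import Path

# Toy multi-tier KV cache for inference results: check a small in-memory tier
# first (standing in for GPU memory), then a local spill directory (standing in
# for NVMe), before recomputing. Shared network storage would add a third tier.
GPU_TIER = {}                                # token-pattern hash -> cached result
NVME_TIER = Path("kv_spill"); NVME_TIER.mkdir(exist_ok=True)

def cache_key(token_pattern):
    return hashlib.sha256(json.dumps(token_pattern).encode()).hexdigest()

def lookup_or_compute(token_pattern, compute):
    key = cache_key(token_pattern)
    if key in GPU_TIER:                      # fastest tier
        return GPU_TIER[key]
    spill = NVME_TIER / key
    if spill.exists():                       # second tier: local spill
        result = json.loads(spill.read_text())
    else:                                    # miss: recompute and populate both tiers
        result = compute(token_pattern)
        spill.write_text(json.dumps(result))
    GPU_TIER[key] = result
    return result

print(lookup_or_compute([101, 2054, 2003], lambda toks: {"decoded": len(toks)}))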
STORAGE ECOSYSTEMS NEED TO BE A SOLUTION FOR AI
The role of data in enterprise AI includes the data lake for aggregating the enterprise data and using it for enterprise-specific AI model training and fine-tuning.
As AI becomes more prevalent in corporate environments, businesses need to adopt tools like RAG inference, which uses a vector database to enable the fast lookup of enterprise-specific information related to their AI queries. Another new trend is the need for large-scale inference and the implementation of disaggregated inference processing, with KV cache data primarily stored in flash-based network storage.
With the state-of-the-art enterprise AI
infrastructure and processes continuing to
evolve and improve, the companies
implementing these projects need to develop
infrastructure which is flexible, reconfigurable
and able to support new AI deployment
methods developed in the future. However,
the fundamental storage infrastructure and
management of enterprise data will always
be reusable.
More Info: www.supermicro.com
CASE STUDY: NATIONAL LIBRARY OF SCOTLAND
NATIONAL LIBRARY OF
SCOTLAND ENTRUSTS
CULTURAL TREASURES TO SCALITY
THOUSANDS OF ASSETS ARE DIGITISED AND STORED ON SCALITY
RING PER WEEK, PRESERVING PRECIOUS COLLECTIONS AND
MAKING THEM ACCESSIBLE TO THE PUBLIC
The National Library of Scotland (NLS)
houses one of Scotland's national
cultural collections, and, as a legal
deposit library, it is entitled to claim a copy
of everything published in the United
Kingdom. The 31 million items in its
growing collection already occupy 120
miles of shelving, with around 5,000 new
items being added every single week.
The library's holdings include vastly differing
formats - from maps to music to VHS tapes -
as well as priceless items, including one of
only 21 complete Gutenberg Bibles known to
exist and the last letter written by Mary Queen
of Scots just before her execution in 1587.
THE CHALLENGE
The library set an ambitious goal for a third
of its collection to be in digital format by
2025. Their former preservation storage
involved one copy on SAN, backed up on
tape. As the digitisation initiative got
underway, the strategy had to be
reconsidered. To preserve their precious data,
NLS decided to keep three copies in different geographic locations and use multiple technologies for safety, with checksums run twice each year to ensure data integrity.
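A minimal sketch of that kind of fixity check - hypothetical Python, not the library's or Scality's actual tooling - hashes each stored file and compares it against a digest recorded at ingest time:

import hashlib, json
from pathlib import Path

# Minimal fixity check: compare SHA-256 digests of stored files against a
# manifest recorded at ingest time. Paths and manifest format are assumptions.
def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

manifest = json.loads(Path("manifest.json").read_text())   # {"relative/path": "digest", ...}
for rel_path, expected in manifest.items():
    actual = sha256_of(Path("collection") / rel_path)
    if actual != expected:
        print(f"INTEGRITY FAILURE: {rel_path}")             # flag for restore from another copy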
With so much irreplaceable data to hold,
protect and make accessible for generations
to come, eliminating tape, while ensuring
streamlined growth and enabling simple on-prem and cloud storage synergy with S3,
emerged as key goals.
THE OUTCOME
After receiving proposals from "all of the
expected vendors," NLS chose Scality RING as
a foundation of their preservation storage
strategy. In addition to providing the resilient
S3-compatible storage they wanted, Scality
further advanced the library's data integrity
assurance model by developing a Bitrot
checker for digital preservation that
streamlines the integrity checking process.
Now that Scality RING has shown its strength,
the National Library of Scotland is looking to
add use cases and is even considering
offering it as a service to other organisations,
given its multi-tenancy capabilities.
"In an environment where there are large
volumes of invaluable data, we needed a
ransomware-proof backup solution that was
easy and quick to implement while also being
capable of infinite scaling up as data and use
cases increase. Working with Scality has been
brilliant."
Stuart Lewis, Associate Director, National Library of Scotland
THE RESULTS
Scalability: Now the storage can grow as they achieve their digitisation goals. All they have to do is add more capacity, linearly.
There's no rip and replace, and no technology
refresh - add drives to servers, add servers
when the current ones are at capacity.
Standard S3 interface: To achieve a single
interface for cloud and on-prem object
storage, the library decided to standardise on
S3. This decision motivated a re-engineering
of their homegrown asset management system
to support S3 - and was key to choosing
Scality RING.
Lower TCO: The library maintains three
copies - one in each of their data centres and
one in the cloud. Ease of management and the ability to scale without replacement, coupled with software licensing that doesn't penalise for replicating the data, add up to big savings.
Less stress: No LUNs means less maintenance
and no more lengthy tape backups from
which the library wondered if they would ever
be able to restore. All that, plus they can lose
a data centre and everything is fine - even
available throughout.
"We're looking at moving other backups,
replacing tape with disk-to-disk backup;
moving the non-print collections and even
offering it as a service to other organisations,
given its multitenancy capabilities."
Stuart Lewis, Associate Director, National Library of Scotland
More Info: www.scality.com