
STORAGE

MAGAZINE

The UK’s number one in IT Storage

January/February 2026

Vol 26, Issue 1

CASE STUDY:

NAS in Post-Production

DATA FORTIFICATION:

Pillars of Risk Assessment

ROUNDTABLE:

2026 Predictions from 8 Experts

DATA GRAVITY:

Cloud Storage Strategy

COMMENT - RESEARCH - INTERVIEWS - CASE STUDIES - OPINIONS - PRODUCT REVIEWS


LOGICALLY AIR-GAPPED ZFS LOCK

Besides the native XFS immutability granted by Linux Hardened Repository, ZFS read-only snapshots are logically air-gapped. In turn, they prevent data loss even after a root compromise.

VEEAM® MEETS ZFS IMMUTABILITY: THE ULTIMATE HARDENED REPOSITORY

Are you brave enough to rely on only a single layer of immutability defense? Not a chance! You deserve more. It's time to move past sluggish performance and insufficient protection.

Open-E JovianVHR provides the technically superior foundation for your custom-built Hardened Repository from Veeam. Eliminate the single point of failure with an independent layer of protection.

Choose Double-Layer Immutability to ensure Cyber-Resilience against really sophisticated threats. Settle for nothing less than the highest standard through the ultimate defense mechanism.


GUARANTEED DATA INTEGRITY

ZFS provides end-to-end checksumming to defeat "bit rot" and ensures your backups are valid. Self-healing autocorrects corrupt blocks in redundant arrays.
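The detect-and-repair loop described here can be sketched as a toy model in Python - an illustration of the checksum-and-mirror idea, not ZFS code (ZFS uses per-block fletcher4 or SHA-256 checksums in its block pointers; SHA-256 stands in here). Every block is written with a checksum to two mirrored copies, and a read that fails verification on one copy is repaired from the good one:

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for ZFS's per-block checksum (fletcher4/SHA-256)
    return hashlib.sha256(data).hexdigest()

class MirroredStore:
    """Toy model of checksummed, self-healing mirrored storage."""

    def __init__(self):
        self.copies = [{}, {}]   # two mirrored block maps
        self.sums = {}           # block id -> expected checksum

    def write(self, block_id: str, data: bytes) -> None:
        self.sums[block_id] = checksum(data)
        for copy in self.copies:
            copy[block_id] = data

    def read(self, block_id: str) -> bytes:
        expected = self.sums[block_id]
        for copy in self.copies:
            data = copy[block_id]
            if checksum(data) == expected:
                # Self-heal: rewrite any mirror copy that failed verification
                for other in self.copies:
                    if checksum(other[block_id]) != expected:
                        other[block_id] = data
                return data
        raise IOError(f"block {block_id}: all copies corrupt")

store = MirroredStore()
store.write("b1", b"backup payload")
store.copies[0]["b1"] = b"bit rot!"              # silently corrupt one mirror
assert store.read("b1") == b"backup payload"     # corruption detected, good copy returned
assert store.copies[0]["b1"] == b"backup payload"  # corrupt copy healed
```

The key property, as in ZFS, is that the checksum lives apart from the data it covers, so silent corruption on one device is detected on read and repaired from redundancy rather than propagated into restores.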

HIGH PERFORMANCE & LOW TCO

Leverage ZFS Caching and inline compression for multi-GB/s throughput. Deploy on commodity hardware with simple licensing to dramatically lower your Total Cost of Ownership.


Stop worrying about your data. Get the true digital backup vault for your Hardened Repository.

www.open-e.com

Get a Free Guide



CONTENTS


COMMENT .................................................................. 4

CASE STUDY: EXAGRID-VEEAM SOLUTION REDUCES VM BACKUPS BY 95% AT THE FOOTBALL POOLS ........ 6
Turning backup risk into a safe bet for high-speed recovery

Q&A STRATEGY SESSION WITH INFOSCALE: THE FUTURE OF RESILIENCE ........ 8
InfoScale's Bhooshan Thakar talks in detail on the future of resilience, AI and multicloud strategy

MANAGEMENT: SECURING AI WORKLOADS WITH SOVEREIGN CLOUD ........ 10
Vultr considers the storage decisions that determine whether enterprises scale AI securely and competitively

PARTNER INSIGHT: A TO Z OF DATA STORAGE BOOK NOW AVAILABLE ........ 12
Hot off the press, Virtual Effect tells us what's behind their 25th edition

OPINION: THE EXECUTIVE'S 2026 GUIDE TO DATA FORTIFICATION ........ 16
Crawford Technologies details eight pillars of risk assessment for data

CASE STUDY: NAS BRINGS THE WOW FACTOR TO POST-PRODUCTION AT G6 MOTION CONTROL ........ 20
Lights, Camera, Action! Seagate storage underpins high-volume post-production and archiving

CASE STUDY: KRYSTAL MODERNISES STORAGE TO POWER HIGH-PERFORMANCE VIRTUAL INFRASTRUCTURE ........ 22
StorPool provides Krystal's storage foundation

ROUNDTABLE: 2026: THE YEAR THAT STORAGE ENTERS ITS "STRATEGIC ERA" ........ 24
Eight leading storage vendors give insight and predictions for the year to come

OPINION: HOW TODAY'S CLOUD STORAGE STRATEGIES ARE KEY IN ADDRESSING DATA GRAVITY ........ 30
Leaseweb details why tackling data gravity is now central to achieving cloud optimisation

STRATEGY: LEARNING FROM PREVIOUS FLASH INDUSTRY SHUTDOWNS TO PREDICT RECOVERY CYCLES ........ 32
StorONE considers why storage architectures must adapt to slower flash recovery cycles

www.storagemagazine.co.uk  @STMagAndAwards  Jan/Feb 2026  STORAGE MAGAZINE  03


COMMENT

EDITOR: Sharon Munday

editor@storagemagazine.co.uk

SUB EDITOR: Mark Lyward

mark.lyward@btc.co.uk

REVIEWS: Dave Mitchell

EDITOR'S COMMENT: SHARON MUNDAY, EDITOR, STORAGE MAGAZINE

PUBLISHER: John Jageurs

john.jageurs@btc.co.uk

LAYOUT/DESIGN: Ian Collis

ian.collis@btc.co.uk

SALES/COMMERCIAL ENQUIRIES:

Lucy Gambazza

lucy.gambazza@btc.co.uk

Stuart Leigh

stuart.leigh@btc.co.uk

MANAGING DIRECTOR: John Jageurs

john.jageurs@btc.co.uk

DISTRIBUTION/SUBSCRIPTIONS:

Christina Willis

christina.willis@btc.co.uk

PUBLISHED BY: Barrow & Thompkins Connexions Ltd. (BTC)
Suite 2, 157 Station Road East, Oxted, RH8 0QE
Tel: +44 (0)1689 616 000
Fax: +44 (0)1689 82 66 22

SUBSCRIPTIONS: UK £35/year, £60/two years, £80/three years; Europe: £48/year, £85/two years, £127/three years; Rest of World: £62/year, £115/two years, £168/three years. Single copies can be bought for £8.50 (includes postage & packaging). Published 6 times a year.

No part of this magazine may be reproduced without prior consent, in writing, from the publisher.

©Copyright 2026 Barrow & Thompkins Connexions Ltd

Articles published reflect the opinions of the authors and are not necessarily those of the publisher or of BTC employees. While every reasonable effort is made to ensure that the contents of articles, editorial and advertising are accurate, no responsibility can be accepted by the publisher or BTC for errors, misrepresentations or any resulting effects.

The UK Launches a Refreshed Cyber Action Plan: What It Means for the Storage of Data, Resilience and Service Trust in the Public Sector

We begin 2026 with a notable refresh to the UK's cyber resilience ambitions. In early January the UK Government released an updated Government Cyber Action Plan (GCAP) that sets out how public services intend to strengthen cyber security in the year ahead. The update builds on the 2022 Cyber Security Strategy but with clearer expectations, defined delivery roles and funding for central coordination. While primarily aimed at the public sector, the direction of travel is familiar for us all. Digital services are critical infrastructure and resilience is now treated as a measurable outcome.

For the storage community this matters because cyber resilience depends entirely on a data infrastructure that can withstand disruption and recover quickly. Public services are increasingly digital by default across the NHS, policing, education and local government. That elevates the importance of secure storage, reliable backup, immutable retention and fast recovery. GCAP's call for better risk visibility also reflects ongoing trends that we detail in this issue, where case studies illustrate built-in resilience and continuity actively shaping storage architectures.

Other news to start the year includes Veeam's quiet acquisition of Object First. In a blog post we heard that the deal will bring Object First's immutable object storage appliances directly into the Veeam portfolio, pointing to continuing market demand for simple, cyber-aware data protection platforms. Perhaps this New Year marriage is no surprise - Object First's Ootbi appliances have always been positioned as a ransomware-resilient storage layer for Veeam environments, with immutability, easy deployment and scale out of the box.

If we look at these two stories together, January signals that we will continue to prioritise resilience, visibility and continuity across both the public and private sectors. Storage sits at the centre of that equation and, as our 2026 Predictions Roundtable feature shows, eight industry leaders go full throttle predicting a year in which AI permeates everything. While their commentaries vary, the shared conclusion is clear. Infrastructures that can access, protect, retain and restore all data reliably and fast will remain one of the most important enablers of trust this year.

Welcome to the first issue of Storage Magazine in 2026.

Best

Sharon Munday

Editor, Storage Magazine



The future is here.

Tiered Backup Storage

• Fastest backups

• Fastest restores

• Scalability for fixed-length backup window

• Comprehensive security with ransomware recovery

• Comprehensive disaster recovery from a site disaster

• Low cost up front and over time

Thank you so much to all who voted, and congratulations to our fellow Storage Awards 2025 winners!

Visit our website to learn more about ExaGrid's award-winning Tiered Backup Storage.

LEARN MORE


CASE STUDY: THE FOOTBALL POOLS

EXAGRID-VEEAM SOLUTION REDUCES VM BACKUPS BY 95% AT THE FOOTBALL POOLS

IN COMBINING EXAGRID AND VEEAM, THE FOOTBALL POOLS TURNS BACKUP RISK INTO A SAFE BET FOR HIGH-SPEED RECOVERY

The ExaGrid system is easy to install and use and works seamlessly with the industry's leading backup applications so that an organisation can retain its investment in its existing backup applications and processes. In addition, ExaGrid appliances can replicate to a second ExaGrid appliance at a second site or to the public cloud for disaster recovery (DR).

VM BACKUPS REDUCED BY 95%

Chris backs up The Football Pools' data on a daily, weekly, and monthly schedule. "Our data is typically comprised of virtual hard disk files followed by bespoke application data. What I mean by that is that the data can be outputs from in-house applications. It can be additional documents, databases, or a mixture of Windows and Linux operating systems."

The Football Pools, headquartered in Liverpool, UK, has been a core part of the British footballing weekend since 1923, offering customers and pundits the chance to win £3 Million twice a week every week. During the last 95 years, The Football Pools have paid out over £3 billion in prize money to more than 60 million lucky winners.

Having worked with ExaGrid closely in a previous organisation, Chris Lakey, The Football Pools' Infrastructure Manager, quickly recommended the company switch after starting his new position there. Chris details his reasoning: "The key points that I brought up were ExaGrid's deduplication, scalability, and the fact that it eliminates the need for manual intervention. Those points, plus the fact that the overall total cost of using an ExaGrid system was far less expensive than our previous solution, were what led us to make the switch."

The company installed an ExaGrid system at its primary site that cross-replicates with another system at its data centre (colo) site. "Installation was extremely simple. We were able to have the ExaGrid systems up and running within an hour, from out of the box to sending backup data to the system," noted Chris.

Chris was pleased that ExaGrid integrates well with Veeam, The Football Pools' existing backup application. "I'd say ExaGrid integrates better with Veeam than any other backup application. In my previous role, I used Backup Exec, which is a little more challenging to get configured, though still beneficial in terms of deduplication and compression."

"We've tried to maintain consistency by keeping the backup start times the same, only now they are so much quicker. Backups used to take up to 40 minutes per virtual machine (VM). Now, backups of each VM are deduplicated and encrypted-at-rest inside of two minutes," said Chris. "We run at a high speed now - a full backup of our entire estate at our main office can be as short as five-and-a-half hours."

ExaGrid writes backups directly to a disk-cache Landing Zone, avoiding inline processing and ensuring the highest possible backup performance, which results in the shortest backup window. Adaptive Deduplication performs deduplication and replication in parallel with backups for a strong recovery point objective (RPO). As data is being deduplicated to the repository, it can also be replicated to a second ExaGrid site or the public cloud for DR.
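As a rough mental model of that pipeline - purely illustrative, not ExaGrid's implementation - backups land in a fast zone at full write speed while a second stage deduplicates into a long-term repository concurrently, so ingest is never throttled by inline processing:

```python
import hashlib
import queue
import threading

landing_zone = []   # recent backups, kept whole for fast restores
repository = {}     # deduplicated store: block hash -> block
work = queue.Queue()

def ingest(backup_id: str, blocks: list[bytes]) -> None:
    """Stage 1: land the backup at full write speed, no inline dedup."""
    landing_zone.append((backup_id, blocks))
    work.put(blocks)            # hand off to the parallel dedup stage

def dedupe_worker() -> None:
    """Stage 2: dedupe into the repository in parallel with ingest."""
    while True:
        blocks = work.get()
        if blocks is None:      # sentinel: shut down
            return
        for b in blocks:
            # Identical blocks across backups are stored only once
            repository.setdefault(hashlib.sha256(b).hexdigest(), b)

t = threading.Thread(target=dedupe_worker)
t.start()
ingest("mon", [b"os-image", b"app-data"])
ingest("tue", [b"os-image", b"app-data", b"new-file"])  # mostly unchanged
work.put(None)
t.join()
assert len(landing_zone) == 2   # both backups restorable at landing-zone speed
assert len(repository) == 3     # duplicate blocks stored once
```

The design point the article describes is visible in the split: the backup window depends only on stage 1, while stage 2 (and, conceptually, replication to a DR site) drains the queue in the background.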




KEY BENEFITS:

• Switch to ExaGrid results in 95% shorter VM backups
• Extremely high data deduplication - 29:1 dedupe ratios for Linux backups
• ExaGrid is an easy, 'zero-touch' solution requiring less technician involvement


HIGH DEDUPLICATION RATIO FOR LINUX BACKUPS

Incorporating data deduplication into The Football Pools' backup environment was an important factor in the company's search for the right solution. Chris details: "Our deduplication is extremely high, and our best deduplication ratio is seen with our Linux backups - we're actually running at a 29.7:1 ratio."

Veeam uses the information from VMware and Hyper-V and provides deduplication on a per-job basis, finding the matching areas of all the virtual disks within a backup job and using metadata to reduce the overall footprint of the backup data. Veeam also has a dedupe-friendly compression setting which further reduces the size of the Veeam backups in a way that allows the ExaGrid system to achieve further deduplication. This approach typically achieves a 2:1 deduplication ratio.

Veeam uses changed block tracking to perform a level of data deduplication. ExaGrid allows Veeam deduplication and Veeam dedupe-friendly compression to stay on. ExaGrid will increase Veeam's deduplication by a factor of about 7:1, for a total combined deduplication ratio of 14:1, reducing the storage required and saving on storage costs up front and over time.
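The arithmetic behind the combined figure is simply multiplicative - each stage divides the stored footprint again, so per-stage ratios multiply. A quick sketch (the 2:1 and 7:1 figures come from the article; the byte counts are hypothetical, for illustration only):

```python
def combined_ratio(*stage_ratios: float) -> float:
    # Dedup/compression stages compound: each stage shrinks the
    # already-reduced output of the previous one, so ratios multiply.
    result = 1.0
    for r in stage_ratios:
        result *= r
    return result

veeam, exagrid = 2.0, 7.0              # per-stage ratios from the article
total = combined_ratio(veeam, exagrid)
assert total == 14.0                    # 2:1 x 7:1 = 14:1

raw_backup_tb = 28.0                    # hypothetical backup set size
stored_tb = raw_backup_tb / total
assert stored_tb == 2.0                 # 28 TB of backups land as 2 TB on disk
```

The same multiplication explains why ratios vary so much by data type: highly repetitive Linux backups compound to figures like the 29.7:1 quoted above, while already-compressed data barely reduces at all.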

'ZERO-TOUCH' SOLUTION

Chris values the simplicity of his backup environment, now that ExaGrid has been installed. "There's been much less technician involvement since we've introduced ExaGrid. It's zero touch from an administrator's perspective. I'm most impressed with how easy it is to set the system up, and how well it integrates with backup products. Once the ExaGrid system is configured and the backup schedule is set up in Veeam, there's no need to do anything else. Knowing that the backups will continue to run has given me peace of mind. I can relax and focus my time on other issues."

In addition to the low-maintenance system, Chris appreciates working with ExaGrid's customer support. "I've worked with two ExaGrid support engineers and have found that both are equally helpful and always available. It's great to know they're just a phone call away."

More Info: www.exagrid.com



Q&A: INFOSCALE

INTELLIGENT RESILIENCE AT AN INFLECTION POINT

IN THIS STORAGE MAGAZINE Q&A, BHOOSHAN THAKAR OUTLINES HOW INFOSCALE IS HELPING ENTERPRISES MODERNISE, REDUCE PLATFORM DEPENDENCY AND DESIGN RESILIENCE AROUND APPLICATIONS

With more than two decades spent shaping enterprise resilience strategies, Bhooshan Thakar has witnessed the storage industry move through repeated cycles of platform change, architectural reinvention and varying approaches to assuring resilience. Today, as General Manager of InfoScale, his focus is firmly on helping organisations extract long-term value from their technology investments while adapting to an increasingly complex operating environment.

InfoScale's recent journey has become increasingly relevant to industry observers. Over the past 12 months, the company has entered a new phase operating as a standalone business within the giant Cloud Software Group (which has brought together a portfolio of major enterprise software platforms including Citrix, NetScaler and InfoScale), and has gained sharper focus, increased resources, and renewed investment behind its "resilience anywhere" mission.

The timing of CSG's investment into InfoScale matters. Enterprises today are in a constant state of reflection: modernising estates, reassessing long-standing platform dependencies, examining public cloud expansion, while simultaneously responding to heightened regulatory and cyber pressure.

Resilience is no longer about individual systems or components; it is about ensuring applications and business services continue to operate successfully. CIOs increasingly prioritise outcomes such as continuity, compliance and brand reputation, rather than anchoring resilience strategies to any single technology stack.

Storage Magazine (SM): You've spent more than 25 years in enterprise resilience. What convinced you that the industry is finally ready for a more intelligent approach?

Bhooshan Thakar: When you've been in this space for as long as I have, you see the same challenges surface repeatedly. Historically, resilience was treated as protection against isolated single points of failure, e.g. a piece of hardware, a storage system, a network component. That made sense when environments were largely on-prem and relatively static.

Equally, the need for resilience has really changed. Today CIOs talk of 'when, not if' and we discuss the scale, reach and impact of disruption. In recent years, we have seen many incidents with global consequences and prolonged recovery times, even when there was no malicious intent involved. These moments reinforce a simple reality: failure is no longer an edge case. It is something enterprises now assume will happen at some point.

That's shaped my conviction that resilience must become more intelligent and proactive. CIOs are now driving that change because the cost, complexity and consequences of downtime are impossible to ignore. Resilience has had to move closer to the application and be designed for dynamic environments.

SM: Why is now such a pivotal moment for InfoScale to match these resilience goals?

Bhooshan Thakar: The past 18 to 24 months have created a perfect storm for enterprise IT. Organisations are running business-critical applications across hybrid estates, multiple clouds and modern platforms, while tolerance for disruption continues to shrink.

At the same time, the business impact of downtime has become far more visible. Whether the cause is a cyber incident, a software update, a configuration change or a platform transition, the consequences are immediate and measurable. Downtime now carries financial, regulatory and reputational risk, elevating resilience from a technical concern to a board-level priority and responsibility.

That mindset change has altered how organisations treat resilience. The question is now how resilience should be designed and implemented for maximum protection. Traditional, infrastructure-centric models do not align well with that goal; an application-centric, infrastructure-agnostic approach does. InfoScale aligns closely with where the market now is.

SM: Why has resilience shifted from an infrastructure concern to an application-level priority?

Bhooshan Thakar: Applications don't live in one place anymore. They span infrastructure layers, platforms, and environments. And they're changing faster than ever. That creates a real challenge. Operations teams are quickly realising that traditional high availability and recovery strategies weren't built to detect, anticipate, or prevent the kinds of failures that now cascade across the stack. Meanwhile, the business now feels any of those downtime impacts almost immediately.

So, it's not that resilience has shifted from infrastructure to the application. It's that resilience can no longer live in silos. As systems become more interconnected and move faster, infrastructure-centric approaches leave teams increasingly exposed. Resilience today requires full-stack awareness and coordinated response, with the application as the point where application state, data state, and dependencies come together in real time. That means continuously protecting state, absorbing disruption as it happens, and restoring operations without having to rebuild them.

InfoScale was built for this future. It delivers an intelligent, software-defined resilience layer that spans the full stack while operating alongside the application. It keeps applications available, protects application and data state, withstands failure, and recovers only when needed.

The result is resilience as a real-time operating model that preserves uptime, protects data, and lets organisations evolve across hybrid and multi-cloud environments without any tradeoffs.

SM: With cloud modernisation and VMware exit strategies accelerating, how are CIOs and channel partners rethinking portability and vendor dependence?

Bhooshan Thakar: Modernisation is unavoidable, and for many organisations it is happening under significant time pressure. Cloud adoption, platform change and vendor concentration risk are converging, particularly in regulated industries.

Portability is now critical. CIOs want the freedom to run applications where it makes sense for the business, without being locked into a single platform or vendor. At the same time, they need confidence that resilience, availability and compliance requirements will be met.

InfoScale supports that flexibility by decoupling application resilience from infrastructure choices. Whether workloads are on-premises, in private or public cloud, virtualised or containerised, resilience policies remain consistent. That allows organisations to modernise at their own pace, reduce dependency risk and meet regulatory expectations without compromising continuity.

SM: CIOs increasingly frame resilience in terms of business outcomes rather than avoiding downtime. What does that look like in practice?

Bhooshan Thakar: From a CIO perspective, resilience is no longer just about systems being available. It comes down to three core business outcomes. The first is financial impact: how quickly can you recover and how much downtime cost can you avoid? The second is reputation: how do you maintain trust with customers, partners and regulators when disruption occurs?

The third outcome is enabling the business to focus on innovation and growth. CIOs should not be consumed by constant technology shifts underneath their applications. Resilience should provide stability, allowing organisations to adapt and evolve without distraction.

InfoScale supports this by abstracting complexity away from the application layer, enabling IT leaders to focus on delivering business value while resilience operates as an embedded, outcome-driven capability rather than a reactive technical function.

SM: Ransomware has become a continuity and compliance issue as much as a cybersecurity one. How is that changing recovery strategies?

Bhooshan Thakar: Ransomware has fundamentally shifted the resilience conversation. Protection remains important, but recovery has become critical. Data shows that organisations rarely recover 100% from ransomware incidents, and recovery times in large enterprises can stretch into weeks.

That has significant implications for continuity, cost, reputation and regulatory exposure. As a result, enterprises are rethinking recovery strategies with a strong focus on speed and data integrity. The goal is to minimise downtime and data loss, reducing the leverage attackers hold.

InfoScale plays a key role by enabling very low recovery time objectives (RTO) and recovery point objectives (RPO). By making recovery faster and more predictable, organisations can respond more confidently to incidents and reduce the broader business impact of ransomware events.

SM: AI is starting to influence how organisations anticipate and respond to disruption. What role do you see it playing in the future of resilience?

Bhooshan Thakar: AI has the potential to move resilience from a reactive discipline to a proactive one. By applying machine learning close to the application and data, it becomes possible to detect anomalies, monitor system behaviour and identify configuration drift before those issues lead to downtime.

InfoScale's proximity to the application layer makes this particularly powerful. By analysing patterns and changes across the environment, we use AI to provide early warning signals that allow teams to intervene sooner.

The direction is clear: AI will help organisations anticipate disruption rather than simply respond to it, eventually shifting the resilience conversation to autonomous, self-healing application resiliency. As AI continues to mature, resilience systems will increasingly become self-correcting, enabled by both InfoScale's innovation and the wider R&D strength of Cloud Software Group.

More Info: www.infoscale.com



MANAGEMENT: SOVEREIGN CLOUD

SECURING AI WORKLOADS WITH SOVEREIGN CLOUD

KEVIN COCHRANE, CHIEF MARKETING OFFICER AT VULTR, DELVES INTO THE STORAGE DECISIONS THAT DETERMINE WHETHER ENTERPRISES SCALE AI SECURELY AND COMPETITIVELY

Artificial Intelligence has fundamentally transformed how businesses create, store and protect data. Training a single LLM can generate terabytes of data containing proprietary algorithms and sensitive information. Inference applications further increase risk by exposing model parameters to potential exploitation, while agentic AI systems continuously query databases holding competitive intelligence in real time.

The security implications are profound. Traditional storage architectures, designed for conventional database workloads, lack the necessary controls to safeguard AI assets against modern threats. More critically, reliance on hyperscaler infrastructure concentrates AI workloads in foreign jurisdictions, where data can be accessed through government requests. For regulated industries like finance, healthcare and public sector services, this creates an unacceptable level of exposure.

Sovereign AI cloud infrastructure directly addresses these challenges by embedding compliance, security, and geographic control into the storage layer itself. This approach provides a foundation for enterprises to deploy AI at scale - without compromising data sovereignty or protection.

THE SECURITY CHALLENGE

AI deployments introduce potential security exposure for proprietary data. A successful attack on that customer data could halt production systems, destroy training data, and expose intellectual property.

Nation-state actors are also actively targeting leading AI research facilities. Competitors attempt to extract model parameters through sophisticated inference attacks, while insider threats represent growing risks as AI teams rapidly expand. The concentration of high-value data in centralised locations makes AI infrastructure an attractive target.

Neocloud and private cloud architectures offer inherent security advantages over shared infrastructure. Air-gapped environments physically isolate AI workloads from other tenants, external networks, and the public internet. Dedicated control planes ensure that administrative access remains completely untethered from shared management systems. Geographic isolation guarantees data never crosses borders without explicit authorisation, enforcing regulatory compliance and data residency and eliminating exposure - critical for organisations in healthcare, government, financial services, and other regulated sectors.

MULTI-TIER STORAGE WITH BUILT-IN RESILIENCE

Training a single AI model can produce hundreds of terabytes of checkpoint data, multiple model versions, and extensive logs. Without effective tiering and resilience strategies, organisations face impossible trade-offs between affordability and data protection.

Object storage provides the foundational layer, but performance and resilience requirements vary dramatically across workloads. Active training datasets require high-throughput storage capable of serving gigabytes-per-second performance, with redundancy that prevents single points of failure. Historical checkpoints and logs must remain protected and retrievable to support compliance auditing or model reproduction.

Modern sovereign cloud architectures address these demands through multiple performance tiers with integrated resilience. Accelerated tiers built on NVMe handle active workloads and write-heavy operations, with replication across availability zones to ensure fault tolerance. Premium tiers balance performance and cost whilst maintaining geographic redundancy within sovereign boundaries. Standard tiers provide cost-effective, long-term retention with durable storage guarantees.
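The tiering logic described above can be caricatured as a small routing function. The tier names follow the article; the workload attributes and thresholds are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    write_heavy: bool        # active training / checkpoint churn?
    throughput_gbps: float   # sustained throughput required
    retention_years: float   # how long the data must be kept

def pick_tier(w: Workload) -> str:
    # Hypothetical mapping of the three tiers described in the article:
    # accelerated (NVMe, cross-AZ replicated), premium (balanced),
    # standard (durable long-term retention).
    if w.write_heavy or w.throughput_gbps >= 1.0:
        return "accelerated"
    if w.retention_years >= 5:
        return "standard"
    return "premium"

assert pick_tier(Workload("training-set", True, 4.0, 0.5)) == "accelerated"
assert pick_tier(Workload("audit-logs", False, 0.1, 7)) == "standard"
assert pick_tier(Workload("inference-cache", False, 0.5, 1)) == "premium"
```

In practice such placement is policy-driven rather than hard-coded, but the trade-off it encodes is exactly the one in the text: throughput and write intensity pull data toward the accelerated tier, retention requirements push it toward the standard tier.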

S3 compatibility remains non-negotiable. It enables portability across providers and integration with AI tool ecosystems without sacrificing security controls. This flexibility allows organisations to implement "train centrally, infer locally, and deploy globally" strategies - training models in sovereign regions, then distributing through container registries to other compliant locations as needed.

Backup and disaster recovery must be architected into the storage layer from day one. AI datasets cannot be recreated if lost. Geographic replication within sovereign boundaries provides resilience against facility failures without compromising data locality and compliance requirements.

SOVEREIGNTY: SECURITY THROUGH GEOGRAPHIC CONTROL

GDPR, the EU AI Act, and sector or country regulations mandate that certain data types remain within geographic boundaries. For organisations training models on customer information or proprietary research, storage location is no longer a technical detail - it directly impacts both legal compliance and competitive security.

This problem compounds when fine-tuning foundation models with proprietary data, which embeds sensitive information into model weights and parameters. When customised models or training data sit in jurisdictions subject to foreign government access requests, companies create unacceptable exposure - regardless of encryption or access controls.

Sovereign cloud architecture differs fundamentally from traditional private cloud approaches in this respect. Data sovereignty cannot be achieved through policy alone; it demands physical infrastructure with clearly defined and enforceable regional boundaries.

Sovereign storage infrastructure guarantees data locality through architectural design. Data never crosses geographic boundaries unless explicitly authorised. This applies to primary storage, replicas, backups, and checkpoint data. Block volumes, object buckets, and file systems maintain clear regional assignment with controls that prevent movement regardless of API calls or configuration changes.

Administrative policies alone prove insufficient. Storage layers must enforce geographic restrictions at the infrastructure level. Air-gapped deployments ensure that even control plane operations cannot violate sovereignty requirements, while dedicated control planes under customer management eliminate dependencies on external systems that might respond to foreign legal demands.

For European and highly regulated

organisations concerned about US

hyperscaler access to data - and research

shows that approximately 50% of European

CXOs are - sovereign cloud provides security

through physical and legal isolation.

WHY PRIVATE CLOUD ARCHITECTURE

MATTERS FOR AI SECURITY

The concentration of AI infrastructure among

hyperscalers has created dependencies that

many organisations now view as unacceptable

risk. Storage operated by hyperscalers may be

subject to government access requests across

multiple jurisdictions simultaneously. For

competitive sectors, research institutions, or

any organisation handling sensitive data, this

exposure quickly becomes untenable.

Private cloud infrastructure built on sovereign

principles solves this through architectural

separation rather than contractual promises.

Organisations deploy storage in specific

jurisdictions with guaranteed locality, retaining

sovereignty control through dedicated

infrastructure that cannot be accessed by

other tenants, providers, or foreign

governments. At the same time, they preserve

the flexibility to relocate workloads as

regulations or business requirements evolve.

For AI training, this translates into high-performance

NVMe block storage with

sustained throughput for GPU clusters, all

within defined geographic boundaries. For

inference and agentic AI workloads, it means

tiered object storage matched to performance

requirements whilst maintaining security

controls. For compliance, it requires

infrastructure with architectural enforcement of

data sovereignty.

Partnerships between sovereign cloud

providers and AI infrastructure specialists

further strengthen this model. By optimising

data paths between storage and GPU clusters,

these collaborations ensure storage

performance doesn't limit infrastructure

utilisation - while maintaining security and

sovereignty controls throughout the stack.

STRATEGIC STORAGE DECISIONS

DETERMINE AI SUCCESS

Enterprises deploying AI at scale recognise

that storage represents strategic infrastructure

- one that directly determines competitive

advantage. Leading businesses match storage

architectures to workload requirements:

NVMe block storage for training, tiered object

storage for data lifecycle management, and

specialised architectures for vector databases.

Most importantly, they embed sovereignty

through infrastructure design from the

outset. They choose private cloud

architectures that guarantee data sovereignty

through physical isolation and dedicated

control planes. They select providers

capable of delivering production

performance while maintaining complete

data sovereignty. They build on infrastructure

that delivers independence from hyperscaler

ecosystems without sacrificing the GPU

acceleration and cloud-native capabilities

necessary for successful AI innovation.

In healthcare, this enables the training of

diagnostic AI models on patient data without

regulatory exposure. In financial services, it

supports algorithmic trading systems that

process market data within jurisdictional

boundaries. In manufacturing, it allows AI-driven

production optimisation using

proprietary process data. In government and

research, it provides the foundation for

developing sovereign AI capabilities aligned

with national interests.

The storage decisions made today determine

which enterprises scale AI successfully - and

which encounter technical, competitive, or

regulatory bottlenecks. Infrastructure designed

for AI workloads, deployed with sovereign

cloud principles, and engineered for

performance, security and independence

provides the foundation for sustainable

competitive advantage in the AI age.

More Info: www.vultr.com



PARTNER INSIGHT: VIRTUAL EFFECT

A TO Z OF DATA STORAGE NOW

AVAILABLE HOT OFF THE PRESS

THE INDUSTRY'S MUCH LOVED LITTLE BOOK OF STORAGE HAS

JUST TURNED 25, WITH THE POCKET-SIZED GUIDE COMPLETELY

REFRESHED AND PUBLISHED READY FOR THE AI ERA. THE

AUTHOR, AND CHIEF STRATEGY OFFICER AT VIRTUAL EFFECT,

JOHN GREENWOOD TALKS TO STORAGE MAGAZINE ABOUT

WHAT IT TAKES TO KEEP THIS CHERISHED GUIDE RELEVANT,

READABLE AND FUTURE-PROOF.

Storage Magazine's 2025 Specialist

Storage Reseller of the Year, Virtual

Effect, have announced the latest

printed edition of their "A to Z of Data

Storage". It's the essential pocket guide that

has spanned two decades of change, and the

2026 edition is more comprehensive than

ever, with 128 pages of print - all written

without using generative AI. We spoke to

the man responsible for putting it all

together, John Greenwood:

Storage Magazine: Where did the idea for

the Little Book of Storage originally come

from?

JG: "I was at a trade show back in the early

2000s and noticed visitors being handed

stacks of datasheets by vendors. These

ended up in increasingly bulging show bags

and, inevitably, in bins as attendees walked

out of the event. My thinking then was that

if we could produce something that would

fit in the pocket of a customer, it was more

likely to survive not just the day but actually

make it back to their office for long-term

usage and reference. But it also had to be

genuinely an all-encompassing essential

guide - and that was perhaps the bigger

challenge. Several months later, enter the

pocket-sized guide that told newcomers and

seasoned users alike, what backup, HCI,

snapshots and the never-ending storage

acronym lottery actually meant."

Storage Magazine: What problem did the

Book originally set out to solve?

John Greenwood: "There was, and still is,

an assumption that anyone involved in data

storage, whether they work in the industry

or simply use it, has a core understanding

of the technology, terminology, vendors and

topology. I saw so many people enter the

industry and look like the proverbial 'rabbit

in headlights' as they were plunged into

conversations that must have initially

seemed like a foreign language. You're

taught many things academically, but none

include the lessons entitled 'An Introduction

to Data Storage'. The objective of the Book

has always been to provide that foundation

and offer a high-level insight into the data

storage industry."

Storage Magazine: This is the 25th year of

the Book. How has it changed over that

time?

John Greenwood: "Well, it's safe to say the

industry has changed a lot. Storage used to

be a subject that was a true conversation

killer when you were asked what you did for a living.

Today data storage has become more

mainstream. Society now appreciates what

data is and the incredible importance of it,

and how you handle it. The next

generations, with some now entering the

sector, come with their 'data anywhere'

expectations, which have helped fan that

flame. From a corporate perspective, we

hear time and again that data has become

the new oil. Likewise, the euphoria around

artificial intelligence, and all the good and

bad that comes with it, is fuelled entirely by

data and its storage, access and processing

capabilities.

But let's go back to the beginning. As a

business looking to build our brand in the

UK channel, our first iteration was actually

entitled the 'Little Book of Backup', written at

a time when backup was typically the last

job of the week, handed to the lowest-ranking

member of the IT team on a Friday

afternoon as everyone else headed home

or to the pub. Back then it was just 32

pages, and when we printed 250 copies for

a trade show, they all went within two hours

of the doors to the show opening. That's

when we realised what we had achieved

within that little guide. Immediately the

phones started to ring and doors were

opened as we were invited in to talk to data

custodians about a topic that was very close

to home for them."

Storage Magazine: It's come a long way

then! So tell us about the new edition of the

A to Z of Data Storage and what's changed

for the 2026 edition?

JG: "It has been a while since I wrote the

last one, and that is primarily because the

storage industry has not really moved as

quickly as it had previously. You can blame

a global pandemic and uncertain economic

landscapes for that. Things have not stood

still; however, the development of






innovative technology and the vendor landscape has been slower to restart. I liken it to getting the petrol lawnmower out of the shed after a cold winter - it takes a while to get going again.

That said, the most notable shift in this edition is towards AI. The industry is leaning hard into that topic and AI alone is driving a lot of business cases and budgets. If your technology can support that from the ground up, you suddenly become a very attractive vendor to have on the shortlist.

There has also been the emergence of ransomware since 2015, with backup data increasingly a primary target for cyber criminals seeking to stop roll-backs to the point before the attack. The phrases air-gapped and immutability appear frequently in the Book as a result, with a number of our vendor partners now delivering solutions to address this escalation of cyber-attacks.

Hyperconverged infrastructure and virtualisation solutions also feature, largely as a result of the furore that Broadcom have triggered by ruffling the feathers of a previously fairly content VMware audience. The Book covers some of the alternatives for the vSphere and vSAN community.

We also identify some emerging vendors and technologies which could be the next big thing in the industry. The glossary of terms unravels 200 industry acronyms, alongside a fun 'Where are they now' section covering some of the forgotten brands and vendors of the industry."

Storage Magazine: What was the biggest challenge in writing the new edition?

John Greenwood: "Where to draw the line: there are so many vendors that suggest they are in storage, but when you scratch the surface you find that this is merely a marketing angle or that they OEM someone else's technology. It was a challenge to limit it to 150 vendors in all honesty and, having taken six months to write the Book, there had to be a cap on this."

Storage Magazine: We hear there is now a

collectable launched alongside the Book.

What is the Brick IT Hard Drive?

John Greenwood: "Our Brick IT Hard Drive

represents the first of a limited-edition series

exclusively for the Virtual Effect customer

base who have been longstanding

supporters of the Little Book. The idea was born after meeting a customer and noticing that he had a collection of Lego items on the shelf in his office. It is the

ultimate collector's item for anyone that

uses data storage technology on a daily

basis."

Storage Magazine: And how do Storage

Magazine readers get a copy of the A-Z

Data Storage?

John Greenwood: "By visiting

https://virtualeffect.co.uk/sm or using the

QR code below:"





OPINION: RISK ASSESSMENT

THE EXECUTIVE'S 2026 GUIDE TO DATA

FORTIFICATION

ERNIE CRAWFORD, PRESIDENT & CEO, CRAWFORD TECHNOLOGIES DETAILS EIGHT PILLARS OF RISK

ASSESSMENT FOR DATA FORTIFICATION

For organisations that manage millions of records containing Personally Identifiable Information (PII) and Personal Health Information (PHI) alongside financial data, perimeter software and network defences no longer provide adequate protection.

The financial impact of a breach underscores this reality. The average cost of a data breach globally is estimated at around $4.5 million, with the UK and Germany at just under $4 million and the United States at approximately $10 million. Business disruption and operational downtime account for nearly one-third of these total costs. This demonstrates that security failures are no longer just IT hurdles - they are enterprise-wide crises.

ADVERSARIAL PROFILES: WHO IS TARGETING YOUR DOCUMENTS' PII AND PHI?

Today's adversaries operate with unprecedented coordination, affecting three core pillars of UK business:

Operational Resilience: Organised crime gangs (e.g. Hacking Inc.) caused 44% of last year's breaches. Their new tactic involves stealing archives before encryption. This means that data recovery no longer guarantees data privacy or roll-back recovery.

Financial Integrity: Professional "Black Hat" mercenaries remain undetected in corporate networks for an average of 292 days. This gives them ample time to map financial processes and intercept high-value document streams (e.g. unencrypted "hot folders").

Regulatory Compliance: Between nation-states seeking intellectual property and hacktivists aiming to undermine public trust, the risk of a public data leak has become a top-tier liability under UK and EU regulatory frameworks.

THE EIGHT PILLARS OF RISK ASSESSMENT

Business leaders must look beyond the firewall. Document and data security is now a multi-dimensional risk that directly impacts the balance sheet and regulatory standing. If one of these eight pillars is weak, the entire structure is vulnerable.

The Human & Organisational Element

1. Staffing: Organisations with staffing

shortages face breach costs 43%

higher than those with optimised

teams.

2. Behavioural Governance: Employees

are involved in 68% of breaches.

Corporate culture must prioritise

security over the "path of least

resistance."

3. Professional Certification: Third-party

audits such as ISO, SOC 2 and

HITRUST are not optional. They provide

objective evidence that your security

controls actually work.

The Extended Ecosystem

4. Process Security: 30% of breaches

originate in the supply chain. Legacy

processes like unencrypted FTP are

high-risk gaps.





5. The Physical-Digital Bridge: Digital

assets are frequently compromised

through physical entry points. Secure

print facilities and auditable access are

your first line of defence.

Infrastructure & Data Integrity

6. Active Network Validation: Firewalls

are only a baseline level of security.

Networks require continuous

adversarial testing to find logic flaws

before attackers do.

7. Device Firmware: Printers and

scanners are networked computers.

Automated firmware management is

critical to closing hardware backdoors.

8. Persistent Data Protection: With

compromised PII costing an average of

£127 per record, encryption in transit

and at rest is a fundamental financial

safeguard.

STRENGTHENING DEFENSE

THROUGH AI AND AUTOMATION

Transitioning to a predictive defence

requires intelligent tools. Organisations

integrating AI and automation report

cutting breach resolution time by 80 days

and reducing total costs by an average of

$1.9 million USD. Technology rarely fails;

people do. Manual processes and a

reliance on tribal knowledge create

vulnerabilities that standardised protocols

are designed to prevent. Automation

removes these risks by enforcing consistent

compliance checks across every document,

regardless of who is managing the shift.

AI provides proactive threat detection by

analysing document access patterns in real

time. If a user account begins accessing

thousands of records outside of normal

hours, the system recognises the anomaly

and intervenes immediately. This allows

organisations to neutralise threats based

on their behavioural fingerprint rather than

waiting for a post-incident report.
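As a rough illustration of the behavioural detection described above - the `flag_anomalies` helper, its thresholds and its field layout are all hypothetical, not any vendor's API - an off-hours volume check might look like this:

```python
# Hypothetical sketch of behavioural anomaly detection on document
# access logs: flag accounts that pull an unusual volume of records
# outside normal working hours. Thresholds are illustrative only.

from collections import Counter
from datetime import datetime

WORK_HOURS = range(8, 19)      # 08:00-18:59 counts as normal activity
OFF_HOURS_THRESHOLD = 1000     # records per account before we flag it

def flag_anomalies(access_log):
    """access_log: iterable of (account, timestamp, record_count).
    Returns the set of accounts whose off-hours access volume
    exceeds the threshold - candidates for immediate intervention."""
    off_hours = Counter()
    for account, ts, n in access_log:
        if ts.hour not in WORK_HOURS:
            off_hours[account] += n
    return {a for a, n in off_hours.items() if n >= OFF_HOURS_THRESHOLD}
```

A production system would feed this from streaming audit events and act on the flags in real time; the structure of the check is the same.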

BUILDING A PROACTIVE

DOCUMENT SECURITY

FRAMEWORK

To achieve document and data security,

organisations should consider adopting a

data-centric approach to document

lifecycle security.

Dynamic Multi-level Encryption: This

framework establishes your

"Communications Vault," a multi-layered

security architecture designed specifically

for high-volume transactional

environments. A single print file can

contain thousands of PII or PHI records,

making it a high-value target. This

approach closes that vulnerability by

embedding security at every layer of the

system.

File Level: Encrypts the entire batch

during transit and storage to prevent

bulk data theft.

Document Level: Isolates individual

customer records. Securing data at the

document level ensures that a single

compromised record cannot lead to a

systemic breach.

Page Level: This delivers the last line of

defence. Encrypting data at the

individual page level protects sensitive

data through the final rendering and

print-preprocessing stages.
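The file/document/page layering above can be sketched as nested envelope encryption: each page gets its own key, page keys are wrapped under a per-document key, and document keys are wrapped under one file-level key. The sketch is illustrative only - the SHA-256-based XOR stream is a dependency-free stand-in, and a real deployment would use an authenticated cipher such as AES-GCM with managed keys:

```python
# Illustrative key hierarchy for a print batch. The stream cipher is a
# toy placeholder so the example stays dependency-free.

import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream keyed by SHA-256 counter blocks - placeholder only."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt_batch(pages_by_document):
    """Encrypt each page under its own key, wrap page keys under a
    per-document key, and wrap document keys under one file-level key."""
    file_key = os.urandom(32)
    batch = {}
    for doc_id, pages in pages_by_document.items():
        doc_key = os.urandom(32)
        enc_pages = []
        for page in pages:
            page_key = os.urandom(32)
            enc_pages.append((_keystream_xor(doc_key, page_key),  # wrapped key
                              _keystream_xor(page_key, page)))    # ciphertext
        batch[doc_id] = (_keystream_xor(file_key, doc_key), enc_pages)
    return file_key, batch

def decrypt_page(file_key, batch, doc_id, page_index):
    """Unwrap the document key, then the page key, then the page."""
    wrapped_doc_key, pages = batch[doc_id]
    doc_key = _keystream_xor(file_key, wrapped_doc_key)
    wrapped_page_key, ciphertext = pages[page_index]
    page_key = _keystream_xor(doc_key, wrapped_page_key)
    return _keystream_xor(page_key, ciphertext)
```

The point of the hierarchy is containment: compromising one page key exposes one page, not the document, and compromising one document key exposes one customer record, not the batch.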

Intelligent Workflow Consolidation:

Complexity is the enemy of security. Many

communications organisations

manage thousands of disparate

workflows, which are a sprawling maze of

processes that create systemic

vulnerability. These fragmented paths

serve as unsecured back roads to PII

and PHI data, rather than a single,

governed secure entrance.

By leveraging AI and automation, you

can consolidate these workflows into a

single, streamlined highway. This reduces

the potential targets for an attacker and

mitigates the inherent risks of human error

and non-standardised processes.

Operational Guardrails: Building a

resilient defence requires addressing both

the access points and the people

managing them.

Unified Credentials: Organisations

must implement a unified

authentication system requiring a

single, secure credential for every

production tool. This centralises

oversight and ensures that access to

sensitive document streams can be

revoked instantly across the entire

enterprise.

Continuous Training: Human error

remains a persistent vulnerability,

specifically regarding negligent

insiders and third-party vendors.

Implementing regular training is

essential; staff who complete regular

phishing simulations are four times

more likely to identify and report

suspicious activity.
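The unified-credential idea above amounts to a single source of truth for access state. A minimal sketch - `CredentialRegistry` and its methods are hypothetical names, not a real product's API:

```python
# Illustrative sketch of centralised credential oversight: every
# production tool consults one registry, so a single revocation takes
# effect everywhere at once.

class CredentialRegistry:
    def __init__(self):
        self._active = set()

    def issue(self, user: str) -> None:
        """Grant the single enterprise-wide credential."""
        self._active.add(user)

    def revoke(self, user: str) -> None:
        """One revocation removes access across every tool."""
        self._active.discard(user)

    def authorise(self, user: str, tool: str) -> bool:
        """Every tool - print engine, archive, workflow - asks the same
        registry, so access state is never fragmented per system."""
        return user in self._active
```

The design choice is that no tool keeps its own account list: fragmentation is what makes revocation slow and incomplete in practice.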

THE BOTTOM LINE: TRUST IS THE

ULTIMATE CURRENCY

Securing PII, PHI and financial data is

more than a technical challenge; it is a

commitment that extends beyond the IT

department. A document is a critical

touchpoint in the relationship a business

has with its customers. When a document

is compromised, you lose more than

records. You lose the trust that underpins

your brand.

By embracing automation, streamlining

workflows and strengthening your

organisational pillars, you move from

being a target to being a fortress. In an

age where a breach is a matter of when

rather than if, your goal must be to ensure

that every communication remains a

symbol of integrity.

More Info: www.crawfordtech.com



Step into

a thriving

partnership

with an

innovative

industry

leader.

The Seagate Partner Program

helps you grow with the training,

enablement, demand generation,

and marketing support you need

to gain a competitive edge.

As a Seagate partner, you’ll get:

• Access to special pricing and up-front rebates

• Sales tools like deal registration, co-marketing,

and demand generation support

• Training and certifications to boost your expertise

• Access to evaluation units, demo equipment,

and a 30-day free trial of Lyve Cloud

Scan the QR code to explore the benefits per tier:


CASE STUDY: G6 MOTION CONTROL

NAS BRINGS THE WOW FACTOR TO POST-PRODUCTION AT G6 MOTION CONTROL

LIGHTS, CAMERA, ACTION!

SEAGATE STORAGE UNDERPINS HIGH-VOLUME POST-PRODUCTION AND ARCHIVING

Manchester-based G6 Motion

Control is a leading global

visual engineering company

that produces films, commercials, and

music videos. The team specialises in

high-speed motion control capture,

steadicam work, and virtual production.

Their cameras produce extremely large

files that are stored, accessed and

shared, which means high-capacity

storage is a must.

To support their creative process, G6

Motion Control needed a

comprehensive, reliable NAS-based

storage system that provides easy access

to hours and hours of footage.

MAKING INNOVATIVE FILMS AND

VIDEOS VIA HIGH-TECH MOTION

CAPTURE

G6 Motion Control offers advanced

filmmaking tools under one roof to

produce footage that breaks boundaries

and wows the brands and artists they

support. Whether combining digital and

physical worlds (virtual production) or

capturing motion using a robot and

slow-motion camera, G6 Motion Control

produces results that most people think

cannot be achieved.

GROWING DATA LEADS TO NEW

CHALLENGES

As G6 Motion Control expanded their

video-capturing offerings, they

encountered new challenges. As projects

accumulated across multiple external

drives, the production team struggled to

quickly locate footage. This wasted time

was compounded further when multiple

people needed to access the same drive.

They also worried that footage could be

misplaced or lost when shifting files to

different hard drives to optimise space.




BOOSTING EFFICIENCY WITH A

SINGLE STORAGE SOURCE

For peace of mind and improved

workflow, G6 Motion Control decided to

move their systems to a more robust,

long-lasting solution that would help with

performance and growth. The goal was

an all-in-one system, a single location to

store and retrieve all file types, that

allowed the team to edit more efficiently

and archive completed projects. The new

setup also needed to give multiple users

(with multiple computers) quick access.

After extensive research, G6 Motion

Control adopted a solution that checked

all the boxes: the QNAP QuTS Hero (an

8-bay NAS system) and a series of

Seagate drives: eight IronWolf HDDs, two

SSDs, and two M.2 drives.

With this approach, the company can

upgrade the Seagate HDDs or add an

extension box for additional storage. The

flexibility to grow their production system

as their work grows allows G6 Motion

Control to focus on what is most

important: creativity.

STREAMLINING PRODUCTION

TENFOLD

Today, G6 Motion Control has

experienced improved workflow along

with faster editing and rendering. Storing

all projects in one unified place makes it

easy to locate the needed files. Handling

all post-production via the Seagate SSDs

results in faster, smoother editing.

Housing final files on Seagate HDDs frees

up space on the SSDs for the next project.

"Having everything in one reliable NAS

system makes things faster and easier.

We can edit using SSDs while

transcoding and backing up footage to

HDDs," commented Rammy Anwar, G6

Motion Control Ltd.

SEAGATE PRODUCTS IN USE AT G6

MOTION CONTROL LTD:

IronWolf 110 SSD × 2

IronWolf 525 SSD PCIe Gen4 ×4 NVMe × 2

IronWolf HDD 4TB × 8

More Info: www.seagate.com



CASE STUDY: KRYSTAL

KRYSTAL MODERNISES STORAGE TO POWER

HIGH-PERFORMANCE VIRTUAL INFRASTRUCTURE

LEADING UK WEB HOSTING PROVIDER BUILDS KATAPULT IAAS PLATFORM ON STORPOOL SOFTWARE-DEFINED STORAGE TO MAXIMISE PERFORMANCE AND AVAILABILITY

THE SOLUTION: STORPOOL

SOFTWARE-DEFINED STORAGE

Krystal selected StorPool to underpin the

Katapult platform. Key selection drivers

included:

Performance: low-latency, high-throughput

workloads without tuning

Data Protection: triple replication for

durability

Space Efficiency: storage density

improvements and hardware efficiency

gains

Automation: full API-driven integration

into Katapult workflows

Scale-as-you-grow: incremental

capacity without re-balancing disruption

"It is absolutely critical for us that our

storage is both 100% reliable and 100%

available. StorPool has lived up to that

promise," said Alex Easter, CTO at Krystal.

Krystal is one of the UK's largest

independent web hosting providers

supporting nearly 30,000 clients and

hosting over 200,000 websites. Founded in

2002, the company has grown steadily

through a mix of high-touch

support, transparent pricing and in-house

software development.

THE CHALLENGE: A NEW IAAS

PLATFORM DEMANDED MORE

STORAGE PERFORMANCE

Over the last two years, Krystal has been

developing Katapult, a virtual Infrastructure-as-a-Service

platform designed for extreme

performance and consumer-grade

simplicity. The goal was to create a modern

hosting environment for demanding

workloads with predictable performance,

high availability and seamless scale.

Krystal's existing platform used integrated

storage, which was reliable but lacked

flexibility. Resizing drives on demand,

expanding available capacity or rebalancing

data required manual work and downtime

risk, an approach incompatible with Krystal's

next-generation IaaS ambitions.

To support Katapult, Krystal needed to

source a storage platform that delivered:

Consistent low-latency performance

Real-time scalability without rebalancing

Hardware density improvements

Triple replication for data durability

Full API control for automation

Predictable economics at scale

EVALUATION AND TECHNOLOGY

DIRECTION

Krystal evaluated several software-defined

storage solutions as well as traditional SAN

architectures. Software-defined storage

offered a better alignment with the Katapult

philosophy: scale-out performance, deep

automation and full flexibility to scale, build

and operate the infrastructure in-house.

OUTCOME: A STORAGE PLATFORM

DESIGNED FOR GROWTH

With StorPool, Krystal gained a storage

foundation that supports its strategic vision

for Katapult. The platform offers

performance without trade-offs and provides

the operational headroom required to scale

client workloads over time.

"StorPool is a vital component of

Katapult and gives us maximum

performance and reliability with no trade-off,"

added Easter.

From a business perspective, StorPool

allows Krystal to differentiate its IaaS

offering through performance, automation

and transparency, values that mirror the

company's hosting roots.

TECHNOLOGY STACK USED:

AMD EPYC 2 processors

KVM

Katapult platform

50Gbps networking

StorPool Software-Defined Storage

More Info: www.storpool.com



Learn More About StorPool

Keep Your

Business Running

at Full-Speed with

a Future-Ready

HCI Platform

Get the Hyperconverged

Advantage!

StorPool Storage with

Oracle Virtualization

Ultra-Fast Performance

on a Proven Hyperconverged

Cloud Platform

• Full-stack alternative to VMware

• Runs on standard servers. No vendor lock-in.

• Eliminates Storage Silos with mixed HCI environment

• Reduces complexity and total cost of ownership

A blazing-fast, cost-efficient, and reliable alternative for organizations moving away from

VMware. StorPool Storage with Oracle Linux Virtualization delivers an enterprise-grade

stack to power your critical workloads with unmatched simplicity and performance.

Ultra-Fast

Performance

With sub-100µs in-VM

latency, there’s no need to

sacrifice performance or

availability to gain the

advantages of

hyperconvergence.

Simplified

Operations

Speed up VM deployments

with thin-clone snapshots.

Minimize storage usage, and

automate business continuity

with built-in backup and DR

functionality in the storage.

Seamless

Integration

The StorPool plug-in contains no

legacy code, has no external

dependencies, and is decoupled

from Oracle Virtualization

components - upgrade one

without affecting the other.

info@storpool.com


ROUNDTABLE: 2026 PREDICTIONS

2026: THE YEAR THAT

STORAGE ENTERS ITS

"STRATEGIC ERA"

FROM SUSTAINABILITY TO RESILIENCE OPERATIONS AND

DISTRIBUTED AI TEAMS, OUR 2026 PREDICTIONS ROUNDTABLE

POINTS TO THE DATA STORAGE STACK BECOMING THE

BACKBONE OF DIGITAL DECISION-MAKING.

In our first industry roundtable of

2026, Founders, CEOs and CMOs

outline the forces shaping the storage

landscape in their market, as AI

acceleration, sustainability economics

and resilience operations converge.

Artificial intelligence seems to be the

catalyst, but storage is becoming the

constraint, enabler and differentiator, in

equal measure! As the industry prepares

for another year of rapid AI adoption,

long-term digital preservation,

distributed data architectures and

resilient fast recovery from cyber

disruption are emerging as the

battlegrounds for innovation.

2026 PREDICTIONS

Prediction 1: "AI elevates the value of

data": Dave Mosley, Chairman and

CEO, Seagate Technology

Seagate's leadership team is predicting a

year defined by data becoming the

ultimate creative currency, reshaping

how people create, build and run

organisations.

We are at a pivotal moment in the

history of humanity, one in which tech

advancements are sparking and scaling

creativity in a way we have never seen

before. As AI platforms put the power of

data into the hands of everyone,

everywhere, we will see a creativity

renaissance where data fuels

unprecedented organisational




"2026 will mark the beginning of a major architectural

shift: agentic AI systems will merge with distributed file

services to create AI digital teams that can autonomously

capture data, act on it and push results across multiple

locations and platforms." - Jimmy Tam, CEO, Peer Software

performance and accelerates innovation

in every industry as well as boardrooms,

factory floors, emergency rooms and

laboratories.

More video content, in particular, will

be created in 2026 than at any time in

history, placing enormous importance on

how data is curated, stored, shared and

re-stored. And with that comes an

imperative for all of us in 2026: treat

every byte of data as if it has value,

because it does.

Prediction 2: "Distributed AI drives

architectural change": Jimmy Tam,

CEO, Peer Software

Agentic AI will converge with distributed

file services to enable a new class of

distributed digital teams.

2026 will mark the beginning of a

major architectural shift: agentic AI

systems will merge with distributed file

services to create AI digital teams that

can autonomously capture data, act on it

and push results across multiple locations

and platforms. As organisations deploy

distributed AI agents at the edge, in the

cloud, and across data centres, they will

realise the missing piece is the ability to

move information seamlessly and

intelligently between those agents. The

convergence of agentic AI and distributed

file services will become essential for

orchestrating workflows, sharing context

and ensuring AI agents can collaborate

in real time across heterogeneous

environments.

DISTRIBUTED STORAGE WILL BECOME

A STRATEGY FOR LOAD-BALANCING

DATA, ENERGY USE AND GPU COSTS

As GPU scarcity, energy prices and

power-availability constraints intensify,

organisations will turn to distributed

storage architectures to balance not just

data, but operational costs and

resources. In 2026, storage and

infrastructure decisions will increasingly

factor in electricity rates, regional

resource availability, latency impacts and

GPU scheduling considerations.

Instead of concentrating workloads in a

single region or cloud, enterprises will

distribute data and compute to optimise

for cost efficiency and sustainability -

shifting data to where it is cheapest and

most energy-efficient to run AI workloads.
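As a toy illustration of that placement logic - every name and figure below is invented for the example - a scheduler might score candidate regions on electricity price, GPU availability and latency, then place the workload in the cheapest viable region:

```python
# Illustrative sketch: pick the cheapest region that still has free GPUs
# and meets the workload's latency bound. All inputs are hypothetical.

def best_region(regions, max_latency_ms):
    """regions: dict of name -> (price_per_kwh, gpus_free, latency_ms).
    Returns the cheapest viable region name, or None if nothing fits."""
    viable = {name: price
              for name, (price, gpus, lat) in regions.items()
              if gpus > 0 and lat <= max_latency_ms}
    return min(viable, key=viable.get) if viable else None
```

The interesting part is the constraint set: once energy price and GPU scarcity enter the placement decision alongside latency, data naturally disperses instead of concentrating in one region.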

AI CONSOLIDATION WILL ACCELERATE,

DRIVING A WAVE OF M&A FOCUSED

ON INTEGRATING DISPARATE SYSTEMS

Large vendors will aggressively acquire

smaller AI, data and edge-platform

companies to accelerate capabilities,

expand ecosystems and simplify customer

adoption. The real challenge will be

integrating the disparate systems these

acquisitions bring. Companies that can

rapidly harmonise data, metadata and file

services across newly merged

environments will be the ones that deliver

value fastest.

Prediction 3: "AI moves from hype to

pragmatism": Terry Storrar, Managing

Director, Leaseweb UK

I see AI's trajectory shifting from the initial

explosive hype to pragmatic growth.

While the early boost and surges in

investment have created inflated

expectations, the coming year will see the

focus shift from speculative hype to more

tangible value. This will mean businesses

reassessing their goals and focusing on

prioritising AI initiatives that enhance

efficiency and customer engagement,

such as applied machine learning or

agentic automation.

www.storagemagazine.co.uk

@STMagAndAwards Jan/Feb 2026

STORAGE

MAGAZINE

25


ROUNDTABLE: 2026 PREDICTIONS

"Companies are recognising that AI is not going to

replace humans but rather complement their

existing skill sets." - Terry Storrar, Managing Director, Leaseweb UK

This stabilisation in the market will

impact infrastructure as well. Many data

centres were built for AI-heavy workloads,

so a slow-down in AI growth might mean

excess capacity emerging. This presents an

opportunity for the industry to rebalance

capacity towards more moderate,

diversified workloads, integrating

sustainability and energy efficiency.

Importantly, the human aspect will

remain a focus. There is growing

scepticism and fears around AI's impact

on jobs. Companies are recognising that

AI is not going to replace humans but

rather complement their existing skill sets.

As ethical and workforce considerations

develop further, the industry will move

towards a more sensible view on AI where

automation empowers, not replaces,

human potential.

Prediction 4: "AI literacy becomes a

competitive differentiator": Charis Thomas,

Chief Product Officer, Aqilla

2026 will be the year when a new kind of

tech literacy becomes essential.

Remember how we once had to learn to

search the internet effectively by refining

queries, judging sources and

understanding how information was

surfaced? We will now need to learn how

to interact with AI in the same way. This

prompt literacy is not about tricks or

shortcuts; it is the modern equivalent of

learning how to research properly. As with

all computing scenarios, the quality of the

question shapes the quality of the answer.

Or more colloquially, put rubbish in and

you will get rubbish out. Developing that

skill across teams will be every bit as

important as the technology itself.

Alongside this, the long-running

convergence of search engines and AI is

becoming far more visible. Large

language models are now deeply woven

into search, and search is increasingly

feeding back into AI systems. It is a

powerful combination, but it makes

traceability and underlying data even

more important. LLMs excel at giving you

an answer, often the next logical answer.

But discovery and verification still matter.

The foundations behind every insight

remain relevant and will be even more so

over the next 12 months.

So as 2026 unfolds, the dividing line

will not be between organisations that

use AI and those that do not. It will be

between those who apply it thoughtfully

and those who let AI guide them

unquestioningly. The advantage will sit

with teams that embrace transparency,

develop prompt literacy, apply sound

judgement and maintain control as they

automate. AI will not run the function.

But it will elevate the people who do.

Prediction 5: "Sustainability aligns with

cost and preservation": Martin Kunze,

CMO and Co-Founder, Cerabyte

Sustainability is shifting from a purely

ESG/SDG talking point to a CFO-level


cost discussion.

Current mainstream technologies

depend on periodic media replacement

and data migration, and continuous

energy consumption (for both storage

systems and their environments) to

maintain the data stored.

As a result, long-term TCO is dominated

by media costs and power. A technology

that avoids both frequent replacement

and ongoing energy consumption can

dramatically reduce TCO by shrinking

both the environmental and financial

footprint. This is why sustainability

considerations are increasingly aligned

with cost optimisation rather than being

treated as an afterthought.

DATA STORAGE: FROM COMMODITY

TO MISSION-CRITICAL - AI'S FUTURE IS

INSEPARABLE FROM THE FUTURE OF

STORAGE

AI is turning data storage from a

ubiquitous commodity into a strategic

bottleneck and therefore a differentiator.

The next wave of AI depends not just on

more data, but on data that can be

retained economically and accessed at

scale for repeated analysis in a timely

fashion. Incumbent tiers force an

uncomfortable trade-off: either

performance at a high cost, or low cost

with constraints in access or throughput.

What is emerging is a new requirement

between "hot" and "deep archive", a tier

with high read performance, second-level

latency and a cost structure meaningfully

below HDD for massive AI datasets.

Without such a scalable, economically

viable storage layer, AI adoption hits a

ceiling because models and agents can

only be as capable as the data they can

quickly and repeatedly retrieve, reprocess

and learn from. This is why new storage

technologies like Cerabyte are becoming

foundational infrastructure for AI, not just

optional improvements.

DIGITAL PRESERVATION GOES

MAINSTREAM FROM VERTICAL TO

HORIZONTAL

Traditionally, "long-term storage" was

seen as a niche vertical focused on

archives, cultural heritage or specific

regulated industries. That view has now

considerably changed.

Today, almost every sector generates

data that is effectively kept for 10+ years,

often by default rather than by design. As

a result, long-term data storage has

become a requirement cutting across

industries horizontally: finance,

healthcare, media, automotive, public

sector, research and many others.

This broader market cannot be served

efficiently by incumbent technologies

alone. They struggle to simultaneously

meet the required cost per stored

TB-year, the access paradigm and the

sustainability requirements.

New technologies are increasingly

needed to deliver both economic and

environmental viability at scale.

A telling signal is that AWS, the first

and largest hyperscale data centre

operator to do so, sponsored the dinner

this year at iPRES, the leading digital

preservation conference, joining

traditional sponsors such as archival

software vendors and on-prem tape

providers. This indicates

that the understanding of "long-term"

digital preservation has moved from a

narrow archival niche into a mainstream


business capturing the interest and

attention of the largest cloud and

infrastructure providers.

Prediction 6: "Pragmatism reshapes

modernisation": Susan Odle, CEO,

StorMagic

Companies are slow-rolling IT

modernisation initiatives to make sure

their systems and vendor selections are

designed to be flexible enough to hold up

under financial pressure. AI is a good

example. The potential of AI to transform

a business is not overhyped, but most

organisations lack the data maturity to

leverage it effectively. Companies are

slow-rolling investments in AI-enabling

hardware if they are not ready at the

application and data layers. The cost

impact is too great if the business benefits

are not clear. It is no longer a race to the

wild west.

More than ever, leadership means

showing up, explaining decisions,

listening without defensiveness and

engaging customers and their trusted

channel partners as part of the process.

The technical foundation still matters, but

trust is what determines whether that

foundation holds.

The companies that will stand out in

2026 will be led by people who

communicate clearly, act with consistency

and build relationships that endure

through change.

Prediction 7: "Resilience becomes a frontline

imperative": Jim McGann, CMO,

Index Engines

In 2026, the critical storage metric will

shift from capacity or performance to

time-to-detect corruption. Faster

detection means smaller blast radius,

quicker recovery and reduced business

impact. The difference between

discovering corruption in minutes versus

days could determine whether you

recover from the most recent clean

snapshot or lose weeks of critical data.

Storage platforms will adapt with

continuous byte-level integrity

monitoring, AI-powered detection of

subtle data changes, immutable

snapshots with pre-restore verification

and automated isolation when corruption

is detected.
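The detection layer described here can be sketched very simply: keep a checksum manifest of backup files and compare it against a fresh scan before any restore is attempted. The Python below is an illustrative sketch only, not any vendor's implementation; the manifest format and function names are assumptions made for the example.

```python
import hashlib
from pathlib import Path

def checksum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large backups never load fully into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_corruption(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Compare a stored checksum manifest (name -> digest) against a fresh scan.
    Any file whose digest drifted, or that vanished, is flagged for isolation
    before a restore is attempted."""
    return sorted(
        name for name, digest in baseline.items()
        if current.get(name) != digest
    )
```

In practice such a scan would run continuously and feed an alerting pipeline; the point of the sketch is that "time-to-detect" is governed by how often the comparison runs, not by the comparison itself.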

The defining vendor question will

become: How fast can you detect an

attack and confirm which data is clean

for recovery? Organisations that can

answer in minutes, not days, will

dominate the market.

Ransomware is evolving. Storage

resilience must be measured by detection

speed. If your system cannot alert you

the moment data integrity is

compromised, you are already too late.

Prediction 8: "Recovery defines resilience":

Martin Gittins, Area Vice President for

North Europe, Commvault


"It is critical that in 2026, businesses take the

steps necessary to protect their Active Directory,

often known as the 'keys to the kingdom'."

- Martin Gittins, Area Vice President for North Europe, Commvault

Traditional approaches to resilience are

no longer enough in the age of AI. With

data being generated at unprecedented

rates and agents making decisions with

little human oversight, security, identity

and recovery - too often treated as

separate issues and split between teams -

must be brought together.

This approach creates a new category,

Resilience Operations (ResOps): a

discipline that will define 2026 by

rearchitecting resilience for the modern

enterprise and managing it across

increasingly complex and emerging AI

environments.

One of the biggest trends we have seen

in 2025 is the exploitation of Active

Directory by cybercriminals to gain access

to all data, systems and applications. It is

the lifeblood of businesses, and therefore

an obvious target. It is for this reason that

identity plays an important role in

ResOps. It is critical that in 2026,

businesses take the steps necessary to

protect their Active Directory, often known

as the 'keys to the kingdom'.

The latest solutions deliver greater

automation that makes this more

achievable than ever before. From

detecting weaknesses and threats across

Active Directory and logging any changes

automatically, to rolling back any

unwanted changes at speed before they

can impact the whole system, today's

solutions can provide the automated

support that IT leaders need to keep their

organisations safe and accessible.

But it is important to recognise that as

cyberattacks are now inevitable, there

must be equal focus on the recovery

process and making it as clean and

complete as possible. Research shows

that 94 per cent of ransomware attacks

attempt to compromise backup storage,

meaning that businesses are at risk of

restoring affected backups, re-injecting

malware into their environment and

simply prolonging the disruption.

There needs to be a greater focus in the

next 12 months on deploying tactics that

both prevent backups being compromised

and prevent organisations accidentally

restoring them if they have. AI has a vital

role to play in this, enabling IT teams to

quickly identify and analyse suspicious

files, and even automatically detect

threats in backups during the recovery

process and remove them without

damaging the 'good' data.

As technology advances enable us to

make smarter decisions about our data,

let 2026 be the year that we fight back

against the cybercriminals - and win.

EDITOR'S ROUNDTABLE SUMMARY

If one theme unites this year's predictions,

it is the expansion of storage from an

infrastructure decision to an

organisational one. Resilience, AI maturity

and long-term data stewardship are

converging, creating new expectations for

storage vendors and new opportunities

for those able to innovate with clarity.

Buckle up for another exciting year!

More Info:

www.seagate.com

www.peersoftware.com

www.leaseweb.com

www.aqilla.com

www.cerabyte.com

www.stormagic.com

indexengines.com

www.commvault.com


OPINION: DATA GRAVITY

HOW TODAY'S CLOUD STORAGE STRATEGIES ARE

KEY IN ADDRESSING DATA GRAVITY

TERRY STORRAR, MANAGING DIRECTOR, LEASEWEB UK DETAILS WHY TACKLING DATA GRAVITY IS

NOW CENTRAL TO ACHIEVING CLOUD OPTIMISATION AND COMPETITIVE INNOVATION

The economic impact of data-driven

businesses is huge, with both the

volume and value of data continuing

to grow at staggering rates. While it is

difficult to put a single definitive value on

the world's data, AI-driven data markets

alone are estimated to contribute up to

$15.7 trillion per year to global

GDP by 2030. Managing

and storing this data in an

organised way is a major hurdle

in itself. Adding the

applications and services that

naturally orbit around it

presents a data gravity

challenge that organisations

should address as a priority

part of their cloud strategies.

Although the concept of data

gravity is not new, first referenced

in 2010 by technologist Dave

McCrory, its impact is becoming far

more visible in the way organisations

attempt to manage their data. The speed

at which digital businesses gather data is

accelerating rapidly, attracting more data

along with the components that

contribute to system performance.

Over time, the largest data

masses develop a

strong

gravitational pull for everything else, making

it complex and laborious to prise apart,

reorganise and store in a logical structure. It

also becomes harder, riskier and more costly

to move away from a single cloud provider.

This situation is not insurmountable, but it is

significant enough to prompt some

businesses to postpone cloud transformation

or to make only small tactical changes to

their cloud infrastructure. Neither option

delivers long term efficiency. Instead,

businesses are better advised to address data

gravity before it becomes overwhelming and

leads to vendor lock-in that is simply too

costly or impractical to reverse.

A FORCE WITH WIDESPREAD IMPACT

More often than not, the pull of data gravity

works against what IT professionals want

from their cloud services: flexibility,

transparency and cost efficiency.

The convenience of holding data,

applications and workflows with a single

provider can be limiting and even detrimental

to shaping the best cloud strategy. It can

increase exposure to spiralling costs, egress

fees and service outages. Importantly,

migrating workloads to other cloud providers

or repatriating data on-premise can be slow,

complicated and expensive.

At a strategic level, data gravity can

adversely affect competitiveness and the

ability to adapt and innovate, because the

cloud infrastructure and storage design

underpinning key initiatives is not optimised

to support business objectives. In this

scenario, data gravity becomes a barrier to

change and prevents the agility and


innovation that cloud services should enable.

Another practical concern for IT managers

is that data and workloads lodged with just

one provider are more vulnerable to

cyberattacks, outages or sudden price

changes. If something impacts the data held

with a single provider, an organisation may

find itself without an alternative option. This is

precisely why many businesses are

introducing resilience plans with redundancy

across multiple providers to guarantee

replication and back up for critical data.

The rapid growth of AI workloads is also

exacerbating data gravity. AI data needs to

sit close to high performance compute

resources, which reinforces the gravitational

pull toward a single environment and

increases the risk of long-term lock-in.

BUILD UNDERSTANDING OF DATA

TO INFORM STORAGE

For businesses aiming to escape data

gravitational pull, the first step is to

evaluate how data is managed throughout

its lifecycle. Understanding what data

exists, where it is located, what its

purpose is, and how frequently it is used,

builds a central picture of where the

heaviest data sets sit and where data

gravity may be problematic.

From here, it becomes far

easier to segment data and

allocate workloads to the

most suitable storage

environments: high

performance cloud for

valuable and frequently

accessed data, lower cost

storage for lower priority

data, or on-premise storage for

very sensitive data. Not all data is

created equal, and this approach

sheds light on how to organise data

logically, deploy backups and reduce the

risk that everything becomes bound to a

single system in the future.
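As a rough illustration of that segmentation step, the rules above (sensitivity first, then access frequency) could be expressed as a simple classifier. This is a hypothetical sketch: the threshold value and tier names are placeholders for the example, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    accesses_per_month: int   # how often the data is actually read
    sensitive: bool           # e.g. regulated or personal data

def assign_tier(ds: Dataset, hot_threshold: int = 100) -> str:
    """Map a dataset to a storage environment per the lifecycle review above.
    The 100-accesses-per-month threshold is an illustrative placeholder."""
    if ds.sensitive:
        return "on-premise"                # keep very sensitive data in-house
    if ds.accesses_per_month >= hot_threshold:
        return "high-performance cloud"    # valuable, frequently accessed data
    return "low-cost cloud storage"        # lower-priority data
```

Running every catalogued dataset through such a rule produces the "central picture" the lifecycle review aims for: a map of where the heaviest, hottest and most sensitive data actually sits.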

HYBRID MODELS PROVIDE CHOICE

AND FLEXIBILITY

It is no surprise that hybrid models are

becoming the default approach to business

operations. A key benefit is that data and

workloads are spread across multiple

environments, using a blend of services

from more than one provider. This may be

a multi-cloud environment combining

public and private cloud, or a mix of cloud

and on-premise infrastructure. Hybrid

provides choice and control for IT teams

and strengthens business continuity. If one

cloud provider suffers an outage, critical

data sets can be retrieved from alternative

storage environments.

However the hybrid model is designed, mapping

environments to workload requirements and

paying close attention to storage design are

effective ways to mitigate data gravity and

build an IT setup that is ready for future

change. Businesses can also take

advantage of private networking, cloud

exchanges and open standards to facilitate

moving data and workloads between

environments. Governance frameworks

within modern cloud services also help

organisations meet compliance standards

by providing better visibility of what data is

stored and where.

HERE TO STAY

Data gravity is both a technical and

strategic challenge. It is not going away,

and digital businesses must find ways to

manage it without losing the performance

and efficiency benefits of cloud. This may

involve AI-based data management tools,

reorganising data storage, reviewing

continuity and redundancy provision, or

embracing a hybrid IT model. The

organisations that recognise the long-term

implications of data gravity and respond by

reshaping their cloud and storage

infrastructures will be best prepared for the

data-driven years ahead.

More Info: www.leaseweb.com/en/


MANAGEMENT: AI-CENTRIC STORAGE

LEARNING FROM PREVIOUS FLASH INDUSTRY

SHUTDOWNS TO PREDICT RECOVERY CYCLES

GAL NAOR, CEO, STORONE DETAILS WHY STORAGE ARCHITECTURES MUST ADAPT TO SLOWER FLASH

RECOVERY CYCLES AND PERSISTENT PRICE PRESSURE

The current flash price crisis is often

described as a short-term supply

imbalance that will correct itself

once manufacturers respond. According

to this view, higher prices will incentivise

new capacity, production will ramp, and

pricing will normalise within a few

quarters.

HISTORY SUGGESTS

OTHERWISE.

Past fab closures in the

NAND industry show

that production

capacity does not

return quickly after

demand-driven

downturns. When

capacity exits the

market, it rarely

comes back on a

relevant time

horizon. As a

result,

expectations of

rapid normalisation

are largely

speculative and

increasingly

disconnected from how

the industry actually

behaves.

FAB CLOSURES ARE

STRUCTURAL, NOT

TEMPORARY

A clear

example is Intel's exit from NAND

manufacturing. Between 2018 and 2020,

Intel gradually withdrew from NAND

production after years of price pressure,

volatile demand, and shrinking margins.

This was not a response to a brief market

dip, but a strategic decision shaped by

the economics of fab operations: massive

capital requirements, long payback

periods, and extreme exposure to demand

swings. Once Intel stepped back, the

associated production capacity did not

pause; it effectively left the flash market.

The Utah fab previously associated with

Intel's NAND activity was later sold and

repurposed for analogue semiconductor

manufacturing. That capacity was

permanently removed rather than paused.

This illustrates a critical point:

semiconductor fabs are not modular

assets that can be stopped and restarted

on demand. Fabs are highly specialised

systems built around specific process

technologies, equipment tuning, and

accumulated operational knowledge.

When a fab exits a technology class, the

expertise, supplier alignment, and process

maturity are lost. Recreating that

capability does not happen with a restart,

it requires a full reinvestment cycle.

INTEL WAS NOT AN OUTLIER

Intel's decision reflects a broader industry

pattern. Across multiple NAND cycles,

manufacturers that reduced or exited

capacity during periods of weak pricing

have been slow to return, even when

prices later recovered. This behaviour is



shaped by hard-learned lessons

from past oversupply cycles.

Overbuilding memory capacity

has repeatedly destroyed value.

As a result, management teams

today require far more than

short-term price signals before

committing to new fabs. They

seek sustained demand

visibility, long-term margin

confidence, and evidence

that current conditions are

not simply another

transient spike. Capital

discipline is no longer

situational; it has become

a permanent operating

principle.

WHY RISING PRICES

DO NOT TRIGGER

IMMEDIATE NEW

CAPACITY

Even in a rising price

environment, the

economics of building new

flash fabs remain

unattractive.

A new fab requires

multiple years of work

before producing

meaningful output. It

demands high utilisation

over long periods to justify

returns and exposes

manufacturers to the risk

that pricing weakens

before the investment pays

off. Short-term price

increases do not offset

decade-long capital

exposure. As a result,

manufacturers are reluctant

to respond aggressively to

price signals that may not

persist. Capacity decisions are

made based on expectations

years into the future, not current

spot prices. In practical terms, this

means that supply expansion lags

demand by years, not quarters.

FROM CYCLICAL VOLATILITY TO

STRUCTURAL CONSTRAINT

The flash market is often described as

cyclical, but the industry's behaviour has

changed. Instead of rapid capacity

responses smoothing out volatility, today's

market is shaped by persistent underreaction.

Capital restraint has transformed price

cycles. When demand rebounds faster

than capacity, shortages persist. What

once might have been a short-lived

imbalance increasingly becomes a multi-year

constraint. In that context, the

current flash price environment should

not be viewed as an anomaly. It is a

predictable outcome of an industry that

has internalised the cost of

over-investment and now prioritises caution

over expansion.

STRATEGIC IMPLICATIONS FOR

STORAGE ARCHITECTURE

The broader conclusion is strategic, not

tactical. Organisations can no longer

plan storage infrastructure under the

assumption that flash prices will quickly

return to historical norms. Flash

dependency has become a systemic risk.

Architectures built on the premise of

abundant, cheap flash assume a level of

price stability that recent history does not

support.

Long-term planning now requires

reducing reliance on flash-only designs

and treating flash as a scarce, premium

resource rather than a default tier.

Systems that incorporate intelligent data

placement and economically efficient

tiers are no longer just a cost

optimisation. They are a prerequisite for

resilient infrastructure planning in a

market where capacity returns slowly and

price pressure can persist far longer than

expected.
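One way to picture "flash as a scarce, premium resource" is a placement policy that spends a fixed flash budget on the hottest data first and sends the rest to an economical tier. The sketch below is purely illustrative: the heat metric (reads per GB) and tier names are assumptions made for the example, not a description of any product.

```python
def place_by_heat(datasets, flash_capacity_gb):
    """Greedy placement under a flash budget.
    `datasets` is a list of (name, size_gb, reads_per_day) tuples.
    The hottest data (most reads per GB) gets scarce flash until the
    budget runs out; everything else lands on an economical tier."""
    placement = {}
    remaining = flash_capacity_gb
    for name, size_gb, reads in sorted(
        datasets, key=lambda d: d[2] / d[1], reverse=True
    ):
        if size_gb <= remaining:
            placement[name] = "flash"
            remaining -= size_gb
        else:
            placement[name] = "hdd/object tier"
    return placement
```

The design choice is the point: when the flash budget is treated as fixed rather than elastic, price volatility changes how much data qualifies for flash, not whether the architecture still works.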

More Info: www.storone.com


COMING SOON!

For attendance and sponsorship

enquiries please contact:

stuart.leigh@btc.co.uk or lucy.gambazza@btc.co.uk

WWW.STORAGE-AWARDS.COM


Ransomware-Proof

Backups with Ootbi

Ootbi (Out-of-the-Box Immutability) delivers secure, simple, and powerful backup storage for Veeam

customers with no security expertise required.

In a world where businesses fall victim to ransomware every 11 seconds and 93% of attacks are targeting

backups, Ootbi by Object First helps make backup data ransomware-proof.

3 Reasons Why Ootbi Is the Best Storage for Veeam

Secure

• S3 out-of-the-box immutability

• A hardened storage target with

zero access to the root

• Separation of the Backup

Software and Backup Storage

layers according to Zero Trust

best practices

Simple

• NO security expertise required

• 15 minutes to deploy and scale

• Updated and optimized

automatically by Object First

Powerful

• Lightning fast backup

up to 8 GB/s

• Supercharged Instant Recovery,

capacity, and performance scale

linearly

• Use of standard Veeam

block size and encryption

Eliminate the need to sacrifice performance

and simplicity to meet budget constraints

with Ootbi by Object First.

Learn More About

the Best Storage

for Veeam
