STORAGE

MAGAZINE

The UK’s number one in IT Storage

May/June 2021

Vol 21, Issue 3

NAS VS. SAN:

The future of shared storage

INTERNET OF THINGS:

Where will all the data go?

STORAGE TIERING:

Have your cake and eat it

DR-AS-A-SERVICE:

A primer for SMBs

COMMENT - NEWS - NEWS ANALYSIS - CASE STUDIES - OPINION - PRODUCT REVIEWS


Keep ahead of the

ransomware waves!

Ride with balance and poise.

Over the horizon, ransomware attacks just keep on coming,

sometimes daily, wave after wave.

From ransomware, unpredictable data capacity needs and

compliance requirements to high standards of data availability,

security and fast recovery, we equip our customers and

partners to meet the future head-on, with modular, flexible,

and future-proof data management and business continuity

solutions for the next generation of hybrid data centers.

When prevention fails, StorageCraft protects your data!

One vendor, one solution, total business continuity.

www.StorageCraft.com

WHERE YOUR DATA IS ALWAYS SAFE, ALWAYS ACCESSIBLE, ALWAYS OPTIMIZED


CONTENTS


COMMENT….....................................................................4

Backup is not just for March 31st

STRATEGY: HARD DISK DRIVES…..…….........................6

The amount of data grows globally by several billion terabytes every year as more

machines and devices generate data. Even in the age of IoT, argues Rainer Kaese of

Toshiba, HDDs remain indispensable


INTERVIEW: INFINIDAT….............................................…8

Storage magazine editor Dave Tyler caught up with Phil Bullinger, Infinidat's new CEO,

to discuss the challenges of taking on a new role in the middle of a global crisis

CASE STUDY: TOTAL EXPLORATION & PRODUCTION.......10

Total has migrated its offshore oil and gas platforms to a hyperconverged

infrastructure that delivers space efficiencies along with improved replication and DR

RESEARCH: DATA PROTECTION………...............................12

New research shows 58% of backups are failing, creating data protection challenges

and limiting the ability of organisations to pursue Digital Transformation initiatives


CASE STUDY: SAVE THE CHILDREN SPAIN………......14

Non-profit organisation Save the Children Spain has reduced the costs of its backup and

simplified disaster recovery thanks to NAKIVO Backup & Replication

MANAGEMENT: STORAGE TIERING…………............................16

The ideal 'storage cake' has three equally important tiers, argues Craig Wilson, Technical

Architect at OCF

CASE STUDY: IMPERIAL WAR MUSEUMS……............18

Lifecycle Management Software scans and moves inactive data from primary storage to

a 'perpetual tier' for long-term access and archival


OPINION: DISASTER RECOVERY AS A SERVICE…...20

As more and more SMBs are attracted to Disaster Recovery as a Service, Florian Malecki

of StorageCraft outlines the key requirements of a DRaaS solution - what is important,

and why

CASE STUDY: TAMPERE VOCATIONAL COLLEGE TREDU..22

A college in Finland has implemented a hybrid solution that offers a unified portal for

storage and backup of multiple services, eliminating the need to jump from one

application to another

ROUNDTABLE: BACKUP……......................................…24

This year's World Backup Day has come and gone, but it might be the lingering impact

of Covid-19 that has a deeper effect on organisations' backup thinking. Storage

magazine gathered the thoughts of industry leaders


MANAGEMENT: INTERNET OF THINGS……...............28

From medical wearables through search-and-rescue drones to smart cities, CheChung

Lin of Western Digital describes ways to optimise ever-growing volumes of IoT data

using purpose-built storage

CASE STUDY: CANAL EXTREMADURA……............…30

Spanish TV network Canal Extremadura has revamped its IT infrastructure with Quantum

StorNext File System software and tape solutions

TECHNOLOGY: NAS……………….....................................32

With the greater performance, functionality and ease of use of NAS, it is increasingly

hard to justify the need for a SAN in modern creative workflows, argues Ben Pearce of

GB Labs


OPINION: CLOUD STORAGE…..................................….34

The Covid-19 pandemic has forced businesses to evolve quickly and adjust to the new

working dynamic - but some have been better prepared than others, explains Russ

Kennedy of Nasuni

www.storagemagazine.co.uk @STMagAndAwards May/June 2021

STORAGE

MAGAZINE

03


COMMENT

EDITOR: David Tyler

david.tyler@btc.co.uk

SUB EDITOR: Mark Lyward

mark.lyward@btc.co.uk

REVIEWS: Dave Mitchell

PRODUCTION MANAGER: Abby Penn

abby.penn@btc.co.uk

PUBLISHER: John Jageurs

john.jageurs@btc.co.uk

LAYOUT/DESIGN: Ian Collis

ian.collis@btc.co.uk

SALES/COMMERCIAL ENQUIRIES:

Lyndsey Camplin

lyndsey.camplin@storagemagazine.co.uk

Stuart Leigh

stuart.leigh@btc.co.uk

MANAGING DIRECTOR: John Jageurs

john.jageurs@btc.co.uk

DISTRIBUTION/SUBSCRIPTIONS:

Christina Willis

christina.willis@btc.co.uk

PUBLISHED BY: Barrow & Thompkins

Connexions Ltd. (BTC)

35 Station Square, Petts Wood

Kent BR5 1LZ, UK

Tel: +44 (0)1689 616 000

Fax: +44 (0)1689 82 66 22

SUBSCRIPTIONS:

UK £35/year, £60/two years,

£80/three years;

Europe: £48/year, £85/two years,

£127/three years;

Rest of World: £62/year

£115/two years, £168/three years.

Single copies can be bought for £8.50

(includes postage & packaging).

Published 6 times a year.

No part of this magazine may be

reproduced without prior consent, in

writing, from the publisher.

©Copyright 2021

Barrow & Thompkins Connexions Ltd

Articles published reflect the opinions

of the authors and are not necessarily those

of the publisher or of BTC employees. While

every reasonable effort is made to ensure

that the contents of articles, editorial and

advertising are accurate, no responsibility

can be accepted by the publisher or BTC for

errors, misrepresentations or any

resulting effects.

BACKUP IS NOT JUST FOR

MARCH 31ST

BY DAVID TYLER

EDITOR

Following on from last issue's thought-provoking and occasionally controversial

interview with Eric Siron about backup, this time around we are pleased to

feature an industry roundtable article developed in the weeks after World Backup

Day. The storage sector has tried hard to make March 31st a significant date to

remind people and organisations of the vital importance of backup, but it could

perhaps be argued that by putting it on the same level as Star Wars Day or St.

Swithin's Day, we are instead trivialising data protection and distracting users from the

key point that, in fact, backup is something we should be constantly thinking about -

and regularly testing!

As Nick Turner of Druva says in the article: "Whilst we've celebrated a decade's worth

of World Backup Days, this past year has tested the ability to protect business data like

no other." If something as world-changing as Covid-19 can't make us focus on

protecting critical assets, what might?

Businesses have adapted remarkably rapidly to remote working, but while best

practices around data protection and recovery still apply, it is critical that those

businesses evolve their strategies just as our approaches to data and

access change. Quest Software's Adrian Moir comments: "We also need to move away

from the concept of focusing just on backup. In order to get this right, organisations

need to consider continuity - ensuring they have a platform in place that will not only

recover the data but will do so with minimal downtime."

Does a shift to the cloud mean that everyone will move to a Backup-as-a-Service

model? As is so often the case, the question gets an 'it depends' answer from most of

the experts in our article. Scality's Paul Speciale describes what to be wary of when

opting for the cloud approach: "As with all things in IT, we need to carefully consider

the overall cost of ownership for backup solutions, including the trade-off between

shifting capital expenditures to operational savings in the cloud. While it can be true

that cloud backups save money, it can also be true that they are more expensive than

on-premises solutions."

It is surely no coincidence that World Backup Day falls just one day before April Fools'

Day - there is a none-too-subtle suggestion that only an idiot isn't keeping a very

watchful eye on how his backup processes are functioning. As Zeki Turedi of

CrowdStrike concludes: "Milestones like World Backup Day act as reminders for IT

professionals to look again at their IT architecture and confirm it's still fit for purpose."

Amen to that.




Performance Begins Now

Introducing the Most Comprehensive Portfolio of X12 Server and Storage Systems

with 3rd Gen Intel® Xeon® Scalable processors

Learn More at www.supermicro.com

© Supermicro and Supermicro logo are trademarks of Super Micro Computer, Inc. in the U.S. and/or other countries.


STRATEGY: HARD DISK DRIVES

RACK TO THE FUTURE

THE AMOUNT OF DATA WORLDWIDE GROWS BY SEVERAL BILLION

TERABYTES EVERY YEAR AS MORE AND MORE MACHINES AND

DEVICES ARE GENERATING DATA - BUT WHERE WILL WE PUT IT ALL?

EVEN IN THIS AGE OF IOT, ARGUES RAINER KAESE OF TOSHIBA

ELECTRONICS EUROPE GMBH, HARD DRIVES REMAIN INDISPENSABLE

Data volumes have multiplied in

recent decades, but the real data

explosion is yet to come. Whereas,

in the past, data was mainly created by

people, such as photos, videos and

documents, with the advent of the IoT

age, machines, devices and sensors are

now becoming the biggest data

producers. There are already far more of

them than people and they generate data

much faster than us. A single autonomous

car, for example, creates several terabytes

per day. Then there is the particle

accelerator at CERN that generates a

petabyte per second, although 'only'

around 10 petabytes per month are

retained for later analysis.

In addition to autonomous driving and

research, video surveillance and industry

are the key contributors to this data flood.

The market research company IDC

assumes that the global data volume will

grow from 45 zettabytes last year to 175

zettabytes in 2025 (IDC "Data Age 2025"

Whitepaper, Update from May 2020).

This means that, within six years, three

times as much data will be generated as

existed in total in 2019, namely 130

zettabytes - that is 130 billion terabytes.
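For readers who want to sanity-check the arithmetic, here is a minimal Python sketch using only the figures quoted above (the IDC projections cited in the article, not independent measurements):

```python
# Quick check of the IDC projection quoted above (45 ZB in 2019 rising
# to 175 ZB in 2025). All figures come from the article itself.
zb_2019 = 45    # zettabytes, global data volume in 2019
zb_2025 = 175   # zettabytes, IDC "Data Age 2025" projection for 2025

new_data = zb_2025 - zb_2019
print(new_data)                      # 130 ZB of new data within six years
print(round(new_data / zb_2019, 1))  # ~2.9x the 2019 total
print(new_data * 10**9)              # in terabytes: 1 ZB = one billion TB
```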

Much of this data will be evaluated at

the point of creation, for example, in the

sensors feeding an autonomous vehicle or

production facility (known as edge

computing). Here, fast results and

reactions in real-time are essential, so the

time required for data transmission and

central analysis is unacceptable. However,

on-site storage space and computing

power are limited, so sooner or later,

most data ends up in a data centre. It can

then be post-processed and merged with

data from other sources, analysed further

and archived.

This poses enormous challenges for the

storage infrastructures of companies and

research institutions. They must be able to

absorb a constant influx of large amounts

of data and store it reliably. This is only

possible with scale-out architectures that

provide storage capacities of several

dozen petabytes and can be continuously

expanded. And they need reliable

suppliers of storage hardware who can

satisfy this continuous and growing

storage demand. After all, we cannot

afford for the data to end up flowing into

a void. The public cloud is often touted as

a suitable solution. Still, the reality is that

the bandwidth for the data volumes being

discussed is insufficient and the costs are

not economically viable.

For organisations that store IoT data,

storage becomes, in a sense, a

commodity. It is not consumed in the true

sense of the word but, like other

consumer goods, it is purchased regularly

and requires continuing investment. A

blueprint of how storage infrastructures

and storage procurement models can

look in the IoT age is provided by

research institutions such as CERN that

already process and store vast amounts of

data. The European research centre for

particle physics is continuously adding

new storage expansion units to its data

centre, each of which contains several

hundred hard drives of the most recent

generation. In total, their 100,000 hard

disks have attained a total storage

capacity of 350 petabytes.

THE PRICE DECIDES THE MEDIUM

The CERN example demonstrates that

there is no way around hard disks when it

comes to storing such enormous amounts

of data. HDDs remain the cheapest

medium that meets the dual requirements



STRATEGY: HARD DISK DRIVES

"Although the prices for SSDs are falling, they are doing so at a similar rate to HDDs.

Moreover, HDDs are very well suited to meet the performance requirements of high-capacity

storage environments. A single HDD may be inferior to a single SSD, but the

combination of several fast-spinning HDDs achieves very high IOPS values that can

reliably supply analytics applications with the data they require."

of storage space and easy access. By

comparison, tape is very inexpensive but,

as an offline medium, it is only

appropriate for archiving data.

Flash memory, on the other hand, is

currently still eight to ten times more

expensive per unit capacity than hard

disks. Although the prices for SSDs are

falling, they are doing so at a similar rate

to HDDs. Moreover, HDDs are very well

suited to meet the performance

requirements of high-capacity storage

environments. A single HDD may be

inferior to a single SSD, but the

combination of several fast-spinning

HDDs achieves very high IOPS values that

can reliably supply analytics applications

with the data they require.

In the end, price alone is the decisive

criterion - especially since the data

volumes to be stored in the IoT world can

only be compressed minimally to save

valuable storage space. If at all possible,

compression typically takes place within

the endpoint or at the edge to reduce the

amount of data to be transmitted. Thus, it

arrives in compressed form at the data

centre and must be stored without further

compression. Furthermore, deduplication

offers little potential savings because,

unlike on typical corporate file shares or

backups, there is hardly any identical

data.

Because of the flood of data in IoT and

the resultant large quantity of drives

required, the reliability of the hard disks

used is of great importance. This is less to

do with possible data losses, as these can

be handled using appropriate backup

mechanisms, and more to do with

maintenance of the hardware. With an

Annualised Failure Rate (AFR) of 0.7 per

cent, instead of the 0.35 per cent

achieved by CERN with Toshiba hard

disks, a storage solution using 100,000

hard disks would require 350 additional

drive replacements annually - that is, on

average, almost one extra drive replacement

per day.
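As a quick back-of-the-envelope check, the drive-replacement figures quoted above can be reproduced in a few lines of Python (the AFR values of 0.7 and 0.35 per cent and the 100,000-drive fleet size are the article's own numbers):

```python
# Back-of-the-envelope check of the drive-replacement arithmetic above.
drives = 100_000

def replacements_per_year(afr_percent):
    """Expected annual drive failures for a given Annualised Failure Rate."""
    return round(drives * afr_percent / 100)

extra = replacements_per_year(0.7) - replacements_per_year(0.35)
print(replacements_per_year(0.7))  # 700 failed drives a year at 0.7% AFR
print(extra)                       # 350 extra replacements a year vs. 0.35% AFR
print(round(extra / 365, 2))       # ~0.96, i.e. almost one extra swap per day
```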

HDDS STICK AROUND FOR YEARS

TO COME

In the coming years, little will change with

the main burden of IoT data storage

borne by hard disks. Flash production

capacities will simply remain too low for

SSDs to outstrip HDDs. To cover the

current storage demand with SSDs alone,

flash production would have to increase

significantly. Bearing in mind that the

construction costs for a single flash

fabrication facility run to several billion

Euros, this is an undertaking that is

challenging to finance. Moreover, it would

only result in higher flash output after

around two years - and that output would

only cover the demand of 2020, not that

of 2022.

The production of hard disks, on the

other hand, can be increased much more

easily because less cleanroom production

is needed than in semiconductor

production. Additionally, the development

of hard disks is progressing continuously,

and new technologies such as HAMR

(Heat-Assisted Magnetic Recording) and

MAMR (Microwave-Assisted Magnetic

Recording) are continuing to deliver

capacity increases. Experts assume that

HDD storage capacity will continue to

increase at a rate of around 2 terabytes

per year for a few more years at constant

cost. Thus, IDC predicts that by the end of

2025, more than 80 per cent of the

capacity required in the enterprise sector

for core and edge data centres will

continue to be obtained in the form of

HDDs and less than 20 per cent on SSDs

and other flash media.

More info: www.toshiba-storage.com



INTERVIEW: INFINIDAT

BUILDING MOMENTUM

PHIL BULLINGER WAS APPOINTED CEO AT INFINIDAT AT THE START

OF 2021, HAVING PREVIOUSLY SHONE AT STORAGE INDUSTRY

NAMES INCLUDING LSI, ORACLE, DELL EMC AND WD. STORAGE

MAGAZINE EDITOR DAVE TYLER CAUGHT UP WITH PHIL TO DISCUSS

THE CHALLENGES OF TAKING ON A NEW ROLE IN THE MIDDLE OF A

GLOBAL CRISIS

Dave Tyler: You have around 30 years of

experience across many of the biggest

names in the sector - which presumably

was a large part of why Infinidat wanted to

bring you in to helm the business. What was the

draw from your side?

Phil Bullinger: I was really attracted to the

Infinidat opportunity because of their great

reputation as well as the fantastic customer

experience around the product: in my years in

enterprise storage, whenever I came across an

Infinidat customer they were never shy to talk

about how much they loved the platform. I

know - again from my own experience - that

behind every great product is a great team, and

since I joined the company in January these first

few months have been incredibly positive -

everything I had expected and hoped to find

here has been reinforced.

There is a real focus on innovation, but

crucially the customer comes first in every

conversation we have: it's genuinely a part of

the DNA of the company. I know a lot of

businesses say that, but so much of how

Infinidat is organised pivots around the

customer experience, whether it's our L3 support

team, our technical advisers in the field, or how

the product itself operates. I've been delighted

to find the extent to which the company is

organised around the customer.

DT: Tell us a little more about the specifics of

your role and how you fit in to Infinidat's

strategy?

PB: I've been blessed on many occasions in my

career to lead companies through growth and

scale: stepping into a business at a specific

point in its lifecycle where it had good products

and satisfied customers, and the task then was

to efficiently scale the business to new levels of

success. That is exactly what we're focussed on

right now here at Infinidat: investing in

engineering, sales, marketing, and raising the

profile of the company in the markets that

we're targeting.

We have a lot of momentum as a business:

throughout 2020 we had sequential growth -

every quarter was larger than the one before.



INTERVIEW: INFINIDAT

Q4 saw very large growth, year over year, and

having just completed our first calendar quarter

of 2021 I'm really pleased to say it was a record

for the company. That gives us a lot of

confidence and impetus as we look ahead to

the rest of 2021.

DT: Given the global situation as we start to

come out of the worst of the pandemic, how

has the storage sector been affected?

PB: There are some points that might look

contradictory: from a macro-economic point of

view there are emerging tailwinds in the global

economy. There is no doubt that in the last year

Covid-19 had both a positive and a negative

impact on business. Large enterprise spending

was constrained last year, and many enterprises

were uncertain as to what the future held. But at

the same time there were fundamental drivers

of storage activity as well, because of the

pressing need for digital transformation.

Companies - of all sizes - were moving as fast

as they could to transform their businesses

around what, frankly, has become a digital

economy, based around digital user experience

and digital connectivity.

How companies use their data is now the

primary determinant not just of whether a

company will be successful, but of whether it will

even continue to exist going forward. What

we're starting to see right now is projects that

may have been on hold for a year coming back

to the forefront. As a result, we're finding our

own sales activity globally is seeing more

'lubrication' in the process as customers are

much more interested in investing in

transformational projects.

DT: What is Infinidat doing to address not just

the post-Covid world, but the way businesses

see data and storage more generally?

PB: This is the key question for us at Infinidat:

what are customers motivated now to do with

their data infrastructure? The landscape looks

very different today to ten years ago, with the

advent of the public cloud where data and

applications are accessed via the internet as

opposed to locally. As new models have

emerged, private cloud remains a crucial part

of almost every business infrastructure - so

almost every customer has a hybrid model.

There has been a lot of hype around seamless

movement of data and applications from on-premises

to off-premises (and back again!), but

the fact is that most enterprise customers tend

not to pursue that line. They are more likely to

think about which applications or data are

super-critical to the business, need the highest

levels of performance, availability and of course

security - and those will go into a private cloud.

For almost all of our customers, Infinidat forms

the centrepiece of their private cloud

infrastructure, because of our massive

scalability, great reliability, and enterprise data

features - all at lower costs than our competitors

and often offering better performance than all-flash

systems. It's easy to see why Infinidat then

becomes a compelling consolidation platform.

As the intersection of digital transformation,

private cloud architecture and Infinidat come

together, that's what is creating such momentum

around the business for us.

DT: What about partners? I know Infinidat

puts great store in its relationships with the

likes of VMware and AWS - how important

are those companies (and others) to your

continued success?

PB: Those two are really good examples of

solution ecosystem partners for Infinidat.

Customers don't generally buy just a platform -

they buy a solution to a problem. Those

solutions almost always involve an ecosystem

web of ISVs, applications, compute, network,

and storage - so we're very cognisant of the fact

that Infinidat has to be tightly integrated and

closely working with a whole range of different

partner offerings.

Infinidat has also always emphasised the very

highest levels of integration with VMware: even

from our very first release we had an

exceptional level of native integration. These

days some of the largest VMware deployments

at enterprise customers are on Infinidat simply

because we integrate so well and provide the

scale that those customers need. I believe we

are solving the long-standing issue between

application administrators and storage

administrators: when you can give the VMware

administrator the native ability to manage our

storage, snapshots, replication etc., that can

really help them to manage the overall

application infrastructure of that enterprise.

As well as VMware and AWS we also have

critical relationships with companies such as

Veeam, Veritas, Commvault, Red Hat, and tight

integrations with SAP and Oracle and other

enterprise ISVs.

DT: Who would you say Infinidat views as its

primary competition in today's market?

PB: At its simplest our competitive landscape is

other primary storage systems you would find

in an enterprise sovereign secure data centre -

but there's more to it than that. You can look at

the Gartner Magic Quadrant for primary

storage and see all the 'usual suspects'. We

compete against the best and most capable

primary storage products that branded system

OEMs and storage companies are bringing to

the market.

The public cloud has raised the tide for all the

boats; the world currently only stores a fraction

of all the data that it generates. This means that

companies are constantly striving to work out

how to collect, and reason over, and drive

insight from, more and more data. Public cloud

models have certainly accelerated that, and

therefore more data is being created in the

enterprise, and some of that data - usually the

most important parts - will almost always be on-premises,

or in a colocation, in a private cloud

architecture. The way I look at it is that the

cloud is a driver for business activity, and

business activity drives data, and data of course

drives storage, which is good for Infinidat and

our business opportunity.

Our customers trust our products and support

to protect petabytes of their most important data

- data that they 'hold most dearly', and which

needs the very highest levels of availability,

protection, and security.

More info: www.infinidat.com



CASE STUDY: TOTAL EXPLORATION & PRODUCTION

DELIVERING A TOTAL SOLUTION

TOTAL EXPLORATION & PRODUCTION HAS MIGRATED ITS OFFSHORE OIL AND GAS PLATFORMS TO A

HYPERCONVERGED INFRASTRUCTURE THAT DELIVERS SPACE EFFICIENCIES AS WELL AS IMPROVED

REPLICATION AND DISASTER RECOVERY CAPABILITIES

Nutanix has announced the

completion of a project for Total

Exploration & Production UK Limited

(TEPUK) to deploy its market-leading

hyperconverged infrastructure both in

Aberdeen and to its North Sea oil and gas

installations. TEPUK operates across the

entire oil and gas value chain, aiming to

provide hydrocarbons that are more

affordable, more reliable, cleaner and

accessible to as many people as possible.

Offshore oil and gas platforms are a

challenging environment in which to install,

manage and support IT of any description.

Due, not least, to logistical challenges and

strict safety requirements, but also because

physical space, Internet connectivity and

manpower are at a premium.

TEPUK chose to replace end-of-life legacy

servers and storage networks on its rigs with

Nutanix hyperconverged infrastructure.

Requiring less than half the equivalent rack

space of alternative solutions, the Nutanix

infrastructure isn't just space efficient. Other

benefits include on-demand scalability, self-healing

high availability, integrated Prism

Central remote management and hypervisor

neutral virtualisation capabilities.

Aberdeen-based Nutanix partner NorthPort

Technologies was also involved. With its

extensive experience in this field, it is able to

provide engineers fully trained and certified

to meet the critical safety requirements of the

offshore industry.

"Our engineers don't just have to be

trained in IT, they need to be physically fit

and pretty committed to do this kind of job,"

explains Russell Robertson, Consulting IT

Specialist at NorthPort Technologies. "Not

least because they have to travel to and

from the rigs in all weathers and be

prepared to undergo the same rigorous

training as anyone working offshore."

The principal role of the offshore equipment

is to host local infrastructure services, such as

Windows domain controllers, file and print

sharing and all the usual business

productivity apps. These are supported using

VMs alongside other specialist industrial

control and safety workloads.

Despite the many challenges plus

additional issues associated with the Covid-

19 lockdown, the NorthPort team has now

completed installation of the last set of 3-node

offshore clusters, bringing the total

installs to nine. In addition, the Aberdeen

reseller has configured a coordinating 15-node

Nutanix cluster on-shore with another

at a separate TEPUK site for replication and

disaster recovery (DR).

"With this announcement we are delighted

to welcome TEPUK to a growing list of oil

and gas companies using Nutanix

hyperconverged infrastructure to deliver

industrial strength IT services in some of the

most challenging environments around the

globe," commented Dom Poloniecki, Vice

President & General Manager, Sales,

Western Europe and Sub-Saharan Africa

region, Nutanix.

"More than that, it reflects the versatility of

the Nutanix platform which is equally at home

providing core IT services on an offshore oil

rig as it is supporting leading edge hybrid

cloud in a corporate data centre."

More info: www.nutanix.com



Copyright ©2021 QNAP UK Limited. All rights reserved.

uksales@qnap.com


RESEARCH: DATA PROTECTION

CAN YOU RELY ON YOUR BACKUP?

NEW RESEARCH SHOWS THAT 58% OF BACKUPS ARE FAILING, CREATING DATA PROTECTION CHALLENGES

AND LIMITING THE ABILITY OF ORGANISATIONS TO PURSUE DIGITAL TRANSFORMATION INITIATIVES

URGENT ACTION ON DATA

PROTECTION REQUIRED

Respondents stated that their data

protection capabilities are unable to keep

pace with the DX demands of their

organisation, posing a threat to business

continuity, potentially leading to severe

consequences for both business reputation

and performance. Despite the integral role

backup plays in modern data protection,

14% of all data is not backed up at all, and

58% of recoveries fail, leaving

organisations' data unprotected and

irretrievable in the event of an outage or

cyber-attack.

Data protection challenges are

undermining organisations' ability to

execute Digital Transformation

initiatives globally, according to the

recently-published Veeam Data Protection

Report 2021, which has found that 58% of

backups fail, leaving data unprotected.

Veeam's research found that against the

backdrop of COVID-19 and ensuing

economic uncertainty - which 40% of CXOs

cite as the biggest threat to their

organisation's DX in the next 12 months -

inadequate data protection and the

challenges to business continuity posed by

the pandemic are hindering companies'

initiatives to transform.

The Veeam Data Protection Report 2021

surveyed more than 3,000 IT decision

makers at global enterprises to understand

their approaches to data protection and

data management. The largest of its kind,

the study examines how organisations

expect to be prepared for the IT challenges

they face, including reacting to demand

changes and interruptions in service,

global influences (such as COVID-19), and

more aspirational goals of IT

modernisation and DX.

"Over the past 12 months, CXOs across the globe have faced a unique set of challenges around how to ensure data remains protected in a highly diverse, operational landscape," said Danny Allan, Chief Technology Officer and Senior Vice President of Product Strategy at Veeam. "In response to the pandemic, we have seen organisations accelerate DX initiatives by years and months in order to stay in business. However, the way data is managed and protected continues to undermine them. Businesses are being held back by legacy IT and outdated data protection capabilities, as well as the time and money invested in responding to the most urgent challenges posed by COVID-19. Until these inadequacies are addressed, genuine transformation will continue to evade organisations."

Furthermore, unexpected outages are common, with 95% of firms experiencing them in the last 12 months; and with 1 in 4 servers having at least one unexpected outage in the prior year, the impact of downtime and data loss is experienced all too frequently. Crucially, businesses are seeing this hit their bottom line, with more than half of CXOs saying this can lead to a loss of confidence towards their organisation from customers, employees and stakeholders.

"There are two main reasons for the lack of backup and restore success: backups are ending with errors or are overrunning the allocated backup window, and secondly, restorations are failing to deliver their required SLAs," said Allan. "Simply put, if a backup fails, the data remains unprotected, which is a huge concern for businesses given that the impacts of data loss and unplanned downtime span from customer backlash to reduced corporate share prices. Further compounding this challenge is the fact that the digital threat landscape is evolving at an exponential rate. The result is an unquestionable gap between the data protection capabilities of businesses versus their DX needs. It is urgent that this shortfall is addressed given the pressure on organisations to accelerate their use of cloud-based technologies to serve customers in the digital economy."

12 | STORAGE MAGAZINE | May/June 2021 | @STMagAndAwards | www.storagemagazine.co.uk

RESEARCH: DATA PROTECTION

"In response to the pandemic, we have seen organisations accelerate DX initiatives by years and months in order to stay in business. However, the way data is managed and protected continues to undermine them. Businesses are being held back by legacy IT and outdated data protection capabilities, as well as the time and money invested in responding to the most urgent challenges posed by COVID-19."

I.T. STRATEGIES IMPACTED BY COVID-19

CXOs are aware of the need to adopt a cloud-first approach and change the way IT is delivered in response to the digital acceleration brought about by COVID-19. Many have already done so, with 91% increasing their cloud services usage in the first months of the pandemic, and the majority will continue to do so, with 60% planning to add more cloud services to their IT delivery strategy. However, while businesses recognise the need to accelerate their DX journeys over the next 12 months, 40% acknowledge that economic uncertainty poses a threat to their DX initiatives.

UK-specific highlights from the Veeam research included:

Hybrid-IT across physical, virtual and cloud: Over the next two years, most organisations expect to gradually, but continually, reduce their physical servers and maintain and fortify their virtualised infrastructure, while embracing 'cloud-first' strategies. This will result in half of production workloads being cloud-hosted by 2023, forcing most firms to re-imagine their data protection strategy for new production landscapes.

Rapid growth in cloud-based backup: Backup is shifting from on-premises to cloud-based solutions that are managed by a service provider, with the trajectory reported from 33% in 2020 to 50% anticipated by 2023.

Importance of reliability: 'To improve reliability' was the most important driver for a UK organisation to change its primary backup solution, stated by 33% of respondents.

Improving ROI: 33% stated that the most important driver for change was improving the economics of their solution, including improving ROI and reducing TCO.

Availability gap: 78% of companies have an 'availability gap' between how fast they can recover applications and how fast they need to recover them.

Reality gap: 77% have a 'protection gap' between how frequently data is backed up versus how much data they can afford to lose after an outage.

Modern data protection: Over half (51%) of organisations now use a third-party backup service for Microsoft Office 365 data, and 43% plan to adopt Disaster Recovery as a Service (DRaaS) by 2023.

TRANSFORMATION STARTS WITH DIGITAL RESILIENCY

As organisations increasingly adopt modern IT services at rapid pace, inadequate data protection capabilities and resources will lead to DX initiatives faltering, even failing. CXOs already feel the impact, with 30% admitting that their DX initiatives have slowed or halted in the past 12 months. The impediments to transformation are multifaceted, including IT teams being too focused on maintaining operations during the pandemic (53%), a dependency on legacy IT systems (51%), and a lack of IT staff skills to implement new technology (49%). In the next 12 months, IT leaders will look to get their DX journeys back on track by finding immediate solutions to their critical data protection needs, with almost a third looking to move data protection to the cloud.

"One of the major shifts we have seen over the past 12 months is undoubtedly an increased digital divide between those who had a plan for Digital Transformation and those who were less prepared, with the former accelerating their ability to execute and the latter slowing down," concluded Allan. "Step one to digitally transforming is being digitally resilient. Across the board organisations are urgently looking to modernise their data protection through cloud adoption. By 2023, 77% of businesses globally will be using cloud-first backup, increasing the reliability of backups, shifting cost management and freeing up IT resources to focus on DX projects that allow the organisation to excel in the digital economy."

More info: www.veeam.com/wp-executivebrief-2021.html



CASE STUDY: SAVE THE CHILDREN SPAIN

CHILD'S PLAY

NON-PROFIT ORGANISATION SAVE THE CHILDREN SPAIN HAS REDUCED THE COSTS OF ITS BACKUP AND SIMPLIFIED DISASTER RECOVERY THANKS TO NAKIVO BACKUP & REPLICATION

Save the Children Spain is a member of Save the Children International, a non-profit organisation aimed at making the world a better place for children through better education, healthcare, and economic opportunities. Today, with 25,000 dedicated staff across 120 countries, Save the Children responds to major emergencies, delivers innovative development programs, and ensures children's voices are heard through campaigning to build a better future for and with children. Save the Children Spain has 10 locations across Spain and two main data centres in different locations.

The organisation has a hybrid infrastructure with a mix of private cloud services and on-premises servers. Part of the infrastructure is virtualised, with nearly 40 virtual machines hosting file servers, databases, reporting services, and other applications. The main objective of the IT department is to ensure the consistency and reliability of all data. As Save the Children is a non-profit organisation, all money spent on projects must be properly justified to donors, and all invoices, transfer orders, and project reports must be kept safe. That is why data protection is imperative for the organisation to continue to be successful in the future. The organisation's goal is to be able to back up all data and services every day and restore those services easily and quickly in case of a disaster.

Until recently, Save the Children Spain was using a different backup product to protect its data. However, that software was too expensive for a non-profit. The organisation also wanted to take advantage of its NAS servers, but the previous software did not support NAS installation. Installation on NAS servers was essential for reducing the overall complexity of the backup strategy and freeing up a server from the environment. When Save the Children realised that the costs of updating its licensing with the previous vendor were excessive, and that its budget only allowed it to cover a particular number of hosts, it was time to look for an alternative solution. The goal was to save costs, simplify disaster recovery, and achieve simplicity.

"Simplifying the disaster recovery strategy was our main goal. Today, we can recover data on the same day in a minimum amount of time. We replicate VMs across our data centres. This allows us to have source VMs and VM replicas. Thus, if we lose one data centre, we can always power on VMs in another and the organisation will be operational within a few hours."

FUNCTIONAL YET AFFORDABLE

NAKIVO Backup & Replication was Save the Children's backup solution of choice for several reasons. Not only is NAKIVO Backup & Replication affordable, but the total invoiced price allowed the organisation to protect all the virtual hosts with the same functionality that was provided by the previous vendor. Overall, the organisation saved money by switching to NAKIVO Backup & Replication, and can now use that money to finance other projects. Since NAKIVO Backup & Replication is compatible with various NAS servers, Save the Children installed the software on its Synology NAS. This allowed the organisation to free up resources and reduce backup strategy complexity.

A backup appliance based on Synology NAS combines backup software, hardware, backup storage, and deduplication. With this installation, there are multiple advantages, including higher performance, smaller backups, faster recovery, and storage space savings. "The previous solution was installed on Windows, but we always wanted to take advantage of already available hardware. The whole installation process was so simple. We just had to add a package manager, find it, and click install. The whole process took 15 minutes at most. With the Synology NAS installation, we freed up production resources that were previously spent on backup," explained Alejandro Canet, IT Project Coordinator at Save the Children Spain. "Today, a full initial backup takes the organisation roughly 24 hours, while a daily incremental backup takes around 3 hours. Storage space is always expensive, so global data deduplication reduced space and saved money on storage systems for Save the Children. Moreover, instant granular recovery is a key functionality that saves a lot of time when recovering files or objects from shared resources."

The IT department's objective was to create a simple disaster recovery strategy with the NAKIVO Backup & Replication functionality, as Alejandro went on: "Simplifying the disaster recovery strategy was our main goal. Today, we can recover data on the same day in a minimum amount of time. We replicate VMs across our data centres. This allows us to have source VMs and VM replicas. Thus, if we lose one data centre, we can always power on VMs in another and the organisation will be operational within a few hours. Ensuring near-instant recovery and uninterrupted business operations with NAKIVO Backup & Replication is important for us. NAKIVO Backup & Replication also allows us to keep recovery points for replicas and backups that we can rotate on a daily, weekly, monthly, or yearly basis."
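The daily/weekly/monthly rotation of recovery points Alejandro describes is commonly known as grandfather-father-son retention. The sketch below is a purely illustrative model of such a scheme - the function name, thresholds and date handling are hypothetical, not taken from any vendor's product.

```python
from datetime import date, timedelta

def gfs_keep(points, daily=7, weekly=4, monthly=12):
    """Return the subset of recovery-point dates to retain under a simple
    grandfather-father-son rotation: the most recent `daily` days, the
    newest point in each of the last `weekly` ISO weeks, and the newest
    point in each of the last `monthly` calendar months."""
    points = sorted(points, reverse=True)  # newest first
    keep = set(points[:daily])             # "son": recent daily points
    seen_weeks = []
    for p in points:                       # "father": one point per week
        wk = p.isocalendar()[:2]           # (year, ISO week number)
        if wk not in seen_weeks:
            seen_weeks.append(wk)
            keep.add(p)
        if len(seen_weeks) >= weekly:
            break
    seen_months = []
    for p in points:                       # "grandfather": one per month
        mo = (p.year, p.month)
        if mo not in seen_months:
            seen_months.append(mo)
            keep.add(p)
        if len(seen_months) >= monthly:
            break
    return keep

# 60 consecutive daily recovery points ending on an arbitrary date
today = date(2021, 5, 31)
pts = [today - timedelta(days=i) for i in range(60)]
kept = gfs_keep(pts)
print(len(kept), "of", len(pts), "points retained")
```

The point of the scheme is that retention density decreases with age: every recent day is restorable, while older history survives only at weekly and then monthly granularity, keeping storage consumption bounded.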

SIMPLER, FASTER, BETTER

With NAKIVO Backup & Replication, Save the Children simplified its disaster recovery strategy with VM replication and instant recovery. All data and applications are backed up daily, while VMs are replicated to a different data centre to ensure near-instant recovery in case of a disaster. With a backup appliance based on Synology NAS, Save the Children freed up production resources and achieved business continuity. Instant granular recovery is also as simple as opening the web interface and clicking a couple of buttons, with recovery done in minutes.

"The licensing costs that were offered by NAKIVO turned out to be cheaper than the yearly maintenance costs of our previous product," Alejandro concluded. "NAKIVO Backup & Replication may be almost 10 times cheaper. Regarding installation and setup time, we just spent 15 minutes and everything was working. With the previous product, installation could take over 2 hours, plus the software was more difficult to use. Overall, we were able to simplify disaster recovery, save costs, and utilise existing NAS servers with NAKIVO Backup & Replication."

More info: www.nakivo.com



MANAGEMENT: STORAGE TIERING

HAVING YOUR CAKE AND EATING IT

THE IDEAL 'STORAGE CAKE' HAS THREE EQUALLY IMPORTANT TIERS, ARGUES CRAIG WILSON, TECHNICAL ARCHITECT AT OCF

As hard disk drives have grown in capacity, we are presented with an interesting problem. Only a few years ago a petabyte of capacity would require an entire rack of equipment. Indeed, my first project with OCF involved a storage solution that clocked in at 1PB per 45U rack, but with single drive capacity soon to hit 20TB we will be able to house a petabyte of capacity in a single storage shelf. This incredible achievement presents a new problem - hard drive performance is not improving in lockstep with capacity.

In fact, per-TB performance is dropping dramatically, so hard drive storage is effectively getting slower. Ten years ago it took 500 hard drives to provide a single petabyte of storage; now it takes only 50 drives for the same capacity, yet there has simply not been a 10x increase in hard drive performance in that time. Seagate has set out a roadmap to 120TB HDDs by 2030, and while it has detailed some plans to increase performance with its Mach.2 dual-actuator technology, per-TB performance will still decrease as capacities increase beyond 30TB.
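The arithmetic behind that per-TB decline can be sketched in a few lines. The drive counts come from the article; the per-drive random IOPS figure is an assumed round number (roughly what a 7,200rpm nearline disk delivers), used only to make the ratio visible.

```python
def iops_per_pb(drive_tb, iops_per_drive=100):
    """Aggregate random IOPS available per raw petabyte, given a drive
    capacity in TB. iops_per_drive ~100 is an illustrative assumption
    for a 7,200rpm nearline HDD, which has barely changed in a decade."""
    drives = 1000 / drive_tb          # drives needed for 1PB raw
    return drives * iops_per_drive

then = iops_per_pb(2)                 # ~2TB drives: 500 spindles per PB
now = iops_per_pb(20)                 # 20TB drives: 50 spindles per PB
print(f"{then:,.0f} IOPS/PB then vs {now:,.0f} IOPS/PB now")
```

Because per-drive performance is flat while capacity grew tenfold, the aggregate random IOPS behind each petabyte falls by the same factor of ten - which is exactly why fewer, bigger disks feel slower.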

Today you must consider whether the capacity you require will provide the expected performance, or whether smaller capacity drives would be more appropriate - which not only increases the rack space required but also the power consumption, and in turn the TCO. But if hard drives are no longer the go-to answer for large-scale storage, what is?

"FLASH, A-AH, SAVIOUR OF THE UNIVERSE!"

We are all aware of the benefits that flash storage brings to the party: you only need to read the marketing material from any of the flash vendors, who clearly believe that flash is the future. The improvement in throughput and IOPS performance is huge compared to hard drives, and unlike hard drives it shows no signs of stopping, with PCIe Gen4 NVMe drives now on the market and hitting an incredible 7GBps sequential read performance per drive.



"A flash tier is always going to provide the most performance - however not many projects need to utilise all storage data. Data is important and most organisations need to keep data for longer than it is being actively used. Have you considered how much of your data is used on a daily or weekly basis? This is where tiering comes in. If you can identify a small percentage of your data that needs to be accessed on a regular basis then you can start to build a solution that takes the benefits from each storage technology and truly maximise the ROI."

Capacities are increasing too, with 15.36TB drives available from Lenovo's DE series storage arrays and IBM producing 38.4TB FlashCore modules for its FlashSystem storage arrays, both available today. With the increase in capacities we can now get higher capacity density with flash storage than we can with traditional hard drives.

The downside to this, of course, is price: per-TB pricing on flash storage continues to vastly exceed hard drive pricing, and an all-flash storage solution can be a huge investment when everyone is under ever-increasing pressure to maximise the return on investment for any large-scale solution.

There is, of course, a third player in this game: tape. Like hard drives, tape capacity has continued to grow: IBM is due to launch its LTO9 Ultrium Technology in the first half of 2021 with 18TB native capacity or 45TB compressed capacity per cartridge. Unlike hard drives, performance has continued to increase as well. For a typical upgrade path, LTO9 offers a 33% increase in uncompressed performance over LTO7. Tape storage also has some unique advantages: the ability to air-gap data to protect against modern ransomware attacks, and the ability to provide huge capacities with minimal power usage, are often overlooked when comparisons are made with traditional hard drive storage.

SO WHICH IS BEST FOR YOUR PROJECT?

How do you maximise ROI? A flash tier is always going to provide the most performance - however not many projects need to utilise all storage data. Data is important and most organisations need to keep data for longer than it is being actively used. Have you considered how much of your data is used on a daily or weekly basis? This is where tiering comes in.

If you can identify a small percentage of your data that needs to be accessed on a regular basis then you can start to build a solution that takes the benefits from each storage technology and truly maximises the ROI. A solution with, for example, 20 per cent flash storage would present hot data that is used regularly with maximum performance to your compute environment, while warm data could be stored on a cheaper hard drive-based storage array. Data that has not been accessed in the last six months could then be offloaded onto a tape tier using the same physical infrastructure as the backup process, which would reduce overall power consumption.
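The policy just described - hot data on flash, warm data on hard drives, anything untouched for six months on tape - boils down to a classification rule on last-access age. The sketch below is illustrative only: the thresholds and names are hypothetical, and in real deployments this decision is made by a filesystem policy engine rather than hand-written code.

```python
import time

DAY = 86400  # seconds per day

def choose_tier(last_access_epoch, now=None):
    """Classify a file by last-access age: recently-touched data stays
    on flash, warm data lives on HDD, and anything untouched for six
    months is a candidate for the tape tier. Thresholds are illustrative."""
    now = now or time.time()
    age_days = (now - last_access_epoch) / DAY
    if age_days <= 7:
        return "flash"
    if age_days <= 180:
        return "hdd"
    return "tape"

now = 1_700_000_000  # fixed timestamp so the example is deterministic
print(choose_tier(now - 2 * DAY, now))    # recently used -> flash
print(choose_tier(now - 30 * DAY, now))   # warm -> hdd
print(choose_tier(now - 200 * DAY, now))  # cold -> tape
```

A scheduled scan applying such a rule across the namespace, plus a mover that migrates files between pools, is essentially what the tiering products discussed in this article automate at scale.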

The most popular parallel filesystems, such as IBM Spectrum Scale, BeeGFS and Lustre, have support for tiering either directly or via integration with the RobinHood Policy Engine. There is also additional software, such as IBM's Spectrum Protect and Spectrum Archive, Atempo's Miria or Starfish, that can augment these features.

Caching is also an option. IBM's Spectrum Scale especially offers great flexibility in this area, with features such as local read-only cache (LROC) and highly available write cache (HAWC). LROC uses a local SSD on the node as an extension to the buffer pool, which works best for small random reads where latency is a primary concern, while HAWC uses a local SSD to reduce the response time for small write operations - in turn greatly reducing the write latency experienced by the client.
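The idea behind a read cache like LROC - serve repeated small reads from fast local media instead of the slow disk tier - can be shown with a minimal read-through cache. This is a toy model under stated assumptions: the class, the dict standing in for the backing store, and the tiny capacity are all hypothetical, not Spectrum Scale's actual mechanism.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read-through cache in the spirit of an LROC tier: once a
    block has been fetched from the (slow) backing store, repeat reads
    are served locally. Eviction is least-recently-used."""
    def __init__(self, backing, capacity=4):
        self.backing = backing          # stands in for the HDD tier
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)         # mark as recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]               # slow path to disk
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the LRU entry
        return value

store = {f"block{i}": i for i in range(10)}
c = ReadCache(store)
for k in ["block1", "block2", "block1", "block1"]:
    c.read(k)
print(c.hits, c.misses)  # repeated reads of block1 hit the cache
```

The win is exactly the workload the article names: small random reads with a skewed access pattern, where a modest SSD absorbs most of the latency-sensitive traffic.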

Deploying a single storage solution will always be a strong proposition from a management-overhead perspective. However, I don't see hard drive storage ever being beaten by flash storage on a pure capacity-to-cost basis any time soon. By deploying tiering, caching or both, it is possible to improve storage performance and thus maximise ROI.

More info: www.ocf.co.uk



CASE STUDY: IMPERIAL WAR MUSEUMS

PRESERVING PRICELESS MEMORIES OF GLOBAL CONFLICTS

LIFECYCLE MANAGEMENT SOFTWARE SCANS AND MOVES INACTIVE DATA FROM PRIMARY STORAGE TO 'PERPETUAL TIER' FOR LONG-TERM ACCESS AND ARCHIVAL

Spectra Logic has announced that Imperial War Museums (IWM) has deployed Spectra StorCycle Storage Lifecycle Management Software to enhance the Museums' existing storage infrastructure, which supports its audiovisual and exhibitions departments in preserving invaluable data, including thousands of films, videotapes, audio recordings and photographs that would otherwise disintegrate and be lost forever were they not digitised. StorCycle software is being used to manage a large amount of unstructured data that resides on expensive SAN and NAS storage outside of IWM's existing DAMS (digital asset management system) platform.

EVER-GROWING COLLECTION

Imperial War Museums tells the story of people who have lived, fought and died in conflicts involving Britain and the Commonwealth since the First World War. Its five branches attract over 2.5 million visitors each year and house a collection of over 10 million objects. The five sites across the UK - IWM London, IWM North, IWM Duxford, Churchill War Rooms and HMS Belfast - are home to approximately 750,000 digital assets, representing a total of 1.5PB of data as uncompressed files.

The digital asset collection includes 5,000 film and video scan masters, 100,000 audio masters dating back to the 1930s, nearly 500,000 image masters, and thousands of lower resolution versions (access versions for commercial and web use) of the above assets. This volume is constantly growing, with new scans in the Museums' film collection generating an additional 10TB of data per month, and its videotape scanning project expected to create more than 900TB of data over the next four years.

A Spectra customer since 2009, IWM has implemented a large-scale data archiving solution to reliably preserve and manage its substantial digital archive pertaining to UK and Commonwealth wartime history. IWM's current archive infrastructure consists of two Spectra T950 Tape Libraries, one with LTO-7 tape media and drives and one with IBM TS1150 tape media and drives, along with a Spectra BlackPearl Converged Storage System, BlackPearl Object Storage Disk and a BlackPearl NAS solution.

MANAGING THE LIFECYCLE

"When we set out on our search to find a storage solution capable of preserving Imperial War Museums' substantial digital archive, there were specific criteria on which we were not willing to compromise," explained Ian Crawford, chief information officer, IWM. "Spectra met all of our requirements and then some, and now continues to deliver with StorCycle's storage lifecycle management capabilities."

IWM is on track to realise significant long-term cost savings by deploying Spectra StorCycle Storage Lifecycle Management Software to optimise its primary storage capacity through the offloading of inactive data to the Museums' archive infrastructure. Spectra StorCycle identifies and moves inactive data to a 'Perpetual Tier' of storage consisting of object storage disk and tape. StorCycle scans the IWM departments' primary storage for media file types older than two years and larger than 1GB, and automatically moves them to the IWM archive, maximising capacity on the primary storage system.

Rob Tyler, IT infrastructure manager (DAMS) at IWM, said: "Spectra's StorCycle storage lifecycle management software has empowered us to move our data into reliable, long-term storage, offloading our primary storage and preserving media files and unstructured data - all with the push of a button."

"When we set out on our search to find a storage solution capable of preserving Imperial War Museums' substantial digital archive, there were specific criteria on which we were not willing to compromise. Spectra met all of our requirements and then some, and now continues to deliver with StorCycle's storage lifecycle management capabilities."

Craig Bungay, vice president of EMEA sales at Spectra Logic, commented on the project: "IWM preserves invaluable historical data and it is vital that their data storage infrastructure be failsafe and reliable, in addition to providing flexibility and affordability. This is achieved by storing multiple copies of the Museums' data on different media, and by automatically offloading inactive data from expensive primary storage to its archive solution using StorCycle."

More info: www.SpectraLogic.com



OPINION: DISASTER RECOVERY AS A SERVICE

A DRAAS PRIMER FOR SMBS

AS MORE AND MORE SMBS ARE ATTRACTED TO DISASTER RECOVERY AS A SERVICE, FLORIAN MALECKI OF STORAGECRAFT OUTLINES THE KEY COMPONENTS AND REQUIREMENTS OF A DRAAS SOLUTION - WHAT IS IMPORTANT, AND WHY

Regardless of size, every business gets hurt when downtime strikes, and small and medium sized businesses (SMBs) take a big hit when their systems go down. An ITIC study found that nearly half of SMBs estimate that a single hour of downtime costs as much as US$100,000 in lost revenue, end-user productivity, and IT support. That's why more and more SMBs are adopting Disaster Recovery as a Service (DRaaS): one study shows 34 percent of companies plan to migrate to DRaaS in 2021.

Cloud-based backup and disaster recovery solutions are often at the top of the list when considering DRaaS solutions. Such an approach allows businesses to access data anywhere, any time, with certainty, because the best disaster recovery clouds are highly distributed and fault-tolerant, delivering 99.999+ percent uptime. This article is intended to help SMBs understand the key elements of a DRaaS solution, beginning with an explanation of the basics of DRaaS.

DISTRIBUTED BACKUPS MAXIMISE PROTECTION

Data replication is the process of updating copies of data in multiple places at the same time. Replication serves a single purpose: it makes sure data is available to users when they need it.

Data replication synchronises a data source - say, primary storage - with backup target databases, so when changes are made to the source data, they are quickly updated in the backups. The target database could include the same data as the source database (full-database replication) or a subset of the source database. For backup and disaster recovery, it makes sense to perform full-database replication. At the same time, companies can also reduce their source database workloads for analysis and reporting functions by replicating subsets of source data, say by business department or country, to backup targets.

MANAGING BACKUP IMAGES

As companies continue to add more backups over time, they'll need to manage these accumulated images and the storage space the images consume. Image management solutions with a managed-folder structure allow companies to spend less time configuring settings on backups.

But that's just the start. These solutions can also provide image verification, so that backup image files are ready and available for fast, reliable recovery, and advanced image verification that delivers regular visual confirmation that backups are working correctly.

To reduce restoration time and the risk of backup file corruption, and also to reduce the storage space required, image management solutions can automatically consolidate continuous incremental backup image files. Companies can also balance storage space and file recovery by setting policies that suit their needs, and easily watch over backup jobs in the user interface, with alerts sent when any issues arise.
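Consolidating an incremental chain can be pictured as replaying each increment over the base image to produce one synthetic full, after which the intermediate files can be discarded. The block-map representation below is a deliberately simplified, hypothetical model of what such tools do internally.

```python
def consolidate(base, incrementals):
    """Collapse a base backup image plus a chain of incrementals (each a
    dict of block -> data, with None marking a deleted block) into a
    single synthetic full image. Inputs are left untouched."""
    image = dict(base)
    for inc in incrementals:
        for block, data in inc.items():
            if data is None:
                image.pop(block, None)   # block was deleted in this increment
            else:
                image[block] = data      # block was added or overwritten
    return image

base = {"a": 1, "b": 2, "c": 3}
chain = [{"b": 20}, {"d": 4, "a": None}]
full = consolidate(base, chain)
print(full)  # {'b': 20, 'c': 3, 'd': 4}
```

A restore from the synthetic full then needs a single pass instead of replaying the whole chain, which is exactly the shorter restoration time and reduced corruption risk the article describes.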

Image management solutions also allow the management of system resources to enable throttling and concurrent processing. Backups are replicated onto backup targets - local, on-network, and cloud - so companies are always prepared for disaster. The solution also allows pre-staging of the recovery of a server before disaster strikes, to reduce downtime.

CORE DRIVER FOR BUSINESS CONTINUITY

Failover is a backup operational mode that switches to a standby database, server, or network if the primary system fails or is offline for maintenance. Failover ensures business continuity by seamlessly redirecting requests from the failed or downed mission-critical system to the backup system. The backup systems should mimic the primary operating system environment and be on another device or in the cloud.



"Companies also need to ensure that data stored in their disaster recovery site is always secure. If a disaster strikes, it may be impossible to recover quickly. Suppose a failover does occur and the company's operations are now running from a disaster recovery cloud. In that case, they need to protect the data in that virtual environment by replicating it to their backup targets immediately. That's why network bandwidth is the next concern."

With failover capabilities in place, important servers, back-end databases, and networks can count on continuous availability and near-certain reliability. Say the primary on-site server fails: failover takes over hosting requirements with a single click. Failover also lets companies run maintenance projects without human oversight during scheduled software updates, ensuring seamless protection against cybersecurity risks.
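At its core, automated failover is a health-check loop that redirects to the standby once the primary stops answering. The sketch below is a toy model under stated assumptions - the function names, the retry policy and the simulated health check are hypothetical, and production DRaaS platforms drive this logic from the service side.

```python
import time

def monitor(check_primary, activate_standby, interval=5, retries=3):
    """Toy failover loop: poll the primary's health check and redirect to
    the standby after `retries` consecutive failures, tolerating brief
    transient blips in between."""
    failures = 0
    while True:
        if check_primary():
            failures = 0                 # healthy poll resets the count
        else:
            failures += 1
            if failures >= retries:
                activate_standby()       # redirect traffic to the standby
                return "failed over"
        time.sleep(interval)

# Simulated run: the primary goes down permanently on the third poll.
state = {"polls": 0, "standby": False}
def check_primary():
    state["polls"] += 1
    return state["polls"] < 3

result = monitor(check_primary, lambda: state.update(standby=True),
                 interval=0, retries=3)
print(result, state["standby"])
```

Requiring several consecutive failures before acting is the design choice worth noting: it trades a few seconds of detection time for protection against flapping between sites on a single dropped probe.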

WHY FAILOVER MATTERS

While failover integration may seem costly, it's crucial to bear in mind the incredibly high cost of downtime. Think of failover as a critical safety and security insurance policy, and an essential part of any disaster recovery plan. From a systems engineering standpoint, the focus should be on minimising data transfers to reduce bottlenecks while ensuring high-quality synchronisation between primary and backup systems.

GETTING BACK TO NORMAL

Failback is the follow-on to failover. While failover is switching to a backup source, failback is the process of restoring data to the original resource from a backup. Once the cause of the failover is remedied, the business can resume normal operations. Failback also involves identifying any changes made while the disaster recovery site or virtual machine was running in place of the primary site or virtual machine.

It's crucial that the disaster recovery solution can run the company's workloads and sustain operations for as long as necessary. That makes failback testing critical as part of the disaster recovery plan. It's essential to monitor any failback tests closely and document any implementation gaps so they can be closed. Regular failback testing will save critical time when the company needs to get its house back in order.

Companies need to consider several important areas regarding the failback section of their disaster recovery plan. Connectivity is first on the list: if there isn't a reliable connection or pathway between the primary and backup data, failback likely won't even be possible. A secure connection ensures that a failback can be performed without interruption, and companies can be sure that their source data and backup target data are always synchronised, so the potential for data loss is minimised.

Companies also need to ensure that data stored in their disaster recovery site is always secure. If a disaster strikes, it may be impossible to recover quickly. Suppose a failover does occur and the company's operations are now running from a disaster recovery cloud. In that case, they need to protect the data in that virtual environment by replicating it to their backup targets immediately. That's why network bandwidth is the next concern: without sufficient bandwidth, bottlenecks and delays will interfere with synchronisation and hamper recovery.

Testing is the most critical element for ensuring failback is successful when businesses need it. That means testing all systems and networks to ensure they are capable of resuming operations after failback. It's advisable to use an alternate location as the test environment, and to use the knowledge obtained from the test to optimise the failback strategy.

FINAL THOUGHTS

Whether it's a natural disaster like a

hurricane or a flood, a regional power

outage, or even ransomware, there is little

doubt about the business case for DRaaS.

With DRaaS ensuring business continuity,

no matter what happens, recovery from a

site-wide disaster is fast and easy to

perform from a disaster recovery cloud.

Add up the cost to a business in dollars

and cents: lost data, lost productivity and

reputational damage. Just an hour of

downtime could pay for a year - or many

years, for that matter - of DRaaS.

More info: www.storagecraft.com

www.storagemagazine.co.uk

@STMagAndAwards May/June 2021

STORAGE

MAGAZINE

21


CASE STUDY: TAMPERE VOCATIONAL COLLEGE TREDU

ENSURING THE SAFETY OF DATA IN THE CLOUD

TAMPERE VOCATIONAL COLLEGE TREDU IN FINLAND HAS IMPLEMENTED A HYBRID SOLUTION THAT

OFFERS A UNIFIED PORTAL FOR STORAGE AND BACKUP OF MULTIPLE SERVICES, ELIMINATING THE NEED

TO JUMP FROM ONE APPLICATION TO ANOTHER

With Microsoft 365's 30-day retention policy for deleted items, Tampere's

technical team needed to find a solution

to house, and always be able to retrieve,

this vital data. Meanwhile another factor

that needed to be addressed was the

operational expense for any solution

being deployed.

Tampere Vocational College Tredu is

a college based in Tampere, the

second largest city in Finland. The

college offers vocational programmes in

Finnish secondary education in various

fields including Technology, Natural

Sciences, Communications and Tourism.

Tampere's student population increased

significantly in 2013 when Pirkanmaa

Educational Consortium and the existing

Tampere College merged, and today

Tampere hosts approximately 18,000

students and 1,000 staff members across

its curriculum and campus.

A SERIES OF CHALLENGES

As an educational institution, Tampere

has a legal obligation to retain data

generated by both students and staff.

With an increasing reliance on services

such as Microsoft 365, this means more

data is being generated on the cloud

than ever. Coupled with the challenges

the global pandemic has brought, remote

working and offsite learning means

services such as these are leveraged even

more keenly and have become a

significant part of the educational

landscape.

Aside from the accounts of the 18,000

students and 1,000 faculty members,

Tampere college also need to protect

data in accounts of former students and

academic projects. This means they have

to contend with over 34,000 Drive,

Contact and Calendar accounts and in

excess of 68,000 mailboxes and over

11,000 SharePoints. Added to the

pressure of this, the school has a massive

domain system in which new accounts are

created frequently and old accounts are

closed, which in turn creates

management complexities.

Arttu Miettunen, Systems Analyst at

Tampere, began his search and

benchmarked various solutions from

major backup providers. Eventually it was

clear Synology could not only resolve the

issues of data storage, but also offered

backup for Microsoft services with no

license costs. Having the storage

hardware and backup as an integrated

solution brings further reassurance to the

team managing this task.

MEETING ALL REQUIREMENTS

An SA3600 unit was deployed with 12 x

12TB Enterprise HDDs, along with the

added benefit of 2 x SNV3500 400G,

Synology's M.2 NVMe SSDs to create a

cache. The current backup occupies 15TB of storage; however, with Tampere's data certain to grow, the team was

acutely aware that the solution also had

to offer scalability. This was an obstacle

that the SA3600 can readily handle, with

12 existing bays in the base unit and the

facility to scale up to 180 drives with use

of Synology expansion units. In addition,

Active Backup for Microsoft 365 comes

with de-duplication in place, which cut the backup by 7 terabytes on the first run - a 46% saving on storage media.

Arttu and his team knew they wanted one unified portal for the storage and backup of multiple services, eliminating the need to jump from one application to another.




"It could have been difficult to predict how performance might have been affected

as the number of users and amount of data increased, but this was resolved by

deploying an SSD cache with the Synology NVMe SSDs in place. This handled

substantial caching workloads in this multi-user environment by making the data

available on the lower latency NVMe SSDs instead of having to retrieve it from the

slower hard disk drives. By deploying a shrewd hybrid storage system with HDDs

and SSDs, Tampere enjoy maximum value from their disk array."


MANAGEABLE & FUTURE-PROOF

By utilising Synology's Active Backup for

Microsoft 365, Tampere benefit from:

- Comprehensive protection and backup for Teams, SharePoint Online, OneDrive and Exchange Online
- Full integration with Azure AD
- Easy and centralised management portal with advanced permissions controls
- Cost saving with license-free software and data deduplication
- Future-proofing with scalable storage via expansion

When new students and faculty

join the school's Azure AD, accounts

must be detected and protected

automatically. The IT team wanted to

give restoration privileges to some users

but not all, and had to be able to tweak

the setting easily. After a trial with

Synology, Arttu is confident that this

solution covers all their requirements and

will last them for many years.

It could have been difficult to predict

how performance might have been

affected as the number of users and

amount of data increased, but this was

resolved by deploying an SSD cache with

the Synology NVMe SSDs in place. This

handled substantial caching workloads in

this multi-user environment by making the

data available on the lower-latency NVMe SSDs instead of having to retrieve it from the slower hard disk drives. By deploying a shrewd hybrid storage system with HDDs and SSDs, Tampere enjoy maximum value from their disk array.

With Synology Active Backup for

Microsoft 365 deployed, the Tampere

team is now able to protect the school's

cloud workloads and lower ongoing costs

substantially.

"Synology is providing us a way to ensure

the safety of our data in the cloud,"

concludes Arttu Miettunen. "With Synology,

we're able to safeguard and restore our

data in Microsoft 365 services in case of

accidental deletion or data loss."

More info: www.synology.com



ROUNDTABLE: BACKUP

EVERY DAY IS A BACKUP DAY

THIS YEAR'S WORLD BACKUP DAY HAS COME AND GONE, BUT IT MIGHT BE THE LINGERING

IMPACT OF COVID-19 THAT HAS A DEEPER EFFECT ON ORGANISATIONS' BACKUP THINKING.

STORAGE MAGAZINE GATHERED THE THOUGHTS OF INDUSTRY LEADERS

Just as a dog isn't just for Christmas, it is

increasingly clear that backup isn't

something we should only think about on

World Backup Day. This March saw the

landmark tenth WBD, but it is fair to say that

we haven't seen ten years of measurable

improvements in how organisations plan and

manage their backup and restore processes.

As data volumes soar and interconnectivity

spreads ever wider it might look like those

who evangelise about backup are fighting a

losing battle.

But the recent changes to all our working

patterns forced on us by the pandemic and

lockdown have brought a renewed focus for

many at board level on the importance of a

defined - and tested - backup strategy.

According to Nick Turner, VP EMEA, Druva:

"Whilst we've celebrated a decade's worth of

World Backup Days, this past year has tested

the ability to protect business data like no

other. According to our Value of Data report,

since the onset of the pandemic, IT leaders in

the UK and US have reported an increase in

data outages (43%), human error tampering

data (40%), phishing (28%), malware (25%)

and ransomware attacks (18%)."

IS YOUR BACKUP FIT FOR PURPOSE?

So is World Backup Day really anything more

than a PR opportunity for vendors? Zeki

Turedi, CTO EMEA at CrowdStrike says:

"Milestones like World Backup Day act as

reminders for IT professionals to look again

at their IT architecture and confirm it's still fit

for purpose. Like so many organisations

around the world, the last year taught us that

workers can adapt how they work, but our IT

infrastructure in some cases is not as flexible.

What the pandemic hasn't done at all is slow

the growth in threats posed to organisations."

It is important to understand the difference

between backup and business continuity,

argues Adrian Moir, Lead Technology

Evangelist at Quest Software: "Businesses

have rapidly adapted to remote working, and

many employees are now operating and

accessing data away from the traditional

corporate office. While the best practices

around data protection and recovery are still

there, it is critical that businesses evolve their

strategies just in the same way that our




"On-premises backup solutions have run out of favour due to

their expensive hardware requirements and inability to scale.

Cloud backup enables tapping into cloud economies of scale,

as well as being off premises, thus protecting against

catastrophic site failures such as fire or flood." - Aron Brand, CTERA

approach to data and access changes. We

also need to move away from the concept of

focusing just on backup. In order to get this

right, organisations need to consider

continuity - ensuring they have a platform in

place that will not only recover the data but

will do so with minimal downtime."

Has the growth in home-working pushed

more enterprise data into the cloud?

Though the move to public and hybrid

clouds is still seeing growth, the demand for

on-premises data backups is still buoyant,

as Alexander Ivanyuk, technology director at

Acronis explains: "Companies that deal with

very sensitive data such as government,

military, research, pharmaceuticals, and so

on, still prefer to keep data on site or in a

private cloud."

BEST OF BOTH WORLDS

Aron Brand, CTO at CTERA, supports the

thinking that on-premise backups may fade

over time in favour of cloud-based options:

"On-premises backup solutions have run out

of favour due to their expensive hardware

requirements and inability to scale. Cloud

backup enables tapping into cloud

economies of scale, as well as being off

premises, thus protecting against

catastrophic site failures such as fire or

flood." That said, hybrid solutions can help

organisations enjoy the best of both worlds;

with a backup on-premises and one in the

cloud, IT teams can back up sensitive data

(safely in house) and maintain cost-effective

and flexible scalability during major demand

increases (through the cloud). More and

more hybrid options are coming to market,

in response to a trend of organisations hesitant to lock all of their data into the cloud. Hybrid

could indeed be the way to go according to

Christophe Bertrand, senior analyst at ESG

Global: "Everything's going to be hybrid, and

for a long time. Especially in terms of backup

and recovery."

Sascha Giese, Head Geek at SolarWinds,

highlights the importance of testing, wherever

your data is being backed up to: "From both

a business and personal point of view, we

are well placed to take advantage of the

cloud technologies that make data backup a

very simple process. That said, we still need

to treat data backup as a top priority. Despite

having the cloud platforms in place that

enable fairly quick recovery, IT professionals

should still be taking matters into their own

hands and ensuring in today's data heavy

environment, everything is backed up.

"This year, more than ever, I encourage IT

professionals everywhere to do two things.

First, take the '3, 2, 1' approach - create

three working backups, stored in two

different places, with one always being stored

offsite. Second, test! Treat and plan for a

data loss in the same way that you would for

a fire drill. Make sure you are regularly

testing for any disasters that cause data loss

and try to find ways that you can improve

disaster recovery. If you take these small

steps, any data loss can be rectified very

quickly with minimum downtime."
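The '3, 2, 1' rule Giese describes is simple enough to encode as a sanity check over a backup inventory. The sketch below is purely illustrative; the inventory format is an assumption, not any product's schema:

```python
def satisfies_3_2_1(copies):
    """Check a list of backup copies against the 3-2-1 rule:
    at least 3 copies, on at least 2 different media, 1 offsite.
    Each copy is a dict like {"medium": "disk", "offsite": False}."""
    media = {c["medium"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and offsite

backups = [
    {"medium": "disk",  "offsite": False},   # primary on-site copy
    {"medium": "tape",  "offsite": False},   # second medium
    {"medium": "cloud", "offsite": True},    # offsite copy
]
print(satisfies_3_2_1(backups))  # True
```

Dropping any one of the three copies, or keeping everything on one medium or one site, makes the check fail - which is exactly the kind of gap a regular fire-drill-style test should surface.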

AT YOUR SERVICE?

The rush to cloud and containerisation brings

new risks, argues Druva's Turner: "The secret

to supporting a successful hybrid workforce

will be in recognising how the industry has

evolved and the gaps which may have been

overlooked in the rush to complete projects.

As we've surged the deployment of SaaS

applications, we need to acknowledge that

being the target of a cyber attack is now

almost inevitable. Therefore, prioritising data

protection in the cloud to prepare is vital.

"Remember, a robust approach to data

resiliency should include detection,

remediation, and recovery. Relying on

preventative measures is no longer sufficient.

With critical data, including ongoing research

around COVID-19 and vaccination trials,

being shared around the world, the stakes for

data protection have never been higher. It's

time we take a hard look at the existing

frameworks and leverage the latest

technologies to meet this moment."

Does a shift to the cloud mean that everyone




"Businesses have rapidly adapted to remote working, and many

employees are now operating and accessing data away from the

traditional corporate office. While the best practices around data

protection and recovery are still there, it is critical that businesses

evolve their strategies just in the same way that our approach to

data and access changes. We also need to move away from

the concept of focusing just on backup." - Adrian Moir, Quest Software

will move to a backup-as-a-service model?

As is so often the case, the question gets an 'it

depends' answer from most of our experts. "UK

organisations are aware that over-reliance on

legacy IT and data protection tools poses an

immediate threat to their ongoing DX

initiatives," said Dan Middleton, Vice President

UK&I at Veeam. "Over half of firms across the

country now use a third-party backup service

to help protect the data of critical remote work

applications such as Microsoft Office 365,

according to the Veeam Data Protection

Report 2021. Moving to subscription-based

data protection services will enable UK

companies to take advantage of more cost-effective solutions with the flexibility to pay only

for the services they use. This can ensure

processes such as software updates, patching

and testing are automated as opposed to

relying on manual protocols, providing

greater data protection while allowing

businesses to de-risk their transformation and

business continuity initiatives."

SEEING THE TRUE COSTS

Krista Macomber, senior analyst at the

Evaluator Group, describes some of the

factors organisations should consider if

thinking about switching to the cloud: "There

are a whole host of factors that go into

determining the total cost of ownership of a

backup solution that leverages the public

cloud. Egress fees, how much data is being

protected, how much that data is growing,

and how long it must be retained for, are just

a few factors."
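Those factors can be folded into a rough back-of-envelope model. The sketch below is illustrative only: the prices are invented placeholders, not any provider's rates, and a real TCO calculation would include retention tiers, API request fees and growth compounding:

```python
def monthly_cloud_backup_cost(stored_tb, growth_tb_per_month,
                              storage_per_tb=20.0,
                              restore_tb=0.0, egress_per_tb=80.0):
    """Rough monthly cost model: storage on the protected data set
    plus egress fees on any data restored back on-premises.
    All prices are illustrative placeholders."""
    storage = (stored_tb + growth_tb_per_month) * storage_per_tb
    egress = restore_tb * egress_per_tb
    return storage + egress

# A 50 TB estate growing 2 TB/month, with a 10 TB test restore:
print(monthly_cloud_backup_cost(50, 2, restore_tb=10))  # 1840.0
```

Note how a single sizeable restore nearly doubles the month's bill in this toy model - which is why egress fees deserve a line of their own in any comparison with on-premises options.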

Egress fees are indeed not a cost to be

ignored, as the migration of significant

amounts of data back to on-premises can

easily run into very considerable sums. In

addition, the cost of hosting backups in the

cloud goes beyond the fees attached to the

storage of that data, as ESG's Bertrand

explains: "I don't think we're there yet in terms

of fully understanding the actual costs of cloud

backup. The one thing that's more important

than the cost of the backup and recovery is the

cost to the organisation if they are not able to

recover data."

Scality's Chief Product Officer Paul Speciale

concurs with this view, describing what to look

out for when opting for the cloud approach:

"As with all things in IT, we need to carefully

consider the overall cost of ownership for

backup solutions, including the trade-off

between shifting capital expenditures to

operational savings in the cloud. While it can

be true that cloud backups save money, it can

also be true that they are more expensive than

on-premises solutions. Hosted backups or

Backup-as-a-Service (BUaaS) offerings are

popular and widely embraced and do indeed

reduce the burden on IT administrators from a

time perspective, which has a bearing on

backup cost. Also, the cloud promises more

choices of classes of storage with

performance and cost trade-offs."

Looking ahead, both Evaluator Group's

Randy Kerns and Krista Macomber suggest

that Backup-as-a-Service will be popular in the

coming months, with Kerns saying: "I think the

key for IT operations will be evaluating Backup

as a Service options. Vendors will work on

developments in this area," and Macomber

adding: "I also think we'll see an ongoing tick

towards service-based delivery. This may mean

a public cloud-based solution, or it might

mean a managed services-based approach."

The last word goes to Sarah Doherty of iland

who sums up what many of our commentators

have said: "The importance of backup is often

overlooked by the latest security scare or large

attack making headlines. In most cases, the

focus is on other details rather than creating a

plan to keep all data safe and available from

any of these events. Both internal and external

threats are on the rise. In today's uncertain

times, keeping data safe and recoverable is

more important than ever. Let's take World

Backup Day as a reminder for your

organisation to create a backup and recovery

plan of action. The increase in disastrous

events, whether from nature, human error,

cyber-attacks or ransomware, makes it that

much more critical for organisations to

consider all that they have to lose and

highlights the need to create the right backup

and recovery solution." ST




MANAGEMENT: INTERNET OF THINGS

AS IOT EXPANDS, ONE SIZE WON'T FIT ALL

FROM MEDICAL WEARABLES THROUGH SEARCH-AND-RESCUE DRONES TO SMART CITIES, CHECHUNG

LIN, DIRECTOR OF TECHNICAL PRODUCT MARKETING AT WESTERN DIGITAL, DESCRIBES WAYS TO

OPTIMISE THE EVER-GROWING VOLUMES OF IOT DATA USING PURPOSE-BUILT STORAGE

Whilst the digital environment has

been expanding rapidly for many

years, the pandemic ushered in,

by necessity, a degree of digital

transformation that is unprecedented in

both its scale and scope. With

organisations throughout private and public

sectors alike forced to roll out digital

systems, there has been a sharp uptake in

the adoption of connected technologies.

As the Internet of Things (IoT) landscape

experiences large-scale growth - from

automated supply chains to help maintain

social distancing, to more efficient and

convenient smart cities and vehicles - the

amount of data produced grows rapidly,

as well. It is estimated that by 2025,

connected IoT devices will generate

73.1 zettabytes of data.

Not only does this data need to

be captured, it also needs to be

stored, accessed and transformed

into valuable insights. This process

requires a comprehensive data

architecture that can

accommodate the demands of a

large range of applications

throughout the data journey.

WHAT IS THE IOT DATA

JOURNEY?

The vast majority of IoT data is

stored in the cloud, where high-capacity drives - now reaching 20TB -

store massive amounts of data for big

data and fast data workloads. These

could include genomic research, batch

analytics, predictive modelling, and supply

chain optimisation.

For some use cases, data then

migrates to the edge,

where it is often

cached in distributed edge servers for real-time applications

such as autonomous vehicles, cloud

gaming, manufacturing robotics, and

4K/8K video streaming.

Finally, we reach the endpoints, where

data is generated by connected machines,

smart devices, and wearables. The key aim

here is to reduce network latencies and

increase throughput between these layers

(cloud-to-endpoints and endpoints-to-cloud)

for data-intensive use cases. A

potential solution could be 5G, by using

millimetre wave (mmWave) bands between

20-100 GHz to create "data

superhighways" for latency- and bandwidth-sensitive

innovations.

WHAT IS THE VALUE OF YOUR

IOT DATA?

Data infrastructure is critical in our digital

world as data must be stored and analysed

quickly, efficiently, and securely. Thus, data

architectures need to go beyond simple

data capture and storage to data

transformation and creating business

value, in a 'value creation' approach.

Examples include:

Autonomous vehicles - These vehicles

are loaded with sensors, cameras,

LIDAR, radar, and other devices

generating so much data that it is

estimated it will reach 2 terabytes per

day. That data is used to inform real-time driving decisions using

technologies such as 3D-mapping,

advanced driver assistance systems

(ADAS), over-the-air (OTA) updates, and

vehicle-to-everything (V2X)

communication. In addition, IoT data

creates value in personalised

infotainment and in-vehicle services that




improve the passenger experience. In

order to enable real-time decision

making, which is crucial for passenger

safety, the priority for this data

architecture is reducing network

latencies, along with enabling heavy

throughput to facilitate predictive

maintenance.

Medical wearables - It has been

predicted that in 2021, worldwide end-user spending on wearable devices will

total US$81.5 billion. These devices

generate important data to track sleep

patterns, measure daily movements,

and identify nutrition and blood oxygen

levels. This IoT data can be transformed

into daily, monthly, and yearly trends

that can identify opportunities to

improve health habits using data-informed decisions. Such data could

also create more personalised and

proactive treatments, especially as

telehealth and remote healthcare

continue to progress, even after the

pandemic subsides. Here, the storage

priority for data architecture is offering

long-term retention for critical health

records.

In addition, the following IoT applications

provide key examples as to why storage

considerations vary according to each

specific case, and how the requirements

can be met.

Search-and-rescue drones - This is a

key example of an IoT use case which

requires a very specific data storage

solution to get maximum value from the

application. Such drones are often

required to operate in harsh natural

environments with extreme temperatures

and weather patterns. Therefore, the

storage solutions used in these

technologies must be especially durable

and resilient, such as high-endurance, highly reliable

industrial-grade e.MMC and UFS

embedded flash drives.

Search-and-rescue drones are also

commonly used in combination as part of a

wider network, utilising optimised routes

and shared automated missions. This

means that the data architecture must be

scalable, enabling the operation of multiple

technologies in conjunction with extreme

efficiency, performance, and durability.

Smart cities - For smart cities to

function, they require the storage of

huge amounts of both archived and

real-time data. In order to analyse and

act on real-time data, IoT technologies

are relying on storage at the edge and

endpoints. For example, smart public

transport systems require real-time data

on traffic, in order to quickly and

accurately adjust to spikes in demand,

such as rush-hour traffic. This means

that, similar to smart cars, this

application requires data storage that

facilitates low network latencies.

The storage for archival data, in

comparison, requires less of a focus on

real-time rapid transfer, instead prioritising

long-term retention. Here, cloud solutions

come into play. Intelligent carbon mapping

tools enable another IoT use case which

relies on historical data of carbon emissions

in order to identify trends and deploy

carbon reduction measures.

GENERAL-PURPOSE TO PURPOSE-BUILT ARCHITECTURE

Various connected technologies have

different requirements when it comes to

how data must be stored in the most

appropriate way and how to get the best

value from it. For example, NVMe storage

solutions are ideal for use cases that

require very high performance and low

latency in the data journey. Specialised

storage is therefore necessary in order to

create optimum value from IoT data, which

must be considered when building out the

wider data infrastructure.

Many businesses, however, still use

general-purpose architecture to manage

their IoT data. This architecture does not

fully meet the varying needs of IoT

applications and workloads for consumers

and enterprises.

For example, whilst search and rescue

drones prioritise endurance and resilience,

storage solutions in digital healthcare

applications must focus on offering long-term retention and security for critical health

records. Therefore, there must be a move

from general-purpose storage to purpose-built data storage, with different solutions for

different needs.

For any data architecture, the goal is to

maximise the value of data. For real-time

IoT use cases, your storage strategy has to

be designed specifically for IoT, and

address the following considerations:

1. Accessibility: what is its serviceability,

connectivity and maintenance?

2. Wear endurance: is it WRITE-intensive or READ-intensive?

3. Storage requirements: what data and

how much needs to be processed, analysed

and saved at the endpoints, at the edge,

and in the cloud?

4. Environment: what is the altitude,

temperature, humidity and vibration levels

of the environment in which data will be

captured and kept?
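As an illustration only, those four considerations can be folded into a simple decision sketch. The categories and thresholds below are assumptions for the sake of example, not Western Digital's product matrix:

```python
def recommend_storage(write_intensive: bool, harsh_environment: bool,
                      low_latency: bool, retention_years: int) -> str:
    """Map the four considerations (endurance, environment,
    accessibility/latency, retention) to a broad storage class.
    Categories and thresholds are illustrative assumptions."""
    if harsh_environment:
        # e.g. search-and-rescue drones in extreme conditions
        return "industrial-grade embedded flash (e.MMC/UFS)"
    if low_latency:
        # e.g. smart-city transport systems acting on real-time data
        return "NVMe SSD at the edge"
    if retention_years >= 5 and not write_intensive:
        # e.g. archival health records or carbon-emissions history
        return "high-capacity HDD or tape in the cloud"
    return "general-purpose SSD/HDD mix"

print(recommend_storage(False, True, False, 1))
```

Even a toy mapping like this makes the article's point concrete: the same question asked of a drone, a wearable and a smart-city archive yields three different storage answers.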

SPECIALISATION FOR OPTIMISATION

Taking optimal advantage of the evolving

IoT data landscape means using specialised

storage solutions to bring unique business

value. It is no longer sufficient to rely on

standard, 'one size fits all' storage solutions,

when the requirements for different IoT

applications vary so drastically. The

deployment of innovative and specific data

storage solutions will help businesses and

enterprises to navigate the accelerating

journey of the IoT landscape, and will

ensure that the value of data isn't lost

unnecessarily in the process.

More info: www.westerndigital.com



CASE STUDY: CANAL EXTREMADURA

FOCUSED ON DELIVERING CONTENT,

INSTEAD OF WORRYING ABOUT WHERE IT'S

STORED

SPANISH TV NETWORK CANAL EXTREMADURA HAS REVAMPED ITS I.T. INFRASTRUCTURE WITH

QUANTUM STORNEXT FILE SYSTEM SOFTWARE AND TAPE SOLUTIONS

Headquartered in Merida, Spain,

Canal Extremadura is in the middle

of a large-scale digital

transformation. To make the transition from

a traditional radio and TV business to a

modern multimedia corporation, the

company needed to upgrade its outdated

and complex IT infrastructure. By adopting

the Quantum StorNext File System software

as part of its archive solution, it has

accelerated the retrieval of media projects

and achieved scalability for its rapidly

growing business.

The company's existing archive had

become a significant pain point. "We ran

out of room in the tape library and had to

migrate some video to a NAS just to free

up space," said Francisco Reyes, technical

chief at Canal Extremadura.

Unfortunately, expanding the system was

not financially feasible.

Canal Extremadura's new archive

solution needed to merge

seamlessly with

its

preferred media asset management (MAM)

system from Dalet, which is at the centre of

its media production and post-production

workflow. Additionally, the new archive

needed to enable a smooth transition from

the existing large-scale environment, which

contained a huge volume of old files in

legacy media formats.

SCALABILITY IN THE ARCHIVE

Canal Extremadura's IT group requested

proposals from multiple storage vendors,

but ultimately chose Quantum based on

Dalet's recommendation. This carried

significant weight, especially given the

importance of integrating the archive with

the MAM system in order to achieve the

flexibility and scalability needed.

"We tend to keep solutions for a very long

time-we had been using the

previous system for

about 12

years - so we needed to be very confident

in a new solution before making the

selection," says Reyes. "The advice and

technical information we received from the

Dalet and Quantum teams was very

helpful. They gave a very clear picture of

how the solution would work and how it

would be implemented."

After consulting with Dalet and Quantum,

the IT group decided on a solution based

on Quantum StorNext File System software

with Xcellis storage servers, an Xcellis

metadata array, a QXS disk storage array,

and a StorNext AEL6000 tape library. The

tape library, which has 400 slots, uses LTO-8 drives - a notable upgrade from the LTO-3 drives the company was using previously.

The environment is fully integrated with the




"We have more than 100 TB of online storage from Quantum. So if someone has

completed a project six months ago, it will probably still be online. Adding online

storage to our previous system would have been much too costly - that's really not

how that system was designed. For us, the Quantum StorNext approach works

much better. In the past, users knew they had to wait for content to be retrieved from

the archive. Now it's much faster than before. We have more drives and faster

drives with the Quantum archive."

Dalet Galaxy MAM system.

The networking flexibility of the

Quantum platform has been beneficial

for the IT group in supporting a range of

client systems. Specifically, the storage

environment is configured to offer Fibre

Channel connectivity to 10 SAN clients

plus 10-GbE connections to multiple NAS

clients, while the metadata network uses

1 GbE.

MAKING CONTENT READILY

AVAILABLE

Thanks to the StorNext File System

software and integrated online storage,

Canal Extremadura's journalists,

producers, and other team members can

now retrieve content much faster than

before.

"We have more than 100 TB of online

storage from Quantum. So if someone

has completed a project six months ago,

it will probably still be online," says Reyes.

"Adding online storage to our previous

system would have been much too costly -

that's really not how that system was

designed. For us, the Quantum StorNext

approach works much better."

Even when content has been archived to tape, the IT group can deliver it to users swiftly. "In the past, users knew they had to wait for content to be retrieved from the archive," says Reyes. "Now it's much faster than before. We have more drives and faster drives with the Quantum archive."

Transitioning to the latest LTO technology has also helped expedite retrieval. By upgrading from LTO-3 to LTO-8, Canal Extremadura can store significantly more data on each tape. Consequently, there is a greater chance that each retrieval request can be satisfied without having to load multiple tapes.
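To put the generational jump in context, a rough calculation using the published native (uncompressed) capacities of the two formats shows why multi-tape retrievals become far rarer. The 400-slot count comes from the case study; the per-cartridge capacities are the standard published figures, and the 2 TB example project is an illustrative assumption:

```python
import math

# Published native (uncompressed) capacities per cartridge.
LTO3_NATIVE_TB = 0.4   # 400 GB per LTO-3 tape
LTO8_NATIVE_TB = 12.0  # 12 TB per LTO-8 tape
SLOTS = 400            # library slot count from the case study

# Total native library capacity at each generation.
lto3_library_tb = SLOTS * LTO3_NATIVE_TB   # ~160 TB
lto8_library_tb = SLOTS * LTO8_NATIVE_TB   # 4,800 TB (4.8 PB)

# A hypothetical 2 TB project: tape loads needed to restore it in full.
project_tb = 2.0
loads_lto3 = math.ceil(project_tb / LTO3_NATIVE_TB)  # 5 tapes
loads_lto8 = math.ceil(project_tb / LTO8_NATIVE_TB)  # 1 tape
```

A thirty-fold jump in per-cartridge capacity means most restores now touch a single tape, which is exactly the effect Reyes describes.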

Explaining the benefit of faster archival retrieval for users, Reyes says, "Journalists might be in a hurry to assemble a new video for that day's news broadcast. With a faster archive, we can help them meet their deadlines."

GET UP TO SPEED FAST

To ensure Canal Extremadura gets the most out of its new archive solution, Quantum provided multi-day onsite training. At the same time, the Dalet implementation team helped Canal Extremadura migrate its existing archive to the Quantum environment - a process that involved transcoding some archived content from legacy formats. "The process took some time because we had a lot of data to migrate, but it was quite smooth," says Reyes.

SIMPLIFYING SUPPORT, ENHANCING COMPATIBILITY

The StorNext File System software has helped consolidate a complex archive environment that previously comprised systems from multiple vendors. Collaborating with a single vendor removes some of the potential compatibility problems of a multi-vendor environment. It also simplifies ongoing support, as the IT group has a single point of contact for the Quantum environment if it ever needs to address issues or make changes.

The StorNext software platform facilitates seamless data movement from online disk storage to the tape library. The integrated environment works with the Dalet MAM system to support a complete production and post-production workflow, from ingest to archiving. The new archive environment provides the long-term scalability to support the organisation's multimedia transformation.

"If we ever need to expand the archive in the future, we can simply add tapes - it's very straightforward," says Reyes in conclusion. "With a scalable archive, our company can stay focused on delivering engaging content instead of worrying about where to store it."

More info: www.quantum.com



TECHNOLOGY: NAS

THE FUTURE OF SHARED STORAGE? IT HAS TO BE NAS

WITH THE GREATER PERFORMANCE, FUNCTIONALITY AND EASE OF USE OF NAS, IT IS INCREASINGLY HARD TO JUSTIFY THE NEED FOR A SAN IN MODERN CREATIVE WORKFLOWS, ARGUES BEN PEARCE OF GB LABS

As technology moves forward and IP connectivity continues to revolutionise workflows in the media industry, we are starting to see SAN (Storage Area Network) as an inconvenient and overly complicated way of sharing our digital storage amongst the various platforms that most businesses use. This article looks at the differences between the two technologies and highlights the major advantages that NAS (Network Attached Storage) offers to modern businesses.

SAN LIMITATIONS

A SAN architecture is required when providing 'block level' shared access to hard drive storage for multiple workstations. Access and management of the storage comes from the MDC (Meta Data Controller), which introduces a big limitation on how many users can simultaneously access a particular share point.

This number is generally no more than 20 machines, and the problem is known as meta data contention. Whilst an MDC can fail over to a backup MDC, there can be only one active MDC, and its workload cannot be load balanced across multiple machines, so additional MDCs are literally redundant until required.

SANs tend to suit particular operating systems, meaning that it is rare to have PC, Mac and Linux machines working together. The fact that client software needs to be installed also prevents certain workstations or servers from being connected at all, and limits compatibility with the many generations of operating systems in use.

Most SANs are Fibre Channel based, so cards need to be installed into workstations, and specific cables, switches and transceivers are needed. In addition, the management of the SAN is done through standard Ethernet networks.

THE NAS DIFFERENCE

A NAS (Network Attached Storage) is a storage server that offers its own connected storage as 'file level' shares to a network of Ethernet-connected clients, using a variety of sharing protocols for maximum compatibility and flexibility. No software needs to be installed, and no extra hardware (such as a Fibre Channel card) is required. NAS works with standard Ethernet networks, which keeps costs low and flexibility high.

NAS can potentially connect to anything and encourages collaboration between all the platforms within an organisation. Unlike SAN, NAS does not suffer meta data contention and therefore allows many more users on a share point and much greater scalability in terms of users and performance.

The functionality of a NAS is far greater than that of a SAN: analytics, bandwidth control, quotas, cloud integration, AD synchronisation, profiles and monitoring are just some of the additional features a NAS can bring. As mentioned before, the NAS is a storage server and can therefore run many beneficial applications and workflow tools that are simply not possible on a SAN MDC.

EASIER SCALABILITY

In a SAN, more users mean more switch ports and more cost, but the really big problem is the uplinks between switches. The uplinks in Fibre Channel switches create bottlenecks that cannot be ignored. Every switch port should be able to deliver full bandwidth, but if the uplink from another switch is a fraction of the total port bandwidth, then the performance per port becomes truly sub-optimal.
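As a sketch of that bottleneck, consider an edge switch whose clients all read through a trunked inter-switch link at once. The port count and speeds below are illustrative assumptions, not figures from the article:

```python
# Illustrative Fibre Channel fabric: 24 clients on an edge switch,
# with two 16Gb ISLs trunked back to the core where the storage sits.
EDGE_PORTS = 24
PORT_GBPS = 16.0          # nominal speed of each client port
UPLINK_GBPS = 2 * 16.0    # aggregate inter-switch uplink bandwidth

# When every client reads simultaneously, the shared uplink - not the
# client port - sets the ceiling on per-port throughput.
per_port_gbps = min(PORT_GBPS, UPLINK_GBPS / EDGE_PORTS)
utilisation = per_port_gbps / PORT_GBPS  # fraction of nominal port speed

print(f"{per_port_gbps:.2f} Gb/s per port ({utilisation:.0%} of nominal)")
```

Under these assumed numbers each client sees roughly 1.33 Gb/s, a small fraction of its 16Gb port, which is the per-port degradation the paragraph above describes.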

Ethernet switches are easier to deploy, with faster uplinks and ultrafast backplanes available within blade switches. Multiple networks can easily be attached to the NAS, allowing good network design to eliminate bottlenecks.

Some NAS platforms support dynamic scaling of capacity, meaning almost no downtime. Adding storage to a SAN - especially resizing existing volumes - is usually a 'data off, expand and then copy back' procedure, wasting days of downtime.

FLEXIBLE COST

Licenses are a big part of the cost and inflexibility of SAN ownership. Each user, including the MDC and any failover MDCs, must be licensed, either as a one-off cost or as annual ongoing expenditure. Specific additional hardware is also required, such as Fibre Channel switches and cards.

NAS does not require software licenses and most likely requires no additional hardware or software installation. Almost all computers come as standard with at least one 1Gb Ethernet port, and standard network hardware is cheap and easy to source. For higher bandwidth usage, 10Gb, 40Gb or 100Gb Ethernet can be added to a client machine in the form of a PCI card or Thunderbolt/USB-C interface to dramatically improve performance.
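The licensing arithmetic can be sketched for a small facility. Every figure below is a hypothetical placeholder rather than real vendor pricing, and the 20-client count simply echoes the practical MDC limit mentioned earlier:

```python
# Hypothetical per-seat figures - placeholders, not real vendor pricing.
SAN_SEAT_LICENCE = 1500   # one-off SAN client licence per machine (assumed)
MDC_COUNT = 2             # one active MDC plus one failover, both licensed
FC_HBA = 800              # Fibre Channel card per workstation (assumed)
CLIENTS = 20

# SAN: every client and every MDC needs a licence, plus an HBA per client.
san_extra_cost = (CLIENTS + MDC_COUNT) * SAN_SEAT_LICENCE + CLIENTS * FC_HBA

# NAS: clients mount shares over the Ethernet port they already have,
# so the comparable per-seat software/hardware line item is zero.
nas_extra_cost = 0

print(f"SAN extras: {san_extra_cost}, NAS extras: {nas_extra_cost}")
```

Whatever the real prices, the structural point holds: SAN costs scale per seat in both software and hardware, while NAS piggybacks on connectivity the clients already own.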

MAKING CONNECTIONS

Looking at the speed of connections available today, it is easy to see how NAS is surpassing SAN:

NAS options: 100Gb, 40Gb, 10Gb and 1Gb Ethernet.
SAN options: 32Gb, 16Gb, 8Gb and 4Gb Fibre Channel.

Copper or optical cables can be used with SAN or NAS, and very large distances can be achieved with optical cable and advanced transceivers.
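The headline gap can be made concrete by converting those nominal line rates to bytes per second. This is a deliberately rough sketch that ignores encoding and protocol overhead, and the four-way bonding example is an assumption rather than a quoted configuration:

```python
# Nominal line rates from the lists above, in gigabits per second.
nas_ethernet_gbps = [100, 40, 10, 1]
san_fc_gbps = [32, 16, 8, 4]

def to_gbytes(gbps: float) -> float:
    """Convert nominal Gb/s to GB/s, ignoring encoding/protocol overhead."""
    return gbps / 8

fastest_nas = to_gbytes(max(nas_ethernet_gbps))  # 12.5 GB/s
fastest_san = to_gbytes(max(san_fc_gbps))        # 4.0 GB/s

# Channel bonding at the server end: e.g. four 100GbE links aggregated.
bonded_nas = 4 * fastest_nas                     # 50.0 GB/s

print(f"NAS {fastest_nas} GB/s vs SAN {fastest_san} GB/s "
      f"(bonded server end: {bonded_nas} GB/s)")
```

Even before bonding, top-end Ethernet outruns top-end Fibre Channel by roughly three to one on nominal rates alone.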

As the comparison above shows, Ethernet connectivity surpassed Fibre Channel many years ago. Additionally, server-end connections can be channel bonded to produce very fast interfaces that serve large numbers of clients and provide cable redundancy. Load balancing connections in a SAN is far less flexible and not truly compatible across platforms such as Mac OS.

COMPLEX SUPPORT

SAN is comparatively complicated and involves many more elements, which in turn bring many more possible points of failure. Operating systems need to be matched and software needs to remain compatible after updates, or data is simply not available because the storage cannot be mounted. Deployments are very involved, and installation time, training and ongoing support are considerable.

CLEAR CHOICE

The biggest potential issue with NAS is that most systems are not built for demanding usage and large scalability, so the choice of NAS is restricted to manufacturers that genuinely understand high bandwidth usage and can provide sustained performance for mission-critical use. By comparison, a SAN is very restrictive, complicated and expensive, and only really achieves the simple function of sharing storage.

'Block level' access can be beneficial for certain uses, but the reduced latency and improved efficiency found in modern high performance NAS storage systems mean that this marginal benefit has lessened over time. If you are looking for large capacity, scalable shared storage that will connect to everything in your facility, then the choice is clear.

More info: www.gblabs.com/component/k2/the-future-of-shared-storage-is-nas



OPINION: CLOUD STORAGE

PANDEMIC ACTS AS 'CLOUD CATALYST' FOR REMOTE WORKING

THE COVID-19 PANDEMIC HAS FORCED BUSINESSES TO EVOLVE QUICKLY AND ADJUST TO THE NEW WORKING DYNAMIC - BUT SOME HAVE BEEN BETTER PREPARED THAN OTHERS, EXPLAINS RUSS KENNEDY, CHIEF PRODUCT OFFICER, NASUNI

Since 2017, the Microsoft Teams user base has grown astronomically, to well beyond 100 million users, and take-up of Virtual Desktop Infrastructure (VDI) and Desktop as a Service (DaaS) has exploded as a consequence of work moving out of the office. Organisations have had to find ways to ensure workers can remain productive as part of this shift. In the past, VDI deployments were sold as IT cost-saving efforts, which didn't always play out. Performance also suffered, because virtual infrastructures had to reach over the wire to access the files end users needed. With VDI and DaaS now being delivered from the cloud, flexibility and performance are enabling the 'work from anywhere' use case.

The game has changed dramatically, however, with desktop virtualisation now more about business continuity and remote productivity than cutting costs - and the pandemic has forced many companies to move in this direction. Three businesses we've worked with recently provide good examples of how to use a combination of cloud file storage and a powerful cloud VDI provider to maintain productivity in difficult circumstances.

The first is global oil and gas services firm Penspen, which rapidly transitioned to VDI at the start of the pandemic, standing up Windows Virtual Desktop (WVD) instances in the Azure regions closest to its employees. During the same period, professional services provider SDL transitioned 1,500 global workers to Amazon WorkSpaces over the course of a weekend. And, after pandemic-related events shut engineering giant Ramboll out of a key data centre, the firm deployed a Nasuni VM in Azure and restored access for 300 remote users within two hours.

These examples demonstrate the transformative change at work across different industries - the incredible climb in cloud adoption and the move towards cloud-centric infrastructure. Moving servers and applications to the cloud has been the focus of infrastructure modernisation efforts for the past several years, but now companies large and small are looking for ways to leverage the benefits of the cloud for file storage. Cloud file storage is clearly helping enterprises deliver file data to users when and where needed, with great performance, as a productivity enabler.

The shift to remote or hybrid work is here to stay. From a file access and storage perspective, this is clear from the way Google Cloud now makes use of the same network that evolved to support YouTube. The same technology that loads an obscure video in less than a second works to ensure users can access the files they need on demand.

This is important because files are often the hardest piece of the puzzle. That's why enterprises need to be able to deliver file data to their users when and where they need it, with great performance. Cloud file storage makes that possible, and the latest approaches to enterprise file storage can drive efficiencies and lower costs by up to 70%.

At the same time, many large enterprises managing multi-petabyte environments need to be able to scale up without being constrained by hardware limits. The pandemic has driven a dramatic acceleration of the transition of anything on-prem in physical data centres to the cloud. That transition has put a significant strain on organisations, as they need to ensure they have all the capabilities they've grown accustomed to in the on-prem world, in the cloud.

No one's dipping their toes into the cloud world any more - they're diving in. Enterprises need to be able to deliver file data services to users when and where they need them, with great performance - and the evolution of cloud file storage is making that possible.

More info: www.nasuni.com



ENTERPRISE STORAGE AND SERVERS FROM LENOVO

Now at Northamber

• The UK's SMB, Mid-Market, Education and Local Government Infrastructure Specialists
• In-house configuration, design, demo and training centre
• Easy access to Lenovo's Alliance Partners
• Flexible payment options with Hardware as a Service

Talk to the Northamber Solutions experts today or for more details visit northamber.com/lenovo

Call us on 020 8296 7015

northamber.com/lenovo | follow us

©Northamber 2021 E and O.E. June '21
