STORAGE MAGAZINE
The UK's number one in IT Storage
May/June 2021
Vol 21, Issue 3

NAS VS. SAN: The future of shared storage
INTERNET OF THINGS: Where will all the data go?
STORAGE TIERING: Have your cake and eat it
DR-AS-A-SERVICE: A primer for SMBs

COMMENT - NEWS - NEWS ANALYSIS - CASE STUDIES - OPINION - PRODUCT REVIEWS
Keep ahead of the ransomware waves!
Ride with balance and poise. Over the horizon, ransomware attacks just keep on coming, sometimes daily, wave after wave.
From ransomware, unpredictable data capacity needs and compliance requirements to high standards of data availability, security and fast recovery, we equip our customers and partners to meet the future head-on, with modular, flexible and future-proof data management and business continuity solutions for the next generation of hybrid data centers.
When prevention fails, StorageCraft protects your data! One vendor, one solution, total business continuity.
www.StorageCraft.com
WHERE YOUR DATA IS ALWAYS SAFE, ALWAYS ACCESSIBLE, ALWAYS OPTIMIZED
CONTENTS
COMMENT ................................................................. 4
Backup is not just for March 31st

STRATEGY: HARD DISK DRIVES ....................................... 6
The amount of data grows globally by several billion terabytes every year as more machines and devices generate data - in the age of IoT, argues Rainer Kaese of Toshiba, HDDs remain indispensable

INTERVIEW: INFINIDAT ................................................. 8
Storage magazine editor Dave Tyler caught up with Phil Bullinger, Infinidat's new CEO, to discuss the challenges of taking on a new role in the middle of a global crisis

CASE STUDY: TOTAL EXPLORATION & PRODUCTION ........... 10
Total has migrated its offshore oil and gas platforms to a hyperconverged infrastructure that delivers space efficiencies along with improved replication and DR

RESEARCH: DATA PROTECTION ..................................... 12
New research shows 58% of backups are failing, creating data protection challenges and limiting the ability of organisations to pursue Digital Transformation initiatives

CASE STUDY: SAVE THE CHILDREN SPAIN ....................... 14
Non-profit organisation Save the Children Spain has reduced the costs of its backup and simplified disaster recovery thanks to NAKIVO Backup & Replication

MANAGEMENT: STORAGE TIERING ................................. 16
The ideal 'storage cake' has three equally important tiers, argues Craig Wilson, Technical Architect at OCF

CASE STUDY: IMPERIAL WAR MUSEUMS ......................... 18
Lifecycle Management Software scans and moves inactive data from primary storage to a 'perpetual tier' for long-term access and archival

OPINION: DISASTER RECOVERY AS A SERVICE ................. 20
As more and more SMBs are attracted to Disaster Recovery as a Service, Florian Malecki of StorageCraft outlines the key requirements of a DRaaS solution - what is important, and why

CASE STUDY: TAMPERE VOCATIONAL COLLEGE TREDU ...... 22
A college in Finland has implemented a hybrid solution that offers a unified portal for storage and backup of multiple services, eliminating the need to jump from one application to another

ROUNDTABLE: BACKUP ................................................ 24
This year's World Backup Day has come and gone, but it might be the lingering impact of Covid-19 that has a deeper effect on organisations' backup thinking. Storage magazine gathered the thoughts of industry leaders

MANAGEMENT: INTERNET OF THINGS ............................ 28
From medical wearables through search-and-rescue drones to smart cities, CheChung Lin of Western Digital describes ways to optimise ever-growing volumes of IoT data using purpose-built storage

CASE STUDY: CANAL EXTREMADURA ............................. 30
Spanish TV network Canal Extremadura has revamped its IT infrastructure with Quantum StorNext File System software and tape solutions

TECHNOLOGY: NAS ..................................................... 32
With the greater performance, functionality and ease of use of NAS, it is increasingly hard to justify the need for a SAN in modern creative workflows, argues Ben Pearce of GB Labs

OPINION: CLOUD STORAGE .......................................... 34
The Covid-19 pandemic has forced businesses to evolve quickly and adjust to the new working dynamic - but some have been better prepared than others, explains Russ Kennedy of Nasuni
COMMENT

EDITOR: David Tyler
david.tyler@btc.co.uk
SUB EDITOR: Mark Lyward
mark.lyward@btc.co.uk
REVIEWS: Dave Mitchell
PRODUCTION MANAGER: Abby Penn
abby.penn@btc.co.uk
PUBLISHER: John Jageurs
john.jageurs@btc.co.uk
LAYOUT/DESIGN: Ian Collis
ian.collis@btc.co.uk
SALES/COMMERCIAL ENQUIRIES:
Lyndsey Camplin
lyndsey.camplin@storagemagazine.co.uk
Stuart Leigh
stuart.leigh@btc.co.uk
MANAGING DIRECTOR: John Jageurs
john.jageurs@btc.co.uk
DISTRIBUTION/SUBSCRIPTIONS:
Christina Willis
christina.willis@btc.co.uk
PUBLISHED BY: Barrow & Thompkins Connexions Ltd. (BTC)
35 Station Square, Petts Wood, Kent BR5 1LZ, UK
Tel: +44 (0)1689 616 000
Fax: +44 (0)1689 82 66 22
SUBSCRIPTIONS:
UK: £35/year, £60/two years, £80/three years;
Europe: £48/year, £85/two years, £127/three years;
Rest of World: £62/year, £115/two years, £168/three years.
Single copies can be bought for £8.50 (includes postage & packaging).
Published 6 times a year.
No part of this magazine may be reproduced without prior consent, in writing, from the publisher.
©Copyright 2021 Barrow & Thompkins Connexions Ltd
Articles published reflect the opinions of the authors and are not necessarily those of the publisher or of BTC employees. While every reasonable effort is made to ensure that the contents of articles, editorial and advertising are accurate, no responsibility can be accepted by the publisher or BTC for errors, misrepresentations or any resulting effects.
BACKUP IS NOT JUST FOR MARCH 31ST
BY DAVID TYLER, EDITOR

Following on from last issue's thought-provoking and occasionally controversial interview with Eric Siron about backup, this time around we are pleased to feature an industry roundtable article developed in the weeks after World Backup Day. The storage sector has tried hard to make March 31st a significant date to remind people and organisations of the vital importance of backup, but it could perhaps be argued that by putting it on the same level as Star Wars Day or St. Swithin's Day, we are instead trivialising data protection and distracting users from the key point that, in fact, backup is something we should be constantly thinking about - and regularly testing!

As Nick Turner of Druva says in the article: "Whilst we've celebrated a decade's worth of World Backup Days, this past year has tested the ability to protect business data like no other." If something as world-changing as Covid-19 can't make us focus on protecting critical assets, what might?

Businesses have adapted remarkably rapidly to remote working, but while best practices around data protection and recovery still apply, it is critical that those businesses evolve their strategies in the same way that our approach to data and access has changed. Quest Software's Adrian Moir comments: "We also need to move away from the concept of focusing just on backup. In order to get this right, organisations need to consider continuity - ensuring they have a platform in place that will not only recover the data but will do so with minimal downtime."

Does a shift to the cloud mean that everyone will move to a Backup-as-a-Service model? As is so often the case, the question gets an 'it depends' answer from most of the experts in our article. Scality's Paul Speciale describes what to be wary of when opting for the cloud approach: "As with all things in IT, we need to carefully consider the overall cost of ownership for backup solutions, including the trade-off between shifting capital expenditures to operational savings in the cloud. While it can be true that cloud backups save money, it can also be true that they are more expensive than on-premises solutions."

It is surely no coincidence that World Backup Day falls just one day before April Fools' Day - there is a none-too-subtle suggestion that only an idiot isn't keeping a very watchful eye on how their backup processes are functioning. As Zeki Turedi of CrowdStrike concludes: "Milestones like World Backup Day act as reminders for IT professionals to look again at their IT architecture and confirm it's still fit for purpose." Amen to that.
Performance Begins Now
Introducing the Most Comprehensive Portfolio of X12 Server and Storage Systems with 3rd Gen Intel® Xeon® Scalable processors
Learn More at www.supermicro.com
© Supermicro and Supermicro logo are trademarks of Super Micro Computer, Inc. in the U.S. and/or other countries.
STRATEGY: HARD DISK DRIVES

RACK TO THE FUTURE

THE AMOUNT OF DATA WORLDWIDE GROWS BY SEVERAL BILLION TERABYTES EVERY YEAR AS MORE AND MORE MACHINES AND DEVICES ARE GENERATING DATA - BUT WHERE WILL WE PUT IT ALL? EVEN IN THIS AGE OF IOT, ARGUES RAINER KAESE OF TOSHIBA ELECTRONICS EUROPE GMBH, HARD DRIVES REMAIN INDISPENSABLE
Data volumes have multiplied in recent decades, but the real data explosion is yet to come. Whereas in the past data was mainly created by people, in the form of photos, videos and documents, with the advent of the IoT age machines, devices and sensors are now becoming the biggest data producers. There are already far more of them than people, and they generate data much faster than we do. A single autonomous car, for example, creates several terabytes per day. Then there is the particle accelerator at CERN, which generates a petabyte per second, although 'only' around 10 petabytes per month are retained for later analysis.
In addition to autonomous driving and research, video surveillance and industry are the key contributors to this data flood. The market research company IDC expects the global data volume to grow from 45 zettabytes last year to 175 zettabytes in 2025 (IDC "Data Age 2025" whitepaper, updated May 2020). This means that, within six years, almost three times as much data will be generated as existed in total in 2019 - an additional 130 zettabytes, or 130 billion terabytes.
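As a rough sanity check on those figures, here is a minimal sketch in Python, treating the 45 ZB baseline and 175 ZB forecast quoted above as assumptions, of the growth they imply:

```python
# Rough sanity check on the IDC figures quoted above (values as quoted, treated as assumptions).
base_2019_zb = 45       # zettabytes existing in 2019
forecast_2025_zb = 175  # zettabytes forecast for 2025
years = 2025 - 2019

additional_zb = forecast_2025_zb - base_2019_zb              # data added over the six years
multiple_of_2019 = additional_zb / base_2019_zb              # ~2.9x the 2019 total
cagr = (forecast_2025_zb / base_2019_zb) ** (1 / years) - 1  # implied compound annual growth

print(f"Additional data: {additional_zb} ZB (~{additional_zb} billion TB, since 1 ZB = 1 billion TB)")
print(f"Multiple of the 2019 total: {multiple_of_2019:.1f}x")
print(f"Implied growth rate 2019-2025: {cagr:.1%} per year")  # roughly 25% per year
```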
Much of this data will be evaluated at the point of creation - for example, in the sensors feeding an autonomous vehicle or production facility (known as edge computing). Here, fast results and real-time reactions are essential, so the time required for data transmission and central analysis is unacceptable. However, on-site storage space and computing power are limited, so sooner or later most data ends up in a data centre. It can then be post-processed, merged with data from other sources, analysed further and archived.
This poses enormous challenges for the storage infrastructures of companies and research institutions. They must be able to absorb a constant influx of large amounts of data and store it reliably. This is only possible with scale-out architectures that provide storage capacities of several dozen petabytes and can be continuously expanded. And they need reliable suppliers of storage hardware who can satisfy this continuous and growing storage demand. After all, we cannot afford for the data to end up flowing into a void. The public cloud is often touted as a suitable solution, but the reality is that the bandwidth for the data volumes being discussed is insufficient and the costs are not economically viable.
For organisations that store IoT data, storage becomes, in a sense, a commodity. It is not consumed in the true sense of the word but, like other consumer goods, it is purchased regularly and requires continuing investment. A blueprint for how storage infrastructures and storage procurement models can look in the IoT age is provided by research institutions such as CERN that already process and store vast amounts of data. The European research centre for particle physics is continuously adding new storage expansion units to its data centre, each of which contains several hundred hard drives of the most recent generation. In total, its 100,000 hard disks have reached a combined storage capacity of 350 petabytes.
THE PRICE DECIDES THE MEDIUM
The CERN example demonstrates that there is no way around hard disks when it comes to storing such enormous amounts of data. HDDs remain the cheapest medium that meets the dual requirements of storage space and easy access. By comparison, tape is very inexpensive but, as an offline medium, is only appropriate for archiving data.
Flash memory, on the other hand, is currently still eight to ten times more expensive per unit capacity than hard disks. Although the prices for SSDs are falling, they are doing so at a similar rate to HDDs. Moreover, HDDs are very well suited to meeting the performance requirements of high-capacity storage environments. A single HDD may be inferior to a single SSD, but the combination of several fast-spinning HDDs achieves very high IOPS values that can reliably supply analytics applications with the data they require.
In the end, price alone is the decisive criterion - especially since the data volumes to be stored in the IoT world can only be compressed minimally to save valuable storage space. Where possible, compression typically takes place within the endpoint or at the edge to reduce the amount of data to be transmitted; it therefore arrives at the data centre in compressed form and must be stored without further compression. Furthermore, deduplication offers little potential for savings because, unlike on typical corporate file shares or backups, there is hardly any identical data.
Because of the flood of data in IoT and the resultant large quantity of drives required, the reliability of the hard disks used is of great importance. This is less to do with possible data losses, as these can be handled using appropriate backup mechanisms, and more to do with maintenance of the hardware. With an Annualised Failure Rate (AFR) of 0.7 per cent, instead of the 0.35 per cent achieved by CERN with Toshiba hard disks, a storage solution using 100,000 hard disks would require an additional 350 drive replacements annually - on average, almost one extra drive replacement per day.
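A minimal sketch of that arithmetic (assuming the AFR figures and the 100,000-drive fleet quoted above) shows why fractions of a percentage point matter at this scale:

```python
# Expected annual drive replacements for a large HDD fleet (figures as quoted in the article).
fleet_size = 100_000

def annual_replacements(afr_percent: float, drives: int = fleet_size) -> float:
    """Expected number of failed drives per year for a given Annualised Failure Rate."""
    return drives * afr_percent / 100

baseline = annual_replacements(0.35)  # ~350 drives/year at CERN's reported AFR
worse = annual_replacements(0.70)     # ~700 drives/year at a 0.7% AFR

print(f"0.35% AFR: {baseline:.0f} replacements/year")
print(f"0.70% AFR: {worse:.0f} replacements/year")
print(f"Difference: {worse - baseline:.0f} per year (~{(worse - baseline) / 365:.1f} extra per day)")
```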
HDDS STICK AROUND FOR YEARS TO COME
In the coming years little will change: the main burden of IoT data storage will continue to be borne by hard disks. Flash production capacities will simply remain too low for SSDs to outstrip HDDs. To cover the current storage demand with SSDs alone, flash production would have to increase significantly. Bearing in mind that the construction costs for a single flash fabrication facility run to several billion Euros, this is an undertaking that is challenging to finance. Moreover, it would only result in higher flash output after around two years - output that would only cover the demand of 2020, not that of 2022.

The production of hard disks, on the other hand, can be increased much more easily, because less cleanroom production is needed than in semiconductor manufacturing. Additionally, the development of hard disks is progressing continuously, and new technologies such as HAMR (Heat-Assisted Magnetic Recording) and MAMR (Microwave-Assisted Magnetic Recording) are continuing to deliver capacity increases. Experts assume that HDD storage capacity will continue to increase at a rate of around 2 terabytes per year for a few more years at constant cost. Thus, IDC predicts that by the end of 2025 more than 80 per cent of the capacity required in the enterprise sector for core and edge data centres will still be provided by HDDs, and less than 20 per cent by SSDs and other flash media.
More info: www.toshiba-storage.com
INTERVIEW: INFINIDAT

BUILDING MOMENTUM

PHIL BULLINGER WAS APPOINTED CEO AT INFINIDAT AT THE START OF 2021, HAVING PREVIOUSLY SHONE AT STORAGE INDUSTRY NAMES INCLUDING LSI, ORACLE, DELL EMC AND WD. STORAGE MAGAZINE EDITOR DAVE TYLER CAUGHT UP WITH PHIL TO DISCUSS THE CHALLENGES OF TAKING ON A NEW ROLE IN THE MIDDLE OF A GLOBAL CRISIS
Dave Tyler: You have around 30 years of experience across many of the biggest names in the sector - which presumably was a large part of why Infinidat wanted to bring you in to helm the business. What was the draw from your side?

Phil Bullinger: I was really attracted to the Infinidat opportunity because of their great reputation as well as the fantastic customer experience around the product: in my years in enterprise storage, whenever I came across an Infinidat customer they were never shy to talk about how much they loved the platform. I know - again from my own experience - that behind every great product is a great team, and since I joined the company in January these first few months have been incredibly positive - everything I had expected and hoped to find here has been reinforced.

There is a real focus on innovation, but crucially the customer comes first in every conversation we have: it's genuinely a part of the DNA of the company. I know a lot of businesses say that, but so much of how Infinidat is organised pivots around the customer experience, whether it's our L3 support team, our technical advisers in the field, or how the product itself operates. I've been delighted to find the extent to which the company is organised around the customer.
DT: Tell us a little more about the specifics of your role and how you fit into Infinidat's strategy?

PB: I've been blessed on many occasions in my career to lead companies through growth and scale: stepping into a business at a specific point in its lifecycle where it had good products and satisfied customers, and the task then was to efficiently scale the business to new levels of success. That is exactly what we're focussed on right now here at Infinidat: investing in engineering, sales, marketing, and raising the profile of the company in the markets that we're targeting.

We have a lot of momentum as a business: throughout 2020 we had sequential growth - every quarter was larger than the one before.
Q4 saw very large growth, year over year, and having just completed our first calendar quarter of 2021 I'm really pleased to say it was a record for the company. That gives us a lot of confidence and impetus as we look ahead to the rest of 2021.

DT: Given the global situation as we start to come out of the worst of the pandemic, how has the storage sector been affected?

PB: There are some points that might look contradictory: from a macro-economic point of view there are emerging tailwinds in the global economy. There is no doubt that in the last year Covid-19 had both a positive and a negative impact on business. Large enterprise spending was constrained last year, and many enterprises were uncertain as to what the future held. But at the same time there were fundamental drivers of storage activity as well, because of the pressing need for digital transformation. Companies - of all sizes - were moving as fast as they could to transform their businesses around what, frankly, has become a digital economy, based around digital user experience and digital connectivity.
How companies use their data is now the primary determinant not just of whether a company will be successful, but of whether it will even continue to exist going forward. What we're starting to see right now is projects that may have been on hold for a year coming back to the forefront. As a result, we're finding our own sales activity globally is seeing more 'lubrication' in the process as customers are much more interested in investing in transformational projects.
DT: What is Infinidat doing to address not just the post-Covid world, but the way businesses see data and storage more generally?

PB: This is the key question for us at Infinidat: what are customers motivated now to do with their data infrastructure? The landscape looks very different today to ten years ago, with the advent of the public cloud where data and applications are accessed via the internet as opposed to locally. As new models have emerged, private cloud remains a crucial part of almost every business infrastructure - so almost every customer has a hybrid model.

There has been a lot of hype around seamless movement of data and applications from on-premises to off-premises (and back again!), but the fact is that most enterprise customers tend not to pursue that line. They are more likely to think about which applications or data are super-critical to the business, need the highest levels of performance, availability and of course security - and those will go into a private cloud.

For almost all of our customers, Infinidat forms the centrepiece of their private cloud infrastructure, because of our massive scalability, great reliability, and enterprise data features - all at lower costs than our competitors and often offering better performance than all-flash systems. It's easy to see why Infinidat then becomes a compelling consolidation platform. As the intersection of digital transformation, private cloud architecture and Infinidat comes together, that's what is creating such momentum around the business for us.
DT: What about partners? I know Infinidat puts great store in its relationships with the likes of VMware and AWS - how important are those companies (and others) to your continued success?

PB: Those two are really good examples of solution ecosystem partners for Infinidat. Customers don't generally buy just a platform - they buy a solution to a problem. Those solutions almost always involve an ecosystem web of ISVs, applications, compute, network and storage - so we're very cognisant of the fact that Infinidat has to be tightly integrated and work closely with a whole range of different partner offerings.

Infinidat has also always emphasised the very highest levels of integration with VMware: even from our very first release we had an exceptional level of native integration. These days some of the largest VMware deployments at enterprise customers are on Infinidat simply because we integrate so well and provide the scale that those customers need. I believe we are solving the long-standing issue between application administrators and storage administrators: when you can give the VMware administrator the native ability to manage our storage, snapshots, replication etc., that can really help them to manage the overall application infrastructure of that enterprise.

As well as VMware and AWS we also have critical relationships with companies such as Veeam, Veritas, Commvault and Red Hat, plus tight integrations with SAP, Oracle and other enterprise ISVs.
DT: Who would you say Infinidat views as its primary competition in today's market?

PB: At its simplest, our competitive landscape is the other primary storage systems you would find in an enterprise sovereign secure data centre - but there's more to it than that. You can look at the Gartner Magic Quadrant for primary storage and see all the 'usual suspects'. We compete against the best and most capable primary storage products that branded system OEMs and storage companies are bringing to the market.

The public cloud has raised the tide for all the boats; the world currently only stores a fraction of all the data that it generates. This means that companies are constantly striving to work out how to collect, reason over and drive insight from more and more data. Public cloud models have certainly accelerated that, and therefore more data is being created in the enterprise, and some of that data - usually the most important parts - will almost always be on-premises, or in a colocation facility, in a private cloud architecture. The way I look at it is that the cloud is a driver for business activity, business activity drives data, and data of course drives storage, which is good for Infinidat and our business opportunity.

Our customers trust our products and support to protect petabytes of their most important data - data that they 'hold most dearly', and which needs the very highest levels of availability, protection and security.

More info: www.infinidat.com
CASE STUDY: TOTAL EXPLORATION & PRODUCTION

DELIVERING A TOTAL SOLUTION

TOTAL EXPLORATION & PRODUCTION HAS MIGRATED ITS OFFSHORE OIL AND GAS PLATFORMS TO A HYPERCONVERGED INFRASTRUCTURE THAT DELIVERS SPACE EFFICIENCIES AS WELL AS IMPROVED REPLICATION AND DISASTER RECOVERY CAPABILITIES
Nutanix has announced the completion of a project for Total Exploration & Production UK Limited (TEPUK) to deploy its market-leading hyperconverged infrastructure both in Aberdeen and to its North Sea oil and gas installations. TEPUK operates across the entire oil and gas value chain, aiming to provide hydrocarbons that are more affordable, more reliable, cleaner and accessible to as many people as possible.

Offshore oil and gas platforms are a challenging environment in which to install, manage and support IT of any description - not least because of logistical challenges and strict safety requirements, but also because physical space, internet connectivity and manpower are at a premium.
TEPUK chose to replace end-of-life legacy servers and storage networks on its rigs with Nutanix hyperconverged infrastructure. Requiring less than half the equivalent rack space of alternative solutions, the Nutanix infrastructure isn't just space efficient: other benefits include on-demand scalability, self-healing high availability, integrated Prism Central remote management and hypervisor-neutral virtualisation capabilities.

Aberdeen-based Nutanix partner NorthPort Technologies was also involved. With its extensive experience in this field, it is able to provide engineers fully trained and certified to meet the critical safety requirements of the offshore industry.
"Our engineers don't just have to be<br />
trained in IT, they need to be physically fit<br />
and pretty committed to do this kind of job,"<br />
explains Russell Robertson, Consulting IT<br />
Specialist at NorthPort Technologies. "Not<br />
least because they have to travel to and<br />
from the rigs in all weathers and be<br />
prepared to undergo the same rigorous<br />
training as anyone working offshore."<br />
The principal role of the offshore equipment<br />
is to host local infrastructure services, such as<br />
Windows domain controllers, file and print<br />
sharing and all the usual business<br />
productivity apps. These are supported using<br />
VMs alongside other specialist industrial<br />
control and safety workloads.<br />
Despite the many challenges, plus additional issues associated with the Covid-19 lockdown, the NorthPort team has now completed installation of the last set of 3-node offshore clusters, bringing the total installs to nine. In addition, the Aberdeen reseller has configured a coordinating 15-node Nutanix cluster onshore, with another at a separate TEPUK site for replication and disaster recovery (DR).

"With this announcement we are delighted to welcome TEPUK to a growing list of oil and gas companies using Nutanix hyperconverged infrastructure to deliver industrial-strength IT services in some of the most challenging environments around the globe," commented Dom Poloniecki, Vice President & General Manager, Sales, Western Europe and Sub-Saharan Africa region, Nutanix. "More than that, it reflects the versatility of the Nutanix platform, which is equally at home providing core IT services on an offshore oil rig as it is supporting leading-edge hybrid cloud in a corporate data centre."

More info: www.nutanix.com
RESEARCH: DATA PROTECTION

CAN YOU RELY ON YOUR BACKUP?

NEW RESEARCH SHOWS THAT 58% OF BACKUPS ARE FAILING, CREATING DATA PROTECTION CHALLENGES AND LIMITING THE ABILITY OF ORGANISATIONS TO PURSUE DIGITAL TRANSFORMATION INITIATIVES

URGENT ACTION ON DATA PROTECTION REQUIRED
Respondents stated that their data protection capabilities are unable to keep pace with the DX demands of their organisation, posing a threat to business continuity and potentially leading to severe consequences for both business reputation and performance. Despite the integral role backup plays in modern data protection, 14% of all data is not backed up at all and 58% of recoveries fail, leaving organisations' data unprotected and irretrievable in the event of an outage or cyber-attack.
Data protection challenges are undermining organisations' ability to execute Digital Transformation initiatives globally, according to the recently published Veeam Data Protection Report 2021, which has found that 58% of backups fail, leaving data unprotected. Veeam's research found that against the backdrop of COVID-19 and ensuing economic uncertainty - which 40% of CXOs cite as the biggest threat to their organisation's DX in the next 12 months - inadequate data protection and the challenges to business continuity posed by the pandemic are hindering companies' initiatives to transform.
The Veeam Data Protection Report 2021 surveyed more than 3,000 IT decision makers at global enterprises to understand their approaches to data protection and data management. The largest of its kind, the study examines how organisations expect to be prepared for the IT challenges they face, including reacting to demand changes and interruptions in service, global influences (such as COVID-19), and more aspirational goals of IT modernisation and DX.
"Over the past 12 months, CXOs across<br />
the globe have faced a unique set of<br />
challenges around how to ensure data<br />
remains protected in a highly diverse,<br />
operational landscape," said Danny Allan,<br />
Chief Technology Officer and Senior Vice<br />
President of Product Strategy at Veeam. "In<br />
response to the pandemic, we have seen<br />
organisations accelerate DX initiatives by<br />
years and months in order to stay in<br />
business. However, the way data is<br />
managed and protected continues to<br />
undermine them. Businesses are being held<br />
back by legacy IT and outdated data<br />
protection capabilities, as well as the time<br />
and money invested in responding to the<br />
most urgent challenges posed by COVID-<br />
19. Until these inadequacies are<br />
addressed, genuine transformation will<br />
continue to evade organisations."<br />
Furthermore, unexpected outages are common, with 95% of firms experiencing them in the last 12 months; and with 1 in 4 servers having at least one unexpected outage in the prior year, the impact of downtime and data loss is felt all too frequently. Crucially, businesses are seeing this hit their bottom line, with more than half of CXOs saying this can lead to a loss of confidence towards their organisation from customers, employees and stakeholders.
"There are two main reasons for the lack of<br />
backup and restore success: Backups are<br />
ending with errors or are overrunning the<br />
allocated backup window, and secondly,<br />
restorations are failing to deliver their<br />
required SLAs," said Allan. "Simply put, if a<br />
backup fails, the data remains unprotected,<br />
which is a huge concern for businesses given<br />
that the impacts of data loss and unplanned<br />
downtime span from customer backlash to<br />
reduced corporate share prices. Further<br />
compounding this challenge is the fact that<br />
the digital threat landscape is evolving at an<br />
exponential rate. The result is an<br />
unquestionable gap between the data<br />
12 <strong>ST</strong>ORAGE <strong>May</strong>/<strong>Jun</strong>e <strong>2021</strong><br />
@<strong>ST</strong>MagAndAwards<br />
www.storagemagazine.co.uk<br />
MAGAZINE
RESEARCH:<br />
RESEARCH: DATA PROTECTION<br />
"In response to the pandemic, we have seen organisations accelerate DX initiatives by<br />
years and months in order to stay in business. However, the way data is managed and<br />
protected continues to undermine them. Businesses are being held back by legacy IT<br />
and outdated data protection capabilities, as well as the time and money invested in<br />
responding to the most urgent challenges posed by COVID-19."<br />
protection capabilities of businesses versus<br />
their DX needs. It is urgent that this shortfall is<br />
addressed given the pressure on<br />
organisations to accelerate their use of cloudbased<br />
technologies to serve customers in the<br />
digital economy."<br />
I.T. STRATEGIES IMPACTED BY COVID-19

CXOs are aware of the need to adopt a cloud-first approach and change the way IT is delivered in response to the digital acceleration brought about by COVID-19. Many have already done so, with 91% increasing their cloud services usage in the first months of the pandemic, and the majority will continue to do so, with 60% planning to add more cloud services to their IT delivery strategy. However, while businesses recognise the need to accelerate their DX journeys over the next 12 months, 40% acknowledge that economic uncertainty poses a threat to their DX initiatives.
UK-specific highlights from the Veeam research included:

- Hybrid IT across physical, virtual and cloud: Over the next two years, most organisations expect to gradually, but continually, reduce their physical servers and maintain and fortify their virtualised infrastructure, while embracing 'cloud-first' strategies. This will result in half of production workloads being cloud-hosted by 2023, forcing most firms to re-imagine their data protection strategy for new production landscapes.
- Rapid growth in cloud-based backup: Backup is shifting from on-premises to cloud-based solutions that are managed by a service provider, with a trajectory reported from 33% in 2020 to 50% anticipated by 2023.
- Importance of reliability: 'To improve reliability' was the most important driver for a UK organisation to change its primary backup solution, stated by 33% of respondents.
- Improving ROI: 33% stated that the most important driver for change was improving the economics of their solution, including improving ROI and reducing TCO.
- Availability gap: 78% of companies have an 'availability gap' between how fast they can recover applications and how fast they need to recover them.
- Reality gap: 77% have a 'protection gap' between how frequently data is backed up versus how much data they can afford to lose after an outage.
- Modern data protection: Over half (51%) of organisations now use a third-party backup service for Microsoft Office 365 data, and 43% plan to adopt Disaster Recovery as a Service (DRaaS) by 2023.
TRANSFORMATION STARTS WITH DIGITAL RESILIENCY

As organisations increasingly adopt modern IT services at a rapid pace, inadequate data protection capabilities and resources will lead to DX initiatives faltering, or even failing. CXOs already feel the impact, with 30% admitting that their DX initiatives have slowed or halted in the past 12 months. The impediments to transformation are multifaceted, including IT teams being too focused on maintaining operations during the pandemic (53%), a dependency on legacy IT systems (51%), and a lack of IT staff skills to implement new technology (49%). In the next 12 months, IT leaders will look to get their DX journeys back on track by finding immediate solutions to their critical data protection needs, with almost a third looking to move data protection to the cloud.
"One of the major shifts we have seen<br />
over the past 12 months is undoubtedly an<br />
increased digital divide between those who<br />
had a plan for Digital Transformation and<br />
those who were less prepared, with the<br />
former accelerating their ability to execute<br />
and the latter slowing down," concluded<br />
Allan. "Step one to digitally transforming is<br />
being digitally resilient. Across the board<br />
organisations are urgently looking to<br />
modernise their data protection through<br />
cloud adoption. By 2023, 77% of<br />
businesses globally will be using cloud-first<br />
backup, increasing the reliability of<br />
backups, shifting cost management and<br />
freeing up IT resources to focus on DX<br />
projects that allow the organisation to excel<br />
in the digital economy."<br />
More info: www.veeam.com/wp-executivebrief-<strong>2021</strong>.html<br />
CASE STUDY: SAVE THE CHILDREN SPAIN

CHILD'S PLAY

NON-PROFIT ORGANISATION SAVE THE CHILDREN SPAIN HAS REDUCED THE COSTS OF ITS BACKUP AND SIMPLIFIED DISASTER RECOVERY THANKS TO NAKIVO BACKUP & REPLICATION
Save the Children Spain is a member of Save the Children International, a non-profit organisation that aims to make the world a better place for children through better education, healthcare and economic opportunities. Today, with 25,000 dedicated staff across 120 countries, Save the Children responds to major emergencies, delivers innovative development programs and ensures children's voices are heard through campaigning, building a better future for and with children. Save the Children Spain has 10 locations across Spain and two main data centres in different locations.

The organisation has a hybrid infrastructure with a mix of private cloud services and on-premises servers. Part of the infrastructure is virtualised, with nearly 40 virtual machines running file servers, databases, reporting services and other applications. The main objective of the IT department is to ensure the consistency and reliability of all data. As Save the Children is a non-profit organisation, all money spent on projects must be properly justified to donors, while all invoices, transfer orders and project reports must be kept safe. That is why data protection is imperative for the organisation to continue to be successful in the future. The organisation's goal is to be able to back up all data and services every day and restore those services easily and quickly in case of a disaster.
Until recently, Save the Children Spain was using a different backup product to protect its data. However, that software was too expensive for a non-profit. The organisation also wanted to take advantage of its NAS servers, but the previous software did not support NAS installation. Installation on NAS servers was essential for reducing the overall complexity of the backup strategy and freeing up a server from the environment. When Save the Children realised that the costs of updating its licensing with the previous vendor were excessive, and that its budget only allowed it to cover a particular number of hosts, it was time to look for an alternative solution. The goal was to save costs, simplify disaster recovery and achieve simplicity.
FUNCTIONAL YET AFFORDABLE

NAKIVO Backup & Replication was Save the Children's backup solution of choice for several reasons. Not only is NAKIVO Backup & Replication affordable, but the total invoiced price allowed the organisation to protect all of its virtual hosts with the same functionality that was provided by the previous vendor. Overall, the organisation saved money by switching to NAKIVO Backup & Replication, and it can now use that money to finance other projects. Since NAKIVO Backup & Replication is compatible with various NAS servers, Save the Children installed the software on its Synology NAS. This allowed the organisation to free up resources and reduce backup strategy complexity.

A backup appliance based on Synology NAS combines backup software, hardware, backup storage and deduplication. This installation brings multiple advantages, including higher performance, smaller backups, faster recovery and storage space savings. "The previous solution was installed on Windows, but we always wanted to take advantage of already available hardware. The whole installation process was so simple: we just had to add a package manager, find it, and click install. The whole process took 15 minutes at most. With the Synology NAS installation, we freed up production resources that were previously spent on backup," explained Alejandro Canet, IT Project Coordinator at Save the Children Spain. "Today, a full initial backup takes the organisation roughly 24 hours, while a daily incremental backup takes around 3 hours. Storage space is always expensive, so global data deduplication reduced space and saved money on storage systems for Save the Children. Moreover, instant granular recovery is a key functionality that saves a lot of time when recovering files or objects from shared resources."
The IT department's objective was to create a simple disaster recovery strategy with the NAKIVO Backup & Replication functionality, as Alejandro went on: "Simplifying the disaster recovery strategy was our main goal. Today, we can recover data on the same day in a minimum amount of time. We replicate VMs across our data centres. This allows us to have source VMs and VM replicas. Thus, if we lose one data centre, we can always power on VMs in another and the organisation will be operational within a few hours. Ensuring near-instant recovery and uninterrupted business operations with NAKIVO Backup & Replication is important for us. NAKIVO Backup & Replication also allows us to keep recovery points for replicas and backups that we can rotate on a daily, weekly, monthly or yearly basis."
SIMPLER, FASTER, BETTER

With NAKIVO Backup & Replication, Save the Children simplified its disaster recovery strategy with VM replication and instant recovery. All data and applications are backed up daily, while VMs are replicated to a different data centre to ensure near-instant recovery in case of a disaster. With a backup appliance based on Synology NAS, Save the Children freed up production resources and achieved business continuity. Instant granular recovery is also as simple as opening the web interface and clicking a couple of buttons, with recovery done in minutes.

"The licensing costs that were offered by NAKIVO turned out to be cheaper than the yearly maintenance costs of our previous product," Alejandro concluded. "NAKIVO Backup & Replication may be almost 10 times cheaper. Regarding installation and setup time, we just spent 15 minutes and everything was working. With the previous product, installation could take over 2 hours, plus the software was more difficult to use. Overall, we were able to simplify disaster recovery, save costs and utilise existing NAS servers with NAKIVO Backup & Replication."

More info: www.nakivo.com
MANAGEMENT: STORAGE TIERING

HAVING YOUR CAKE AND EATING IT

THE IDEAL 'STORAGE CAKE' HAS THREE EQUALLY IMPORTANT TIERS, ARGUES CRAIG WILSON, TECHNICAL ARCHITECT AT OCF
As hard disk drives have grown in capacity, we are presented with an interesting problem. Only a few years ago a petabyte of capacity would require an entire rack of equipment - indeed, my first project with OCF involved a storage solution that clocked in at 1PB per 45U rack - but with single drive capacity soon to hit 20TB we will be able to house a petabyte of capacity in a single storage shelf. This incredible achievement presents a new problem: hard drive performance is not improving in lockstep with capacity.
In fact, per-TB performance is going down dramatically, so hard drive storage is effectively getting slower. Ten years ago it took 500 hard drives to deliver a single petabyte of storage; now it only takes 50 drives for the same capacity, and there has simply not been a 10x increase in hard drive performance in that time. Seagate has set out a roadmap to 120TB HDDs by 2030, and while it has detailed some plans to increase performance with its Mach.2 dual actuator technology, per-TB performance will still decrease as capacities increase beyond 30TB.
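To put a rough number on that decline, here is a minimal sketch; the figure of around 200 random IOPS per drive is a typical 7,200rpm assumption for illustration, not one taken from the article:

```python
# Illustrative comparison of aggregate HDD performance per petabyte, then vs now.
# The ~200 IOPS per drive figure is a typical 7,200rpm assumption, not from the article.
IOPS_PER_DRIVE = 200

def petabyte_fleet(drive_tb: float, capacity_pb: float = 1.0):
    """Return (drive count, aggregate IOPS, IOPS per TB) for a given drive size."""
    drives = int(capacity_pb * 1000 / drive_tb)
    total_iops = drives * IOPS_PER_DRIVE
    return drives, total_iops, total_iops / (capacity_pb * 1000)

for label, drive_tb in [("10 years ago (2TB drives)", 2), ("today (20TB drives)", 20)]:
    drives, iops, iops_per_tb = petabyte_fleet(drive_tb)
    print(f"{label}: {drives} drives, {iops} IOPS total, {iops_per_tb:.1f} IOPS/TB")

# 500 drives -> 100,000 IOPS vs 50 drives -> 10,000 IOPS for the same petabyte:
# a 10x drop in aggregate (and per-TB) performance unless per-drive IOPS also rise 10x.
```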
Today you must consider whether the capacity you require will provide the expected performance, or whether smaller capacity drives would be more appropriate - which not only increases the rack space required but also the power consumption, and in turn the TCO. But if hard drives are no longer the go-to answer for large-scale storage, what is?
"FLASH, A-AH, SAVIOUR OF THE<br />
UNIVERSE!"<br />
We are all aware of the benefits that flash<br />
storage brings to the party: you only need to<br />
read the marketing material from any of the<br />
flash vendors, they clearly believe that flash<br />
is the future. The improvements on<br />
throughput and IOPS performance is huge<br />
when compared to hard drives and unlike<br />
hard drives this shows no signs of stopping<br />
with PCIe Gen4 NVMe drives now on the<br />
market and hitting an incredible 7GBps<br />
sequential read performance per drive.<br />
Capacities are increasing too, with 15.36TB drives available in Lenovo's DE series storage arrays and IBM producing 38.4TB FlashCore modules for its FlashSystem storage arrays, both available today. With the increase in capacities we can now get higher capacity density with flash storage than we can with traditional hard drives.

The downside to this, of course, is price: per-TB pricing on flash storage continues to vastly exceed hard drive pricing, and an all-flash storage solution can be a huge investment when everyone is under ever-increasing pressure to maximise the return on investment for any large-scale solution.
There is, of course, a third player in this game: tape. Like hard drives, tape capacity has continued to grow: IBM is due to launch its LTO9 Ultrium Technology in the first half of 2021 with 18TB native capacity, or 45TB compressed, per cartridge. Unlike hard drives, performance has continued to increase as well - for a typical upgrade path, LTO9 offers a 33% increase in uncompressed performance over LTO7. Tape storage also has some unique advantages: the ability to air-gap data to protect against modern ransomware attacks, and the ability to provide huge capacities with minimal power usage, are often overlooked when comparisons are made to traditional hard drive storage.
SO WHICH IS BE<strong>ST</strong> FOR YOUR<br />
PROJECT?<br />
How do you maximise ROI? A flash tier is always going to provide the most performance - however, few projects actively use all of their stored data. Data is important, and most organisations need to keep data for longer than it is being actively used. Have you considered how much of your data is used on a daily or weekly basis? This is where tiering comes in.
If you can identify a small percentage of<br />
your data that needs to be accessed on a<br />
regular basis then you can start to build a<br />
solution that takes the benefits from each<br />
storage technology and truly maximise the<br />
ROI. A solution with, for example, 20 per<br />
cent flash storage would present hot data<br />
that is used regularly with maximum<br />
performance to your compute environment<br />
while warm data could be stored on a<br />
cheaper hard drive-based storage array.<br />
Data that has not been accessed in the last<br />
six months could then be offloaded onto a<br />
tape tier using the same physical<br />
infrastructure as the backup process, which<br />
would reduce overall power consumption.<br />
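As a minimal sketch of the age-based placement described above - hot data on flash, warm data on hard drives, and anything untouched for six months offloaded to tape - the routine below classifies files by last-access time. The thresholds and tier names are illustrative assumptions, not features of any particular product:

```python
import os
import time
from typing import Optional

HOT_DAYS = 7      # accessed within a week   -> flash tier (assumed threshold)
WARM_DAYS = 180   # accessed within 6 months -> hard drive tier
                  # anything older           -> tape tier

def choose_tier(path: str, now: Optional[float] = None) -> str:
    """Return the tier a file belongs on, based on its last access time."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    if age_days <= HOT_DAYS:
        return "flash"
    if age_days <= WARM_DAYS:
        return "hdd"
    return "tape"
```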
The most popular parallel filesystems, such as IBM Spectrum Scale, BeeGFS and Lustre, support tiering either directly or via integration with the RobinHood Policy Engine. Additional software such as IBM's Spectrum Protect and Spectrum Archive, Atempo's Miria or Starfish can augment these features.
Caching is also an option. IBM's Spectrum<br />
Scale especially offers great flexibility in this<br />
area with features such as local read-only<br />
cache (LROC) and highly available write<br />
cache (HAWC). LROC uses a local SSD on<br />
the node as an extension to the buffer pool,<br />
which works best for small random reads<br />
where latency is a primary concern, while<br />
HAWC uses a local SSD to reduce the<br />
response time for small write operations - in<br />
turn greatly reducing write latency<br />
experienced by the client.<br />
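To illustrate the general principle behind a local read cache such as LROC (this is not IBM's implementation, just a sketch of the idea), the class below serves repeat reads from a small fast store and only falls back to the slower backing storage on a miss:

```python
from collections import OrderedDict

class ReadCache:
    """Tiny read-through cache: recently read blocks are kept in a fast local
    store (an in-memory dict here, a local SSD in practice) so that small
    random reads avoid the slower back-end storage."""

    def __init__(self, backing_store, capacity: int = 1024):
        self.backing_store = backing_store   # anything with read_block(block_id)
        self.capacity = capacity
        self.cache = OrderedDict()           # block_id -> data, in LRU order

    def read_block(self, block_id):
        if block_id in self.cache:                       # hit: low latency
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing_store.read_block(block_id)   # miss: slow path
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:              # evict least recently used
            self.cache.popitem(last=False)
        return data
```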
Deploying a single storage solution will always be a strong proposition from a management overhead perspective. However, I don't see hard drive storage being beaten by flash storage on a pure capacity-to-cost ratio any time soon. By deploying tiering, caching or both, it is possible to improve storage performance and so maximise ROI.
More info: www.ocf.co.uk<br />
CASE <strong>ST</strong>UDY: IMPERIAL WAR MUSEUMS<br />
PRESERVING PRICELESS MEMORIES OF<br />
GLOBAL CONFLICTS<br />
LIFECYCLE MANAGEMENT SOFTWARE SCANS AND MOVES INACTIVE DATA FROM PRIMARY <strong>ST</strong>ORAGE TO<br />
'PERPETUAL TIER' FOR LONG-TERM ACCESS AND ARCHIVAL<br />
Spectra Logic has announced that Imperial War Museums (IWM) has deployed Spectra StorCycle Storage Lifecycle Management software to enhance the Museums' existing storage infrastructure, which supports its audio-visual and exhibitions departments in preserving invaluable data - thousands of films, videotapes, audio recordings and photographs that would otherwise disintegrate and be lost forever were they not digitised. StorCycle software is being used to manage a large amount of unstructured data that resides on expensive SAN and NAS storage outside of IWM's existing DAMS (digital asset management system) platform.
EVER-GROWING COLLECTION<br />
Imperial War Museums tells the story of<br />
people who have lived, fought and died<br />
in conflicts involving Britain and the<br />
Commonwealth since the First World War.<br />
Its five branches attract over 2.5 million<br />
visitors each year and house a collection<br />
of over 10 million objects. The five sites<br />
across the UK - IWM London, IWM<br />
North, IWM Duxford, Churchill War<br />
Rooms and HMS Belfast - are home to<br />
approximately 750,000 digital assets,<br />
representing a total of 1.5PB of data as<br />
uncompressed files.<br />
Their digital asset collection includes<br />
5,000 film and video scan masters,<br />
100,000 audio masters dating back to<br />
the 1930s, nearly 500,000 image masters
and thousands of lower resolution<br />
versions (access versions for commercial<br />
and web use) of the above assets. This<br />
volume is constantly growing, with new<br />
scans in the Museums' film collection<br />
generating an additional 10TB of data<br />
per month, and its videotape scanning<br />
project expected to create more than<br />
900TB of data over the next four years.<br />
A Spectra customer since 2009, IWM has<br />
implemented a large-scale data archiving<br />
solution to reliably preserve and manage<br />
its substantial digital archive pertaining to<br />
UK and Commonwealth wartime history.<br />
IWM's current archive infrastructure<br />
consists of two Spectra T950 Tape<br />
Libraries, one with LTO-7 tape media and<br />
drives, and one with IBM TS1150 tape<br />
media and drives, along with a Spectra<br />
BlackPearl Converged Storage System,<br />
BlackPearl Object Storage Disk and<br />
BlackPearl NAS solution.<br />
MANAGING THE LIFECYCLE<br />
"When we set out on our search to find a<br />
storage solution capable of preserving<br />
Imperial War Museums' substantial digital<br />
archive, there were specific criteria on<br />
which we were not willing to compromise,"<br />
explained Ian Crawford, chief information<br />
officer, IWM. "Spectra met all of our<br />
requirements and then some, and now<br />
continues to deliver with StorCycle's<br />
storage lifecycle management<br />
capabilities."<br />
IWM is on track to realise significant<br />
long-term cost savings by deploying<br />
Spectra StorCycle Storage Lifecycle<br />
Management Software to optimise their<br />
primary storage capacity through the<br />
offloading of inactive data to the<br />
Museums' archive infrastructure. Spectra<br />
StorCycle identifies and moves inactive<br />
data to a 'Perpetual Tier' of storage<br />
consisting of object storage disk and tape.<br />
StorCycle scans the IWM departments'<br />
primary storage for media file types older<br />
than two years and larger than 1GB, and<br />
automatically moves them to the IWM<br />
archive, maximising capacity on the<br />
primary storage system.<br />
Rob Tyler, IT infrastructure manager<br />
(DAMS) at IWM said, "Spectra's StorCycle<br />
storage lifecycle management software<br />
has empowered us to move our data into<br />
reliable, long-term storage, offloading our<br />
primary storage and preserving media<br />
files and unstructured data - all with the<br />
push of a button."<br />
"When we set out on our search to find a storage<br />
solution capable of preserving Imperial War<br />
Museums' substantial digital archive, there were<br />
specific criteria on which we were not willing to<br />
compromise. Spectra met all of our requirements and<br />
then some, and now continues to deliver with<br />
StorCycle's storage lifecycle management<br />
capabilities."<br />
Craig Bungay, vice president of EMEA<br />
sales, Spectra Logic, commented on the<br />
project: "IWM preserves invaluable<br />
historical data and it is vital that their data<br />
storage infrastructure be failsafe and<br />
reliable in addition to providing flexibility<br />
and affordability. This is achieved by<br />
storing multiple copies of the Museums'<br />
data on different media, and by<br />
automatically offloading inactive data<br />
from expensive primary storage to its<br />
archive solution using StorCycle."<br />
More info: www.SpectraLogic.com<br />
OPINION: DISA<strong>ST</strong>ER RECOVERY AS A SERVICE<br />
A DRAAS PRIMER FOR SMBS<br />
AS MORE AND MORE SMBS ARE ATTRACTED TO DISA<strong>ST</strong>ER RECOVERY<br />
AS A SERVICE, FLORIAN MALECKI OF <strong>ST</strong>ORAGECRAFT OUTLINES THE<br />
KEY COMPONENTS AND REQUIREMENTS OF A DRAAS SOLUTION -<br />
WHAT IS IMPORTANT, AND WHY<br />
Regardless of size, every business gets<br />
hurt when downtime strikes. Small<br />
and medium sized businesses (SMBs)<br />
take a big hit when their systems go down.<br />
An ITIC study found that nearly half of<br />
SMBs estimate that a single hour of<br />
downtime costs as much as US$100,000<br />
in lost revenue, end-user productivity, and<br />
IT support. That's why more and more SMBs are adopting Disaster Recovery as a
Service (DRaaS). One study shows 34<br />
percent of companies plan to migrate to<br />
DRaaS in <strong>2021</strong>.<br />
Cloud-based backup and disaster<br />
recovery solutions are often at the top of<br />
the list when considering DRaaS solutions.<br />
Such an approach allows businesses to<br />
access data anywhere, any time, with<br />
certainty because the best disaster<br />
recovery clouds are highly distributed and<br />
fault-tolerant, delivering 99.999+ percent<br />
uptime. This article is intended to help<br />
SMBs understand the key elements of a<br />
DRaaS solution, beginning with an<br />
explanation of the basics of DRaaS.<br />
DI<strong>ST</strong>RIBUTED BACKUPS MAXIMISE<br />
PROTECTION<br />
Data replication is the process of updating<br />
copies of data in multiple places at the<br />
same time. Replication serves a single<br />
purpose: it makes sure data is available to<br />
users when they need it.<br />
Data replication synchronises a data source - say, primary storage - with backup target databases, so when changes are made to the source data they are quickly reflected in the backups. The target database could include the same data as the source database - full-database replication - or a subset of the source database.
For backup and disaster recovery, it<br />
makes sense to make full-database<br />
replications. At the same time, companies<br />
can also reduce their source database<br />
workloads for analysis and reporting<br />
functions by replicating subsets of source<br />
data, say by business department or<br />
country, to backup targets.<br />
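As a minimal sketch of the two replication styles just described, the snippet below copies either the full source dataset or only a filtered subset (a hypothetical 'department' field stands in for whatever criterion you might use) to a backup target:

```python
def replicate(source_records, target, department=None):
    """Replicate records from a source to a backup target.

    With department=None every record is copied (full-database replication);
    otherwise only the matching subset is sent, keeping reporting copies small.
    """
    for record in source_records:
        if department is None or record.get("department") == department:
            target.write(record)

# Full copy for disaster recovery, plus a finance-only subset for reporting:
# replicate(primary.read_all(), dr_target)
# replicate(primary.read_all(), reporting_target, department="finance")
```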
MANAGING BACKUP IMAGES<br />
As companies continue to add more<br />
backups over time, they'll need to manage<br />
these accumulated images and the storage<br />
space the images consume. Image<br />
management solutions with a managed-folder structure allow companies to spend
less time configuring settings on backups.<br />
But that's just the start. These solutions can also provide image verification, so that backup image files are ready and available for fast, reliable recovery, as well as advanced verification that delivers regular visual confirmation that backups are working correctly.
To reduce restoration time and the risk of<br />
backup file corruption, and also reduce<br />
storage space required, image management<br />
solutions can automatically consolidate<br />
continuous incremental backup image files.<br />
Companies can also balance storage space<br />
and file recovery by setting policies that suit<br />
their needs and easily watch over backup<br />
jobs in the user interface, with alerts sent<br />
when any issues arise.<br />
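A hedged sketch of the consolidation idea: a chain of continuous incrementals is rolled into a single synthetic image, so restores read fewer files and the chain consumes less space. Real products work at block or extent level; the dictionaries here are only a stand-in:

```python
def consolidate(base_image: dict, incrementals: list) -> dict:
    """Roll a base backup plus a chain of incrementals into one synthetic full.

    Each image is modelled as {block_id: data}; later incrementals overwrite
    earlier versions of the same block, exactly as a restore would replay them.
    """
    synthetic = dict(base_image)
    for increment in incrementals:      # apply oldest to newest
        synthetic.update(increment)
    return synthetic
```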
Image management solutions allow the<br />
management of system resources to<br />
enable throttling and concurrent<br />
processing. Backups are replicated onto<br />
backup targets - local, on-network, and<br />
cloud - so companies are always prepared<br />
for disaster. The solution also allows pre-staging of the recovery of a server before
disaster strikes to reduce downtime.<br />
CORE DRIVER FOR BUSINESS<br />
CONTINUITY<br />
Failover is a backup operational mode<br />
that switches to a standby database,<br />
server, or network if the primary system<br />
fails or is offline for maintenance. Failover<br />
ensures business continuity by seamlessly<br />
redirecting requests from the failed or<br />
downed mission-critical system to the<br />
backup system. The backup systems<br />
should mimic the primary operating system<br />
environment and be on another device or<br />
in the cloud.<br />
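A minimal sketch, assuming HTTP health checks and illustrative endpoint names, of the failover behaviour described above: a watchdog polls the primary and switches traffic to the standby once consecutive checks fail:

```python
import time
import urllib.request

PRIMARY = "https://primary.example.local/health"   # hypothetical endpoints
STANDBY = "https://standby.example.local/health"

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the system behind the URL answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor(failures_allowed: int = 3) -> str:
    """Poll the primary and return the standby address once it must take over."""
    failures = 0
    while True:
        if healthy(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= failures_allowed:
                return STANDBY          # redirect requests to the standby system
        time.sleep(5)
```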
With failover capabilities for important servers, back-end databases, and networks, companies can count on continuous availability and near-certain reliability. Say the primary on-site server fails: failover takes over hosting requirements with a single click. Failover also lets companies run maintenance projects, without human oversight, during scheduled software updates, ensuring seamless protection against cybersecurity risks.
WHY FAILOVER MATTERS<br />
While failover integration may seem costly,<br />
it's crucial to bear in mind the incredibly<br />
high cost of downtime. Think of failover as<br />
a critical safety and security insurance<br />
policy. And failover should be an essential<br />
part of any disaster recovery plan. From a<br />
systems engineering standpoint, the focus<br />
should be on minimising data transfers to<br />
reduce bottlenecks while ensuring high-quality synchronisation between primary
and backup systems.<br />
GETTING BACK TO NORMAL<br />
Failback is the follow-on to failover. While<br />
failover is switching to a backup source,<br />
failback is the process of restoring data to<br />
the original resource from a backup. Once<br />
the cause of the failover is remedied, the<br />
business can resume normal operations.<br />
Failback also involves identifying any<br />
changes made while the disaster recovery<br />
site or virtual machine was running in place<br />
of the primary site or virtual machine.<br />
It's crucial that the disaster recovery<br />
solution can run the company's workloads<br />
and sustain the operations for as long as<br />
necessary. That makes failback testing<br />
critical as part of the disaster recovery<br />
plan. It's essential to monitor any failback<br />
tests closely and document any<br />
implementation gaps so they can be<br />
closed. Regular failback testing will save<br />
critical time when the company needs to<br />
get its house back in order.<br />
Companies need to consider several<br />
important areas regarding the failback<br />
section of their disaster recovery plan.<br />
Connectivity is first on the list. If there isn't<br />
a reliable connection or pathway between<br />
the primary and backup data, failback<br />
likely won't even be possible. A secure<br />
connection ensures that a failback can be<br />
performed without interruption.<br />
Companies can be sure that their source<br />
data and backup target data are always<br />
synchronised, so the potential for data loss<br />
is minimised.<br />
Companies also need to ensure that data<br />
stored in their disaster recovery site is<br />
always secure. If a disaster strikes, it may<br />
be impossible to recover quickly. Suppose<br />
a failover does occur and the company's<br />
operations are now running from a<br />
disaster recovery cloud. In that case, they<br />
need to protect the data in that virtual<br />
environment by replicating it to their<br />
backup targets immediately. That's why<br />
network bandwidth is the next concern. If<br />
they don't have sufficient bandwidth,<br />
bottlenecks and delays will interfere with<br />
synchronisation and hamper recovery.<br />
Testing is the most critical element for<br />
ensuring failback is successful when<br />
businesses need it. That means testing all<br />
systems and networks to ensure they are<br />
capable of resuming operations after<br />
failback. It's advisable to use an alternate<br />
location as the test environment and use<br />
knowledge obtained from the test to<br />
optimise the failback strategies.<br />
FINAL THOUGHTS<br />
Whether it's a natural disaster like a<br />
hurricane or a flood, a regional power<br />
outage, or even ransomware, there is little<br />
doubt about the business case for DRaaS.<br />
With DRaaS ensuring business continuity,<br />
no matter what happens, recovery from a<br />
site-wide disaster is fast and easy to<br />
perform from a disaster recovery cloud.<br />
Add up the cost to a business in dollars<br />
and cents: lost data, lost productivity and<br />
reputational damage. Just an hour of<br />
downtime could pay for a year - or many<br />
years, for that matter - of DRaaS.<br />
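To make that arithmetic concrete: using the US$100,000-per-hour figure cited earlier and a purely illustrative annual subscription price, the break-even point arrives after only minutes of avoided downtime.

```python
downtime_cost_per_hour = 100_000     # ITIC estimate cited above, in US$
assumed_annual_draas_cost = 20_000   # illustrative assumption, not a vendor quote

hours_to_break_even = assumed_annual_draas_cost / downtime_cost_per_hour
print(f"DRaaS pays for itself after {hours_to_break_even:.1f} hours "
      f"({hours_to_break_even * 60:.0f} minutes) of avoided downtime per year")
```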
More info: www.storagecraft.com<br />
CASE <strong>ST</strong>UDY: TAMPERE VOCATIONAL COLLEGE TREDU<br />
ENSURING THE SAFETY OF DATA IN THE CLOUD<br />
TAMPERE VOCATIONAL COLLEGE TREDU IN FINLAND HAS IMPLEMENTED A HYBRID SOLUTION THAT<br />
OFFERS A UNIFIED PORTAL FOR <strong>ST</strong>ORAGE AND BACKUP OF MULTIPLE SERVICES, ELIMINATING THE NEED<br />
TO JUMP FROM ONE APPLICATION TO ANOTHER<br />
With Microsoft 365's retention policy for<br />
deleted items of 30 days, Tampere's<br />
technical team needed to find a solution<br />
to house, and always be able to retrieve,<br />
this vital data. Meanwhile another factor<br />
that needed to be addressed was the<br />
operational expense for any solution<br />
being deployed.<br />
Tampere Vocational College Tredu is<br />
a college based in Tampere, the<br />
second largest city in Finland. The<br />
college offers vocational programmes in<br />
Finnish secondary education in various<br />
fields including Technology, Natural<br />
Sciences, Communications and Tourism.<br />
Tampere's student population increased<br />
significantly in 2013 when Pirkanmaa<br />
Educational Consortium and the existing<br />
Tampere College merged, and today<br />
Tampere hosts approximately 18,000<br />
students and 1,000 staff members across<br />
its curriculum and campus.<br />
A SERIES OF CHALLENGES<br />
As an educational institution, Tampere<br />
has a legal obligation to retain data<br />
generated by both students and staff.<br />
With an increasing reliance on services<br />
such as Microsoft 365, this means more<br />
data is being generated on the cloud<br />
than ever. Coupled with the challenges the global pandemic has brought, remote working and offsite learning mean that services such as these are leveraged even more keenly and have become a significant part of the educational landscape.
Aside from the accounts of the 18,000 students and 1,000 faculty members, the college also needs to protect data in the accounts of former students and academic projects. This means they have
to contend with over 34,000 Drive,<br />
Contact and Calendar accounts and in<br />
excess of 68,000 mailboxes and over<br />
11,000 SharePoints. Added to the<br />
pressure of this, the school has a massive<br />
domain system in which new accounts are<br />
created frequently and old accounts are<br />
closed, which in turn creates<br />
management complexities.<br />
Arttu Miettunen, Systems Analyst at<br />
Tampere, began his search and<br />
benchmarked various solutions from<br />
major backup providers. Eventually it was clear Synology could not only resolve the issues of data storage but also offer backup for Microsoft services with no license costs. Having the storage
hardware and backup as an integrated<br />
solution brings further reassurance to the<br />
team managing this task.<br />
MEETING ALL REQUIREMENTS<br />
An SA3600 unit was deployed with 12 x 12TB enterprise HDDs, along with the added benefit of 2 x 400GB SNV3500 drives - Synology's M.2 NVMe SSDs - to create a cache. The current backup occupies 15TB of storage; however, as Tampere's data needs grow, the team was acutely aware that the solution also had to offer scalability. This is an obstacle the SA3600 can readily handle, with 12 bays in the base unit and the ability to scale up to 180 drives using Synology expansion units. In addition, Active Backup for Microsoft 365 comes with de-duplication in place, which cut the backup down by 7 terabytes in the first run, achieving a 46% saving on storage media.
Arttu and his team knew they wanted<br />
one unified portal for the storage and backup of multiple services, to eliminate the need to jump from one application to another. When new students and faculty join the school's Azure AD, accounts must be detected and protected automatically. The IT team wanted to give restoration privileges to some users but not all, and had to be able to tweak the settings easily. After a trial with Synology, Arttu is confident that this solution covers all their requirements and will last them for many years.
It could have been difficult to predict how performance might have been affected as the number of users and amount of data increased, but this was resolved by deploying an SSD cache with the Synology NVMe SSDs in place. This handled substantial caching workloads in this multi-user environment by making the data available on the lower-latency NVMe SSDs instead of having to retrieve it from the slower hard disk drives. By deploying a shrewd hybrid storage system with HDDs and SSDs, Tampere enjoys maximum value from its disk array.
MANAGEABLE & FUTURE-PROOF
By utilising Synology's Active Backup for Microsoft 365, Tampere benefits from:
- Comprehensive protection and backup for Teams, SharePoint Online, OneDrive and Exchange Online
- Full integration with Azure AD
- Easy and centralised management portal with advanced permissions controls
- Cost saving with license-free software and data deduplication
- Future-proofing with scalable storage via expansion
With Synology Active Backup for Microsoft 365 deployed, the Tampere team is now able to protect the school's cloud workloads and lower ongoing costs substantially.
"Synology is providing us a way to ensure<br />
the safety of our data in the cloud,"<br />
concludes Arttu Miettunen. "With Synology,<br />
we're able to safeguard and restore our<br />
data in Microsoft 365 services in case of<br />
accidental deletion or data loss."<br />
More info: www.synology.com<br />
ROUNDTABLE: BACKUP
EVERY DAY IS A BACKUP DAY<br />
THIS YEAR'S WORLD BACKUP DAY HAS COME AND GONE, BUT IT MIGHT BE THE LINGERING<br />
IMPACT OF COVID-19 THAT HAS A DEEPER EFFECT ON ORGANISATIONS' BACKUP THINKING.<br />
<strong>ST</strong>ORAGE MAGAZINE GATHERED THE THOUGHTS OF INDU<strong>ST</strong>RY LEADERS<br />
Just as a dog isn't just for Christmas, it is<br />
increasingly clear that backup isn't<br />
something we should only think about on<br />
World Backup Day. This March saw the<br />
landmark tenth WBD, but it is fair to say that<br />
we haven't seen ten years of measurable<br />
improvements in how organisations plan and<br />
manage their backup and restore processes.<br />
As data volumes soar and interconnectivity<br />
spreads ever wider it might look like those<br />
who evangelise about backup are fighting a<br />
losing battle.<br />
But the recent changes to all our working<br />
patterns forced on us by the pandemic and<br />
lockdown have brought a renewed focus for<br />
many at board level on the importance of a<br />
defined - and tested - backup strategy.<br />
According to Nick Turner, VP EMEA, Druva:<br />
"Whilst we've celebrated a decade's worth of<br />
World Backup Days, this past year has tested<br />
the ability to protect business data like no<br />
other. According to our Value of Data report,<br />
since the onset of the pandemic, IT leaders in<br />
the UK and US have reported an increase in<br />
data outages (43%), human error tampering<br />
data (40%), phishing (28%), malware (25%)<br />
and ransomware attacks (18%)."<br />
IS YOUR BACKUP FIT FOR PURPOSE?<br />
So is World Backup Day really anything more<br />
than a PR opportunity for vendors? Zeki<br />
Turedi, CTO EMEA at CrowdStrike says:<br />
"Milestones like World Backup Day act as<br />
reminders for IT professionals to look again<br />
at their IT architecture and confirm it's still fit<br />
for purpose. Like so many organisations<br />
around the world, the last year taught us that<br />
workers can adapt how they work, but our IT<br />
infrastructure in some cases is not as flexible.<br />
What the pandemic hasn't done at all is slow<br />
the growth in threats posed to organisations."<br />
It is important to understand the difference<br />
between backup and business continuity,<br />
argues Adrian Moir, Lead Technology<br />
Evangelist at Quest Software: "Businesses<br />
have rapidly adapted to remote working, and<br />
many employees are now operating and<br />
accessing data away from the traditional<br />
corporate office. While the best practices<br />
around data protection and recovery are still<br />
there, it is critical that businesses evolve their
strategies just in the same way that our<br />
approach to data and access changes. We<br />
also need to move away from the concept of<br />
focusing just on backup. In order to get this<br />
right, organisations need to consider<br />
continuity - ensuring they have a platform in<br />
place that will not only recover the data but<br />
will do so with minimal downtime."<br />
Has the growth in home-working pushed<br />
more enterprise data into the cloud?<br />
Though the move to public and hybrid<br />
clouds is still seeing growth, the demand for<br />
on-premises data backups is still buoyant,<br />
as Alexander Ivanyuk, technology director at<br />
Acronis explains: "Companies that deal with<br />
very sensitive data such as government,<br />
military, research, pharmaceuticals, and so<br />
on, still prefer to keep data on site or in a<br />
private cloud."<br />
BE<strong>ST</strong> OF BOTH WORLDS<br />
Aron Brand, CTO at CTERA, supports the<br />
thinking that on-premise backups may fade<br />
over time in favour of cloud-based options:<br />
"On-premises backup solutions have run out<br />
of favour due to their expensive hardware<br />
requirements and inability to scale. Cloud<br />
backup enables tapping into cloud<br />
economies of scale, as well as being off<br />
premises, thus protecting against<br />
catastrophic site failures such as fire or<br />
flood." That said, hybrid solutions can help<br />
organisations enjoy the best of both worlds;<br />
with a backup on-premises and one in the<br />
cloud, IT teams can back up sensitive data<br />
(safely in house) and maintain cost-effective<br />
and flexible scalability during major demand<br />
increases (through the cloud). More and<br />
more hybrid options are coming to market,<br />
in response to a trend that sees organisations hesitant to lock all of their data into the cloud. Hybrid
could indeed be the way to go according to<br />
Christophe Bertrand, senior analyst at ESG<br />
Global: "Everything's going to be hybrid, and<br />
for a long time. Especially in terms of backup<br />
and recovery."<br />
Sascha Giese, Head Geek at SolarWinds,<br />
highlights the importance of testing, wherever<br />
your data is being backed up to: "From both<br />
a business and personal point of view, we<br />
are well placed to take advantage of the<br />
cloud technologies that make data backup a<br />
very simple process. That said, we still need<br />
to treat data backup as a top priority. Despite<br />
having the cloud platforms in place that<br />
enable fairly quick recovery, IT professionals<br />
should still be taking matters into their own<br />
hands and ensuring that, in today's data-heavy environment, everything is backed up.
"This year, more than ever, I encourage IT<br />
professionals everywhere to do two things.<br />
First, take the '3, 2, 1' approach - create<br />
three working backups, stored in two<br />
different places, with one always being stored<br />
offsite. Second, test! Treat and plan for a<br />
data loss in the same way that you would for<br />
a fire drill. Make sure you are regularly<br />
testing for any disasters that cause data loss<br />
and try to find ways that you can improve<br />
disaster recovery. If you take these small<br />
steps, any data loss can be rectified very<br />
quickly with minimum downtime."<br />
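A small sketch of the '3-2-1' check described above: given an inventory of copies, it confirms there are at least three copies, on at least two different media types, with at least one held offsite. The copy records are hypothetical examples:

```python
def satisfies_3_2_1(copies: list) -> bool:
    """Check the 3-2-1 rule: >= 3 copies, >= 2 media types, >= 1 offsite copy."""
    media_types = {c["media"] for c in copies}
    has_offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media_types) >= 2 and has_offsite

inventory = [
    {"media": "disk", "offsite": False},    # production data
    {"media": "disk", "offsite": False},    # local backup appliance
    {"media": "cloud", "offsite": True},    # offsite cloud copy
]
print(satisfies_3_2_1(inventory))   # True
```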
AT YOUR SERVICE?<br />
The rush to cloud and containerisation brings<br />
new risks, argues Druva's Turner: "The secret<br />
to supporting a successful hybrid workforce<br />
will be in recognising how the industry has<br />
evolved and the gaps which may have been<br />
overlooked in the rush to complete projects.<br />
As deployments of SaaS applications have surged, we need to acknowledge that
being the target of a cyber attack is now<br />
almost inevitable. Therefore, prioritising data<br />
protection in the cloud to prepare is vital.<br />
"Remember, a robust approach to data<br />
resiliency should include detection,<br />
remediation, and recovery. Relying on<br />
preventative measures is no longer sufficient.<br />
With critical data, including ongoing research<br />
around COVID-19 and vaccination trials,<br />
being shared around the world, the stakes for<br />
data protection have never been higher. It's<br />
time we take a hard look at the existing<br />
frameworks and leverage the latest<br />
technologies to meet this moment."<br />
Does a shift to the cloud mean that everyone<br />
will move to a backup-as-a-service model?<br />
As is so often the case, the question gets an 'it<br />
depends' answer from most of our experts. "UK<br />
organisations are aware that over-reliance on<br />
legacy IT and data protection tools poses an<br />
immediate threat to their ongoing DX<br />
initiatives," said Dan Middleton, Vice President<br />
UK&I at Veeam. "Over half of firms across the<br />
country now use a third-party backup service<br />
to help protect the data of critical remote work<br />
applications such as Microsoft Office 365,<br />
according to the Veeam Data Protection<br />
Report <strong>2021</strong>. Moving to subscription-based<br />
data protection services will enable UK<br />
companies to take advantage of more cost-effective
solutions with the flexibility to pay only<br />
for the services they use. This can ensure<br />
processes such as software updates, patching<br />
and testing are automated as opposed to<br />
relying on manual protocols, providing<br />
greater data protection while allowing<br />
businesses to de-risk their transformation and<br />
business continuity initiatives."<br />
SEEING THE TRUE CO<strong>ST</strong>S<br />
Krista Macomber, senior analyst at the<br />
Evaluator Group, describes some of the<br />
factors organisations should consider if<br />
thinking about switching to the cloud: "There<br />
are a whole host of factors that go into<br />
determining the total cost of ownership of a<br />
backup solution that leverages the public<br />
cloud. Egress fees, how much data is being<br />
protected, how much that data is growing,<br />
and how long it must be retained for, are just<br />
a few factors."<br />
Egress fees are indeed not a cost to be<br />
ignored, as the migration of significant<br />
amounts of data back to on-premises can<br />
easily run into very considerable sums. In<br />
addition, the cost of hosting backups in the<br />
cloud goes beyond the fees attached to the<br />
storage of that data, as ESG's Bertrand<br />
explains: "I don't think we're there yet in terms<br />
of fully understanding the actual costs of cloud<br />
backup. The one thing that's more important<br />
than the cost of the backup and recovery is the<br />
cost to the organisation if they are not able to<br />
recover data."<br />
Scality's Chief Product Officer Paul Speciale
concurs with this view, describing what to look<br />
out for when opting for the cloud approach:<br />
"As with all things in IT, we need to carefully<br />
consider the overall cost of ownership for<br />
backup solutions, including the trade-off<br />
between shifting capital expenditures to<br />
operational savings in the cloud. While it can<br />
be true that cloud backups save money, it can<br />
also be true that they are more expensive than<br />
on-premises solutions. Hosted backups or<br />
Backup-as-a-Service (BUaaS) offerings are<br />
popular and widely embraced and do indeed<br />
reduce the burden on IT administrators from a<br />
time perspective, which has a bearing on<br />
backup cost. Also, the cloud promises more<br />
choices of classes of storage with<br />
performance and cost trade-offs."<br />
Looking ahead, both Evaluator Group's<br />
Randy Kerns and Krista Macomber suggest<br />
that Backup-as-a-Service will be popular in the<br />
coming months, with Kerns saying: "I think the<br />
key for IT operations will be evaluating Backup<br />
as a Service options. Vendors will work on<br />
developments in this area," and Macomber<br />
adding: "I also think we'll see an ongoing tick<br />
towards service-based delivery. This may mean<br />
a public cloud-based solution, or it might<br />
mean a managed services-based approach."<br />
The last word goes to Sarah Doherty of iland<br />
who sums up what many of our commentators<br />
have said: "The importance of backup is often<br />
overlooked by the latest security scare or large<br />
attack making headlines. In most cases, the<br />
focus is on other details rather than creating a<br />
plan to keep all data safe and available from<br />
any of these events. Both internal and external<br />
threats are on the rise. In today's uncertain<br />
times, keeping data safe and recoverable is<br />
more important than ever. Let's take World<br />
Backup Day as a reminder for your<br />
organisation to create a backup and recovery<br />
plan of action. The increase in disastrous<br />
events, whether from nature, human error,<br />
cyber-attacks or ransomware, makes it that<br />
much more critical for organisations to<br />
consider all that they have to lose and<br />
highlights the need to create the right backup<br />
and recovery solution." <strong>ST</strong><br />
MANAGEMENT: INTERNET OF THINGS<br />
AS IOT EXPANDS, ONE SIZE WON'T FIT ALL<br />
FROM MEDICAL WEARABLES THROUGH SEARCH-AND-RESCUE DRONES TO SMART CITIES, CHECHUNG<br />
LIN, DIRECTOR OF TECHNICAL PRODUCT MARKETING AT WE<strong>ST</strong>ERN DIGITAL, DESCRIBES WAYS TO<br />
OPTIMISE THE EVER-GROWING VOLUMES OF IOT DATA USING PURPOSE-BUILT <strong>ST</strong>ORAGE<br />
Whilst the digital environment has<br />
been expanding rapidly for many<br />
years, the pandemic ushered in,<br />
by necessity, a degree of digital<br />
transformation that is unprecedented in<br />
both its scale and scope. With<br />
organisations throughout private and public<br />
sectors alike forced to roll out digital<br />
systems, there has been a sharp uptick in the adoption of connected technologies.
As the Internet of Things (IoT) landscape<br />
experiences large-scale growth - from<br />
automated supply chains to help maintain<br />
social distancing, to more efficient and<br />
convenient smart cities and vehicles - the<br />
amount of data produced grows rapidly,<br />
as well. It is estimated that by 2025,<br />
connected IoT devices will generate<br />
73.1 zettabytes of data.<br />
Not only does this data need to<br />
be captured, it also needs to be<br />
stored, accessed and transformed<br />
into valuable insights. This process<br />
requires a comprehensive data<br />
architecture that can<br />
accommodate the demands of a<br />
large range of use applications<br />
throughout the data journey.<br />
WHAT IS THE IOT DATA<br />
JOURNEY?<br />
The vast majority of IoT data is<br />
stored in the cloud, where high-capacity drives - now reaching 20TB -
store massive amounts of data for big<br />
data and fast data workloads. These<br />
could include genomic research, batch<br />
analytics, predictive modelling, and supply<br />
chain optimisation.<br />
For some use cases, data then migrates to the edge, where it is often cached in distributed edge servers for real-time applications such as autonomous vehicles, cloud gaming, manufacturing robotics, and 4K/8K video streaming.
Finally, we reach the endpoints, where<br />
data is generated by connected machines,<br />
smart devices, and wearables. The key aim<br />
here is to reduce network latencies and<br />
increase throughput between these layers<br />
(cloud-to-endpoints and endpoints-to-cloud) for data-intensive use cases. A potential solution could be 5G, by using millimetre wave (mmWave) bands between 20-100 GHz to create "data superhighways" for latency- and bandwidth-sensitive innovations.
WHAT IS THE VALUE OF YOUR<br />
IOT DATA?<br />
Data infrastructure is critical in our digital<br />
world as data must be stored and analysed<br />
quickly, efficiently, and securely. Thus, data<br />
architectures need to go beyond simple<br />
data capture and storage to data<br />
transformation and creating business<br />
value, in a 'value creation' approach.<br />
Examples include:<br />
Autonomous vehicles - These vehicles<br />
are loaded with sensors, cameras,<br />
LIDAR, radar, and other devices<br />
generating so much data that it is<br />
estimated it will reach 2 terabytes per<br />
day. That data is used to inform real-time driving decisions using
technologies such as 3D-mapping,<br />
advanced driver assistance systems<br />
(ADAS), over-the-air (OTA) updates, and<br />
vehicle-to-everything (V2X)<br />
communication. In addition, IoT data<br />
creates value in personalised<br />
infotainment and in-vehicle services that<br />
improve the passenger experience. In<br />
order to enable real-time decision<br />
making, which is crucial for passenger<br />
safety, the priority for this data<br />
architecture is reducing network<br />
latencies, along with enabling heavy<br />
throughput to facilitate predictive<br />
maintenance.<br />
Medical wearables - It has been<br />
predicted that in 2021, worldwide end-user spending on wearable devices will
total US$81.5 billion. These devices<br />
generate important data to track sleep<br />
patterns, measure daily movements,<br />
and identify nutrition and blood oxygen<br />
levels. This IoT data can be transformed<br />
into daily, monthly, and yearly trends<br />
that can identify opportunities to<br />
improve health habits using data-informed decisions. Such data could
also create more personalised and<br />
proactive treatments, especially as<br />
telehealth and remote healthcare<br />
continue to progress, even after the<br />
pandemic subsides. Here, the storage<br />
priority for data architecture is offering<br />
long-term retention for critical health<br />
records.<br />
In addition, the following IoT applications<br />
provide key examples as to why storage<br />
considerations vary according to each<br />
specific case, and how the requirements<br />
can be met.<br />
Search-and-rescue drones - This is a<br />
key example of an IoT use case which<br />
requires a very specific data storage<br />
solution to get maximum value from the<br />
application. Such drones are often<br />
required to operate in harsh natural<br />
environments with extreme temperatures<br />
and weather patterns. Therefore, the<br />
storage solutions used in these<br />
technologies must be especially durable<br />
and resilient, such as the highendurance<br />
and highly reliable<br />
industrial-grade e.MMC and UFS<br />
embedded flash drives.<br />
Search-and-rescue drones are also<br />
commonly used in combination as part of a<br />
wider network, utilising optimised routes<br />
and shared automated missions. This<br />
means that the data architecture must be<br />
scalable, enabling the operation of multiple<br />
technologies in conjunction with extreme<br />
efficiency, performance, and durability.<br />
Smart cities - For smart cities to<br />
function, they require the storage of<br />
huge amounts of both archived and<br />
real-time data. In order to analyse and<br />
act on real-time data, IoT technologies<br />
are relying on storage at the edge and<br />
endpoints. For example, smart public<br />
transport systems require real-time data<br />
on traffic, in order to quickly and<br />
accurately adjust to spikes in demand,<br />
such as rush-hour traffic. This means<br />
that, similar to smart cars, this<br />
application requires data storage that<br />
facilitates low network latencies.<br />
The storage for archival data, in<br />
comparison, requires less of a focus on<br />
real-time rapid transfer, instead prioritising<br />
long-term retention. Here, cloud solutions<br />
come into play. Intelligent carbon mapping<br />
tools enable another IoT use case which<br />
relies on historical data of carbon emissions<br />
in order to identify trends and deploy<br />
carbon reduction measures.<br />
GENERAL-PURPOSE TO PURPOSE-<br />
BUILT ARCHITECTURE<br />
Various connected technologies have<br />
different requirements when it comes to<br />
how data must be stored in the most<br />
appropriate way and how to get the best<br />
value from it. For example, NVMe storage<br />
solutions are ideal for use cases that<br />
require very high performance and low<br />
latency in the data journey. Specialised<br />
storage is therefore necessary in order to<br />
create optimum value from IoT data, which<br />
must be considered when building out the<br />
wider data infrastructure.<br />
Many businesses, however, still use<br />
general-purpose architecture to manage<br />
their IoT data. This architecture does not<br />
fully meet the varying needs of IoT<br />
applications and workloads for consumers<br />
and enterprises.<br />
For example, whilst search and rescue<br />
drones prioritise endurance and resilience,<br />
storage solutions in digital healthcare<br />
applications must focus on offering long-term retention and security for critical health
records. Therefore, there must be a move<br />
from general-purpose storage to purpose-built data storage and different solutions for
different needs.<br />
For any data architecture, the goal is to maximise the value of data. For real-time IoT use cases, your storage strategy has to be designed specifically for IoT and address the following considerations (a small illustrative sketch follows the list):
1. Accessibility: what is its serviceability, connectivity and maintenance?
2. Wear endurance: is it WRITE-intensive or READ-intensive?
3. Storage requirements: what data and<br />
how much needs to be processed, analysed<br />
and saved at the endpoints, at the edge,<br />
and in the cloud?<br />
4. Environment: what is the altitude,<br />
temperature, humidity and vibration levels<br />
of the environment in which data will be<br />
captured and kept?<br />
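The sketch below turns those four questions into a simple selection routine. The categories and the mapping to media types are illustrative assumptions for the purposes of this article, not recommendations tied to any product line:

```python
def recommend_storage(write_intensive: bool,
                      needs_low_latency: bool,
                      harsh_environment: bool,
                      long_term_archive: bool) -> str:
    """Map the four IoT considerations onto an illustrative media choice."""
    if harsh_environment:
        return "industrial-grade embedded flash (e.MMC / UFS class)"
    if needs_low_latency:
        return "NVMe SSD at the edge"
    if long_term_archive:
        return "high-capacity HDD or tape in the cloud"
    if write_intensive:
        return "high-endurance SSD"
    return "general-purpose cloud object storage"

# Example: a search-and-rescue drone logging video in the field.
print(recommend_storage(write_intensive=True, needs_low_latency=False,
                        harsh_environment=True, long_term_archive=False))
```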
SPECIALISATION FOR OPTIMISATION<br />
Taking optimal advantage of the evolving<br />
IoT data landscape means using specialised<br />
storage solutions to bring unique business<br />
value. It is no longer sufficient to rely on<br />
standard, 'one size fits all' storage solutions,<br />
when the requirements for different IoT<br />
applications vary so drastically. The<br />
deployment of innovative and specific data<br />
storage solutions will help businesses and<br />
enterprises to navigate the accelerating<br />
journey of the IoT landscape, and will<br />
ensure that the value of data isn't lost<br />
unnecessarily in the process.<br />
More info: www.westerndigital.com<br />
CASE <strong>ST</strong>UDY: CANAL EXTREMADURA<br />
FOCUSED ON DELIVERING CONTENT,<br />
IN<strong>ST</strong>EAD OF WORRYING ABOUT WHERE IT'S<br />
<strong>ST</strong>ORED<br />
SPANISH TV NETWORK CANAL EXTREMADURA HAS REVAMPED ITS I.T. INFRA<strong>ST</strong>RUCTURE WITH<br />
QUANTUM <strong>ST</strong>ORNEXT FILE SY<strong>ST</strong>EM SOFTWARE AND TAPE SOLUTIONS<br />
Headquartered in Merida, Spain,<br />
Canal Extremadura is in the middle<br />
of a large-scale digital<br />
transformation. To make the transition from<br />
a traditional radio and TV business to a<br />
modern multimedia corporation, the<br />
company needed to upgrade its outdated<br />
and complex IT infrastructure. By adopting<br />
the Quantum StorNext File System software<br />
as part of its archive solution, it has<br />
accelerated the retrieval of media projects<br />
and achieved scalability for its rapidly<br />
growing business.<br />
The company's existing archive had<br />
become a significant pain point. "We ran<br />
out of room in the tape library and had to<br />
migrate some video to a NAS just to free<br />
up space," said Francisco Reyes, technical<br />
chief at Canal Extremadura.<br />
Unfortunately, expanding the system was<br />
not financially feasible.<br />
Canal Extremadura's new archive<br />
solution needed to merge<br />
seamlessly with<br />
its<br />
preferred media asset management (MAM)<br />
system from Dalet, which is at the centre of<br />
its media production and post-production<br />
workflow. Additionally, the new archive<br />
needed to enable a smooth transition from<br />
the existing large-scale environment, which<br />
contained a huge volume of old files in<br />
legacy media formats.<br />
SCALABILITY IN THE ARCHIVE<br />
Canal Extremadura's IT group requested<br />
proposals from multiple storage vendors,<br />
but ultimately chose Quantum based on<br />
Dalet's recommendation. This carried<br />
significant weight, especially given the<br />
importance of integrating the archive with<br />
the MAM system in order to achieve the<br />
flexibility and scalability needed.<br />
"We tend to keep solutions for a very long<br />
time - we had been using the previous system for about 12
years - so we needed to be very confident<br />
in a new solution before making the<br />
selection," says Reyes. "The advice and<br />
technical information we received from the<br />
Dalet and Quantum teams was very<br />
helpful. They gave a very clear picture of<br />
how the solution would work and how it<br />
would be implemented."<br />
After consulting with Dalet and Quantum,<br />
the IT group decided on a solution based<br />
on Quantum StorNext File System software<br />
with Xcellis storage servers, an Xcellis<br />
metadata array, a QXS disk storage array,<br />
and a StorNext AEL6000 tape library. The<br />
tape library, which has 400 slots, uses LTO-<br />
8 drives - a notable upgrade from the LTO-<br />
3 drives the company was using previously.<br />
The environment is fully integrated with the<br />
Dalet Galaxy MAM system.<br />
The networking flexibility of the<br />
Quantum platform has been beneficial<br />
for the IT group in supporting a range of<br />
client systems. Specifically, the storage<br />
environment is configured to offer Fibre<br />
Channel connectivity to 10 SAN clients<br />
plus 10-GbE connections to multiple NAS<br />
clients, while the metadata network uses<br />
1 GbE.<br />
MAKING CONTENT READILY<br />
AVAILABLE<br />
Thanks to the StorNext File System<br />
software and integrated online storage,<br />
Canal Extremadura's journalists,<br />
producers, and other team members can<br />
now retrieve content much faster than<br />
before.<br />
"We have more than 100 TB of online<br />
storage from Quantum. So if someone<br />
has completed a project six months ago,<br />
it will probably still be online," says Reyes.<br />
"Adding online storage to our previous<br />
system would have been much too costly -<br />
that's really not how that system was<br />
designed. For us, the Quantum StorNext<br />
approach works much better."<br />
Even when content has been archived to<br />
tape, the IT group can deliver it to users<br />
swiftly. "In the past, users knew they had to<br />
wait for content to be retrieved from the<br />
archive," says Reyes. "Now it's much faster<br />
than before. We have more drives and<br />
faster drives with the Quantum archive."<br />
Transitioning to the latest LTO<br />
technology has also helped expedite<br />
retrieval. By upgrading from LTO-3 to<br />
LTO-8, Canal Extremadura can store<br />
significantly more data on each tape.<br />
Consequently, there is a greater chance<br />
that each retrieval request can be satisfied<br />
without having to load multiple tapes.<br />
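For a sense of scale (using the published native, uncompressed capacities of roughly 400GB for LTO-3 and 12TB for LTO-8), each new cartridge holds about as much as thirty of the old ones, which is why far fewer tape loads are needed per retrieval:

```python
lto3_native_gb = 400      # LTO-3 native capacity per cartridge
lto8_native_gb = 12_000   # LTO-8 native capacity per cartridge

ratio = lto8_native_gb / lto3_native_gb
print(f"One LTO-8 cartridge holds ~{ratio:.0f}x the data of an LTO-3 cartridge")
```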
Explaining the benefit of faster archival<br />
retrieval for users, Reyes says, "Journalists<br />
might be in a hurry to assemble a new<br />
video for that day's news broadcast. With<br />
a faster archive, we can help them meet<br />
their deadlines."<br />
GET UP TO SPEED FA<strong>ST</strong><br />
To ensure Canal Extremadura gets the<br />
most of out of its new archive solution,<br />
Quantum provided multi-day onsite<br />
training. At the same time, the Dalet<br />
implementation team helped Canal<br />
Extremadura migrate its existing archive to<br />
the Quantum environment - a process that
involved transcoding some archived<br />
content from legacy formats. "The process<br />
took some time because we had a lot of<br />
data to migrate, but it was quite smooth,"<br />
says Reyes.<br />
SIMPLIFYING SUPPORT,<br />
ENHANCING COMPATIBILITY<br />
The StorNext File System software has<br />
helped consolidate a complex archive<br />
environment that previously comprised systems from multiple vendors. Collaborating with a single vendor removes some of the possible compatibility problems of a multi-vendor environment. It also simplifies the
provision of ongoing support as the IT<br />
group has a single point of contact for the<br />
Quantum environment if it ever needs to<br />
address issues or make changes.<br />
The StorNext software platform facilitates seamless data movement from online disk storage to the tape library. The integrated environment works with the Dalet MAM system to support a complete production and post-production workflow, from ingest to archiving. The new archive environment provides the long-term scalability to support the organisation's multimedia transformation.
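The policy-driven movement of ageing content from disk to tape is handled inside StorNext itself; as a purely generic illustration of the idea (not StorNext's actual policy engine or API), an age-based tiering rule might look like the sketch below, where the mount point and threshold are assumptions.

```python
import os
import time

ARCHIVE_AFTER_DAYS = 180                 # assumed policy threshold
ONLINE_PATH = "/stornext/online"         # hypothetical online file system mount

def archive_candidates(root: str, max_age_days: int = ARCHIVE_AFTER_DAYS):
    """Yield files whose last access is older than the policy threshold."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getatime(path) < cutoff:
                    yield path
            except OSError:
                continue                  # file moved or deleted mid-scan

if __name__ == "__main__":
    for candidate in archive_candidates(ONLINE_PATH):
        print("would move to tape:", candidate)
```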
"If we ever need to expand the archive in<br />
the future, we can simply add tapes-it's<br />
very straightforward," says Reyes in<br />
conclusion. "With a scalable archive, our<br />
company can stay focused on delivering<br />
engaging content instead of worrying<br />
about where to store it."<br />
More info: www.quantum.com<br />
TECHNOLOGY: NAS

THE FUTURE OF SHARED STORAGE? IT HAS TO BE NAS

WITH THE GREATER PERFORMANCE, FUNCTIONALITY AND EASE OF USE OF NAS, IT IS INCREASINGLY HARD TO JUSTIFY THE NEED FOR A SAN IN MODERN CREATIVE WORKFLOWS, ARGUES BEN PEARCE OF GB LABS
As technology moves forward and IP connectivity continues to revolutionise workflows in the media industry, we are starting to see SAN (Storage Area Network) as an inconvenient and overly complicated way of sharing our digital storage amongst the various platforms that most businesses use. This article looks at the differences between the two technologies and highlights the major advantages that NAS (Network Attached Storage) offers to modern businesses.
SAN LIMITATIONS
A SAN architecture is required when providing 'block level' shared access to hard drive storage for multiple workstations. Access to and management of the storage comes from the MDC (Metadata Controller), which introduces a big limitation on how many users can simultaneously access a particular share point.
This number is generally no more than 20 machines, and the problem is known as metadata contention. Whilst an MDC can fail over to a backup MDC, there can be only one active MDC and its workload cannot be load balanced across multiple machines, so additional MDCs are literally redundant until required.
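To see why a single active MDC imposes a practical ceiling of this order, consider a simple single-server queueing sketch. The per-request service time and per-client request rate below are invented figures chosen purely to illustrate the shape of the problem, not measurements from any particular SAN.

```python
# Illustrative M/M/1 queueing model of a single active metadata controller.
# Service time and request rate are assumed values, for illustration only.
SERVICE_TIME_MS = 1.0           # assumed time for the MDC to handle one metadata op
REQS_PER_CLIENT_PER_SEC = 40    # assumed metadata ops per client (opens, stats, locks)

def mean_latency_ms(clients: int) -> float:
    service_rate = 1000.0 / SERVICE_TIME_MS           # ops/sec the MDC can serve
    arrival_rate = clients * REQS_PER_CLIENT_PER_SEC  # total ops/sec offered
    if arrival_rate >= service_rate:
        return float("inf")                           # saturated: queue grows without bound
    return 1000.0 / (service_rate - arrival_rate)     # M/M/1 mean response time

for n in (5, 10, 20, 24, 25):
    print(f"{n:>2} clients -> mean metadata latency {mean_latency_ms(n):.2f} ms")
# Latency is modest up to ~20 clients, climbs steeply just beyond that, and the
# standby MDC contributes nothing to throughput until a failover occurs.
```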
SANs tend to suit particular operating systems, meaning it is rare to have PC, Mac and Linux machines working together. The fact that client software needs to be installed also prevents certain workstations or servers from being connected at all, and limits compatibility with the many generations of operating systems in use.
Most SANs are Fibre Channel based, so cards need to be installed into workstations, and specific cables, switches and transceivers are required. On top of this, management of the SAN is still done through standard Ethernet networks.
THE NAS DIFFERENCE
A NAS (Network Attached Storage) is a storage server that offers its own connected storage as 'file level' shares to a network of Ethernet-connected clients, using a variety of sharing protocols for maximum compatibility and flexibility. No software needs to be installed, and no hardware (such as a Fibre Channel card) is required. NAS works with standard Ethernet networks, which keeps costs low and flexibility high.
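Because the NAS exposes file-level shares over standard protocols such as SMB or NFS, a client needs nothing beyond what its operating system already provides. The sketch below assumes the OS has already mounted a share at an example path; applications then use ordinary file I/O, with no vendor client or Fibre Channel hardware involved.

```python
from pathlib import Path

SHARE = Path("/mnt/media_share")   # hypothetical SMB/NFS share mounted by the OS

def save_edit(project: str, data: bytes) -> Path:
    """Write a file to the NAS share using plain, file-level I/O."""
    target = SHARE / "projects" / project / "latest_edit.mxf"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)       # an ordinary write - the network is invisible here
    return target

if __name__ == "__main__":
    path = save_edit("evening_news", b"\x00" * 1024)
    print("wrote", path, "-", path.stat().st_size, "bytes")
```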
NAS can potentially connect to anything and encourages collaboration between all the platforms within an organisation. Unlike SAN, NAS does not suffer from metadata contention, and therefore allows many more users on a share point and much greater scalability in terms of users and performance.
The functionality of a NAS is far greater than that of a SAN: analytics, bandwidth control, quotas, cloud integration, AD synchronisation, profiles and monitoring are just some of the additional features a NAS can bring. As mentioned before, the NAS is a storage server and can therefore run many beneficial applications and workflow tools that are simply not possible on a SAN MDC.
EASIER SCALABILITY
In a Fibre Channel fabric, more users means more switch ports and more cost, but the really big problem is the uplinks between switches. These inter-switch links create bottlenecks that cannot be ignored. Every switch port should be able to deliver full bandwidth, but if the uplink from another switch provides only a fraction of the aggregate bandwidth of the ports behind it, then per-port performance becomes truly sub-optimal.
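A quick oversubscription calculation makes the point; the port counts and speeds below are example figures, not a reference design.

```python
# Oversubscription sketch: clients on an edge switch reach storage through a
# single inter-switch link (ISL). Port counts and speeds are example figures.
EDGE_PORTS = 24      # client-facing ports on the edge switch
PORT_GBPS = 16       # nominal speed of each client port (16Gb FC)
ISL_GBPS = 32        # single 32Gb FC uplink towards the storage

aggregate_demand = EDGE_PORTS * PORT_GBPS        # 384 Gb/s if every port is busy
oversubscription = aggregate_demand / ISL_GBPS   # 12:1
per_port_effective = ISL_GBPS / EDGE_PORTS       # ~1.3 Gb/s per port under full load

print(f"Oversubscription ratio: {oversubscription:.0f}:1")
print(f"Effective per-port bandwidth at full load: {per_port_effective:.2f} Gb/s "
      f"(vs {PORT_GBPS} Gb/s nominal)")
```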
Ethernet switches are easier to deploy, with faster uplinks and ultrafast backplanes available within blade switches.
Multiple networks can easily be attached to the NAS, allowing good network design to eliminate bottlenecks.
Some NAS platforms support dynamic scaling of capacity, meaning almost no downtime. Adding storage to a SAN, by contrast, and especially resizing existing volumes, is usually a 'data off, expand and then copy back' procedure, wasting days of downtime.
FLEXIBLE COST
Licences are a big part of the cost and inflexibility of SAN ownership. Each user, including the MDC and any failover MDCs, must be licensed, either as a one-off cost or as an annual ongoing expenditure. Specific additional hardware is also required, such as Fibre Channel switches and cards.
NAS does not require software licences and most likely requires no additional hardware or software installation. Almost all computers come with at least one 1Gb Ethernet port as standard, and standard network hardware is cheap and easy to source. For higher bandwidth usage, 10Gb, 40Gb or 100Gb Ethernet can be added to a client machine in the form of a PCI card or Thunderbolt/USB-C interface to dramatically improve performance.
MAKING CONNECTIONS
Looking at the speed of connections available today, it is easy to see how NAS is surpassing SAN:
NAS options: 100Gb, 40Gb, 10Gb and 1Gb Ethernet.
SAN options: 32Gb, 16Gb, 8Gb and 4Gb Fibre Channel.
Copper or optical cables can be used with SAN or NAS, and very large distances can be achieved with optical cable and advanced transceivers.
As the comparison above shows, Ethernet connectivity surpassed Fibre Channel many years ago. In addition, server-end connections can be channel bonded to produce very fast interfaces that serve large numbers of clients and provide cable redundancy. Load balancing connections in a SAN is far less flexible and not truly compatible across platforms, such as Mac OS.
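The following comparison of nominal line rates shows what bonding several Ethernet ports on the server side can deliver against a single Fibre Channel link; real-world throughput is lower and depends on the bonding mode, protocol overheads and workload, so treat the figures as indicative only.

```python
# Nominal aggregate bandwidth from channel-bonded Ethernet ports on the NAS
# versus a single Fibre Channel link. Line rates only; real throughput varies.
def bonded_gbps(ports: int, gbps_per_port: float) -> float:
    return ports * gbps_per_port

configs = {
    "4 x 10GbE bonded":  bonded_gbps(4, 10),
    "2 x 40GbE bonded":  bonded_gbps(2, 40),
    "2 x 100GbE bonded": bonded_gbps(2, 100),
    "1 x 32Gb FC":       32.0,
}
for name, gbps in configs.items():
    print(f"{name:18} ~{gbps:>5.0f} Gb/s nominal")
# Bonding also gives cable redundancy: losing one link degrades bandwidth
# rather than severing the connection entirely.
```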
COMPLEX SUPPORT
SAN is comparatively complicated and involves many more elements, which in turn bring many more possible points of failure. Operating systems need to be matched and software needs to remain compatible after updates, or data is simply not available because the storage cannot be mounted. Deployments are very involved, and installation time, training and ongoing support are considerable.
CLEAR CHOICE
The biggest potential issue with NAS is that most systems are not built for demanding usage and large-scale deployment, so the choice of NAS is restricted to manufacturers that actually understand high bandwidth usage and can provide genuine sustained performance for mission-critical use. By comparison, a SAN is very restrictive, complicated and expensive, and only really achieves the simple function of sharing storage.
'Block level' access can be beneficial for certain uses, but the reduced latency and improved efficiency found in modern high-performance NAS storage systems mean that this marginal benefit has lessened over time. If you are looking for large-capacity, scalable shared storage that will connect to everything in your facility, then the choice is clear.
More info: www.gblabs.com/component/k2/the-future-ofshared-storage-is-nas
OPINION: CLOUD STORAGE

PANDEMIC ACTS AS 'CLOUD CATALYST' FOR REMOTE WORKING

THE COVID-19 PANDEMIC HAS FORCED BUSINESSES TO EVOLVE QUICKLY AND ADJUST TO THE NEW WORKING DYNAMIC - BUT SOME HAVE BEEN BETTER PREPARED THAN OTHERS, EXPLAINS RUSS KENNEDY, CHIEF PRODUCT OFFICER, NASUNI
Since 2017, the Microsoft Teams user base has grown astronomically to well beyond 100 million users, and Virtual Desktop Infrastructure (VDI) and Desktop as a Service (DaaS) take-up has exploded as a consequence of work moving out of the office. Organisations have had to find ways to ensure workers can remain productive as part of this shift. In the past, VDI deployments were sold as IT cost-saving efforts, which didn't always play out. Performance also suffered, because virtual infrastructures had to reach over the wire to access the files end users needed. With VDI and DaaS now being delivered from the cloud, flexibility and performance are enabling the 'work from anywhere' use case.
The game has changed dramatically, however, with desktop virtualisation now more about business continuity and remote productivity than cutting costs - and the pandemic has forced many companies to move in this direction. Three businesses we've worked with recently provide good examples of how to use a combination of cloud file storage and a powerful cloud VDI provider to maintain productivity in difficult circumstances.
The first is global oil and gas services firm Penspen, which rapidly transitioned to VDI at the start of the pandemic, standing up Windows Virtual Desktop (WVD) instances in the Azure regions closest to its employees. During the same period, professional services provider SDL also transitioned 1,500 global workers to Amazon WorkSpaces over the course of a weekend. And, after pandemic-related events shut engineering giant Ramboll out of a key data centre, the firm deployed a Nasuni VM in Azure and restored access to 300 remote users within two hours.
These examples demonstrate the transformative change at work across different industries: the incredible climb in cloud adoption and the move towards cloud-centric infrastructure. Moving servers and applications to the cloud has been the focus of infrastructure modernisation efforts for the past several years, but now companies large and small are looking for ways to leverage the benefits of the cloud for file storage. Cloud file storage is clearly helping enterprises deliver file data to users when and where it is needed, with great performance, as a productivity enabler.
The shift to remote or hybrid work is here to stay. From a file access and storage perspective, this is clear from the way Google Cloud now makes use of the same network that evolved to support YouTube. The same technology that loads an obscure video in less than a second works to ensure users can access the files they need on demand.
This is important because files are often the hardest piece of the puzzle. That's why enterprises need to be able to deliver file data to their users when and where they need it, with great performance. Cloud file storage makes that possible, and the latest approaches to enterprise file storage can drive efficiencies and lower costs by up to 70%.
At the same time, many large enterprises managing multi-petabyte environments need to be able to scale up without being constrained by hardware limits. The pandemic has driven a dramatic acceleration of the move from anything on-prem in physical data centres to the cloud. That transition has put a significant strain on organisations, as they need to ensure they have all the capabilities they've grown accustomed to in the on-prem world, in the cloud.
No one's dipping their toes into the cloud world any more - they're diving in. Enterprises need to be able to deliver file data services to users when and where they need it, with great performance - and the evolution of cloud file storage is making that possible.
More info: www.nasuni.com
ENTERPRISE STORAGE AND SERVERS FROM LENOVO
Now at Northamber
• The UK's SMB, Mid-Market, Education and Local Government Infrastructure Specialists
• In-house configuration, design, demo and training centre
• Easy access to Lenovo's Alliance Partners
• Flexible payment options with Hardware as a Service
Talk to the Northamber Solutions experts today or for more details visit northamber.com/lenovo
Call us on 020 8296 7015
northamber.com/lenovo | follow us
©Northamber 2021 E and O.E. June '21