
<strong>Legato</strong> <strong>NetWorker</strong><br />

<strong>technical</strong> <strong>overview</strong><br />

RV NRW Seminar Storage<br />

December 2001<br />

Reinhold Danner<br />

Compaq Technical<br />

Customer Support Center


Agenda<br />

• Overview<br />

• Features<br />

• High availability<br />

• SAN and Library Consolidation<br />

• NAS<br />

• SAN and Serverless backups<br />

2


Agenda<br />

•Overview<br />

• Features<br />

• High availability<br />

• SAN and Library Consolidation<br />

• NAS<br />

• SAN and Serverless backups<br />

3


What is <strong>Legato</strong> <strong>NetWorker</strong>?<br />

<strong>Legato</strong> <strong>NetWorker</strong> is a powerful and highly<br />

scalable family of distributed storage<br />

management software components for<br />

heterogeneous environments of all sizes.<br />

<strong>NetWorker</strong> provides complete centralized<br />

protection and life cycle management of<br />

corporate data assets.<br />

4


<strong>NetWorker</strong> product development<br />

1990 – 2001<br />

Unix File System backup (Networker)<br />

Windows NT File System backup<br />

UNIX and NT RDBMS Backup (Networker Modules)<br />

UNIX and NT ERP & Messaging backup (NW Modules)<br />

Storage Frameworks (GEMS)<br />

Media Manager (SmartMedia)<br />

LAN Free backup<br />

Server-Less backup<br />

5


NetWorker Server Platforms<br />

• Solaris<br />

• HP-UX<br />

• Tru64 UNIX<br />

• AIX<br />

• IRIX<br />

• Linux (Redhat, SuSE, Caldera, TurboLinux)<br />

• Windows NT and 2000<br />

• NetWare<br />

• See Compatibility Guide<br />

6


<strong>NetWorker</strong> supports<br />

• Application and Database<br />

support<br />

– Oracle<br />

– Informix<br />

– Sybase<br />

– DB2<br />

– SAP R/3 on Oracle<br />

– Microsoft SQL Server<br />

– MS Exchange<br />

– Lotus Notes<br />

• Framework integration<br />

– Tivoli<br />

– HP Openview<br />

– Unicenter TNG<br />

– Generic SNMP module<br />

See compatibility list for more details<br />

7


3-tier any-two-any storage architecture<br />

for Client/Server Environments<br />

[Diagram: one Data Zone - a <strong>NetWorker</strong> Server (Tru64 UNIX) holding the metadata indices, Storage Nodes (e.g. HP-UX), and Clients on Linux, Solaris, NT and AIX]<br />

8


G E M S<br />

Global Enterprise Management of Storage<br />

• Control Zone<br />

– Command and Control Functions<br />

• Data Zones<br />

– Data Protection<br />

– Local Administration<br />

– Performance Optimisation<br />

[Diagram: one or more Control Zones manage many Data Zones across the corporate intranet]<br />

9


Core <strong>NetWorker</strong> Technology<br />

• Metadata (Backup Database)<br />

– Relational, Fast<br />

– Granular (Separate File and Media DB)<br />

– Point-in-Time, True Image Recovery<br />

– Enables Rapid <strong>NetWorker</strong> Server Recovery<br />

– Easy to Replicate/Fail Over<br />

• Tape Format<br />

– Parallel (Backup and Recovery)<br />

– Platform Independent<br />

– Self Describing<br />

– Resilient<br />

– Enables Rapid File Location<br />

• Mature, proven technology that has spanned a decade of computing<br />

10


<strong>NetWorker</strong> Open Tape Format<br />

[Diagram: tape volume layout - a volume header (label=vol.001, blocksize=32KB, media_type=4mm), file number markers, block headers and record numbers around the data blocks, EOF marks and an end-of-data mark]<br />

• <strong>Legato</strong>'s standard tape format which enables tapes to be written to and read from ANY platform<br />

• Parallel backup with interleaving capabilities for high performance - and parallel recoveries<br />

• Platform and media independent<br />

– Same format for all supported types of media<br />

• Self-describing<br />

• Resilient<br />

– Skips soft errors<br />

• Enables rapid file location<br />

11


Modular scalable Architecture with<br />

Storage Node<br />

• Backup Device location flexibility (local or remote)<br />

• Availability: Reduce network traffic bottlenecks<br />

• Manageability: Single system view for devices, media and index<br />

• Failover: Dynamic allocation of devices across storage nodes<br />

12


Agenda<br />

• Overview<br />

•Features<br />

• High availability<br />

• SAN and Library Consolidation<br />

• NAS<br />

• SAN and Serverless backups<br />

13


Comprehensive Multi-Platform<br />

Data Protection<br />

• Central administration through GUI and<br />

scriptable commands from server/client<br />

• Policy based scheduling of enterprise wide<br />

backup operations<br />

• Flexible grouping of clients<br />

• Full, incremental, differential and consolidated<br />

backup schedule options<br />

• Optional multi-tier staging of backup operations<br />

• Ad-hoc user backup<br />

• Directed recoveries<br />

• Parallel restore capability<br />

14


Comprehensive Multi-Platform<br />

Data Protection (cont)<br />

• Pre- and Post processing options accommodate<br />

special requirements (offline DB backup)<br />

• Automatic and on-demand cloning for added<br />

protection (Saveset duplication)<br />

• Open file management options (NT)<br />

• Firewall support<br />

• Archiving<br />

• Hierarchical Storage Management (HSM)<br />

• Can use a dedicated network for backup and<br />

recovery<br />

15


Browse and Retention Policies<br />

Browse Policy for purging unwanted file version<br />

information<br />

– backup, archive and migration data visible via User<br />

Administrator interface<br />

– point in time recovery<br />

– one per saveset<br />

Retention Policy for grooming and recycling media<br />

– backup, archive and migration data visible via System<br />

Administration interface<br />

– Browse Policy depends on, and is interrelated to, the Retention Policy<br />

– tracks media usage<br />

16
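The interplay of the two policies can be sketched in a few lines of Python. This is illustrative only - `policy_dates` and the day-based granularity are inventions of this sketch, not NetWorker's API; the one fixed rule it encodes is that a saveset's browse policy may not exceed its retention policy.

```python
from datetime import date, timedelta

def policy_dates(save_time, browse_days, retention_days):
    # Browse policy: when the saveset's file entries leave the client
    # index (no more point-in-time browsing of those versions).
    # Retention policy: when the media holding it becomes recyclable.
    if browse_days > retention_days:
        raise ValueError("browse policy cannot exceed the retention policy")
    return (save_time + timedelta(days=browse_days),
            save_time + timedelta(days=retention_days))

browse_end, retention_end = policy_dates(date(2001, 12, 1), 30, 90)
print(browse_end)     # 2001-12-31: index entries purged
print(retention_end)  # 2002-03-01: media eligible for recycling
```

The same saveset thus stays recoverable (via media scanning) for the full retention period, even after it stops being browsable.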


Indexing (catalog) Technology<br />

• Excellent performance and scaling<br />

– Index performance scales linearly with # of<br />

savesets<br />

– Fast browsing<br />

– Huge index sizes supported<br />

• Fault resilient<br />

– No compression<br />

– Index files are write-once-read-many<br />

17


<strong>NetWorker</strong> Index Database Model<br />

[Diagram: the /nsr directory tree - mm/mmvolume6 holds the media database (savesets and volumes), while index/client1 ... index/clientN each hold a db6 client file index with descriptions of the saved files of every saveset]<br />

18


Networker Index features<br />

• Saveset based policies<br />

• Directed recovery within same platform<br />

• cross-platform browsing<br />

• 64 bit format data streams<br />

– No need to split larger savesets into 2GB chunks<br />

• Index files are check-summed<br />

• .rec files are write once, read many<br />

• No compression needed<br />

• Indices are saved automatically during scheduled<br />

server initiated backups<br />

• Index backups are non-blocking<br />

19


Client File Index: 6.x<br />

[Diagram: the V6hdr file contains an entry for each save set file, i.e. each save time (S1, S2, ...); S1.rec holds all records from save time S1 and S2.rec all records from save time S2; S1.K0 (keyed by name) and S1.K1 (keyed by file id) contain keys that point to the records in S1.rec]<br />

• The *.rec files are the ones that are backed up. Anything else in this directory can be rebuilt from the *.rec files by means of nsrck(8).<br />

20
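A toy illustration of why the key files are rebuildable: they only hold positions into the record file, indexed two ways. The record tuples and names here are invented for the example - the real db6 on-disk format is more involved.

```python
# Records as they might appear in an S1.rec file: (file_id, name, attrs)
records = [
    (101, "/etc/hosts",   {"size": 512}),
    (102, "/etc/passwd",  {"size": 1024}),
    (103, "/var/log/sys", {"size": 4096}),
]

def rebuild_keys(recs):
    # Rebuild the two key files from the record file: one keyed by
    # name (like S1.K0), one keyed by file id (like S1.K1). Each key
    # stores the record's position, so a lookup avoids scanning .rec.
    by_name = {name: i for i, (fid, name, _) in enumerate(recs)}
    by_id = {fid: i for i, (fid, name, _) in enumerate(recs)}
    return by_name, by_id

k0, k1 = rebuild_keys(records)
print(records[k0["/etc/passwd"]][0])  # 102
print(records[k1[103]][1])            # /var/log/sys
```

Since both key structures derive entirely from the records, losing them costs a rebuild (nsrck), never data.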


<strong>NetWorker</strong> Parallelism and Interleaving<br />

• Parallelism<br />

– Multiple data streams from multiple clients to a NW Server or Storage Node<br />

• Multiplexing<br />

– Multiple data streams written to one or multiple tape drives<br />

– Provides maximum throughput<br />

[Diagram: client streams pass through a memory buffer or shared memory (power edition only) on the NW Server; multiple savesets land on the self-describing tape, intermixed at the block level]<br />

21
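The block-level interleaving can be sketched as follows. This is a simplified model - real NetWorker tape blocks carry fuller self-describing headers - but it shows the key point: because every block is tagged with its saveset, one saveset can be recovered by reading only its own blocks.

```python
def multiplex(streams, block_size=4):
    # Interleave several client save streams onto one "tape".
    # Each tape block is self-describing: it carries the id of the
    # saveset (stream) it came from.
    tape = []
    offsets = {sid: 0 for sid in streams}
    while any(offsets[sid] < len(data) for sid, data in streams.items()):
        for sid, data in streams.items():          # round-robin
            off = offsets[sid]
            if off < len(data):
                tape.append((sid, data[off:off + block_size]))
                offsets[sid] = off + block_size
    return tape

def recover(tape, saveset_id):
    # Parallel recovery: read only blocks tagged with this saveset.
    return "".join(chunk for sid, chunk in tape if sid == saveset_id)

tape = multiplex({"clientA": "AAAAAAAA", "clientB": "BBBB"})
print(recover(tape, "clientA"))  # AAAAAAAA
print(recover(tape, "clientB"))  # BBBB
```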


Immediate technology<br />

[Diagram: control and data streams between NetWorker client and server - control travels over TCP/IP and UDP; data either crosses the network to the server's SCSI tape device, or goes straight to the SCSI device when client and server run on the same system]<br />

22


Full and Differential backup<br />

Deltas grow over time, relative to a Full<br />

[Diagram: a full backup, then differentials - each backing up all files that have changed since the last full - growing in size (1, 3, 7, ...) until the next full backup]<br />

• Full backup for every system every night is too slow<br />

• So 'DIFF' or 'INCR' must be used.<br />

• 'Level' backups grow over time so there is a practical limit to how many can be performed<br />

23


Incremental backups<br />

• Incrementals use less media space because they only<br />

back up changes made on that day<br />

• Incrementals consume less network bandwidth as there<br />

is less data to transmit<br />

• Recovery from incrementals is slower because all<br />

incremental backups must be used in the recovery.<br />

24
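The recovery-time consequence of the two approaches can be sketched as a chain computation: a restore needs the last full, plus either the single latest differential or every incremental since. Simplified here - NetWorker actually uses numeric levels 1-9 rather than a single 'diff' label.

```python
def restore_chain(history):
    # history is ordered by day; each entry is (day, level).
    # A differential contains everything since the last full, so it
    # replaces any earlier incrementals; incrementals accumulate.
    chain = []
    for entry in history:
        day, level = entry
        if level == "full":
            chain = [entry]
        elif level == "diff":
            chain = [chain[0], entry] if chain else [entry]
        else:  # "incr"
            chain.append(entry)
    return chain

history = [("Sun", "full"), ("Mon", "incr"), ("Tue", "incr"),
           ("Wed", "diff"), ("Thu", "incr")]
print(restore_chain(history))
# [('Sun', 'full'), ('Wed', 'diff'), ('Thu', 'incr')]
```

With incrementals only, every saveset since the full would appear in the chain - which is exactly why pure-incremental recovery is slower.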


Saveset Consolidation<br />

[Diagram: Day 1 full; Day 2 level 1 is merged with the full to create a Day 2 full; Day 3 level 1 is merged with that to create a Day 3 full]<br />

Merges a Level 1 backup with the full backup saveset to create a new full backup<br />

• Less traffic on network during backups<br />

• Faster recovery times<br />

• Consolidation impacts <strong>NetWorker</strong> Server only<br />

25
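A minimal model of the merge: the level 1 saveset (all files changed since the full) simply overrides matching entries in the previous full, synthesizing a new full without re-reading the client. This sketch ignores deletions, which a real consolidation also has to handle.

```python
def consolidate(full, level1):
    # Merge the level 1 changes over the previous full to produce a
    # new synthetic full - no client data is read again.
    new_full = dict(full)
    new_full.update(level1)
    return new_full

day1_full = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}
day2_lvl1 = {"b.txt": "v2"}

day2_full = consolidate(day1_full, day2_lvl1)
print(day2_full["b.txt"])  # v2 - taken from the level 1
print(day2_full["a.txt"])  # v1 - carried over from the full
```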


Comparison of backup levels<br />

Full<br />

– Pros: Faster restore<br />

– Cons: Slower backup; Higher load on server; Higher load on client/network; Takes up more volume space<br />

Incremental<br />

– Pros: Faster backup; Lower load on client/network; Lower load on server; Takes up least volume space<br />

– Cons: Slower restore; Data can spread across multiple volumes<br />

Consolidate<br />

– Pros: Faster backup; Faster restore; Lower load on client/network; Smaller backup window<br />

– Cons: Longest high load on server; Need at least two drives; Takes up the most volume space<br />

26




Archive Module<br />

[Example: an archive group "June Financials" - June_Numbers.xls, Fin_Data.doc, Fin_Graphs.html, Proposal.txt - with the annotation "June_Financials" (up to 1K, i.e. 1024 characters)]<br />

• Data 'moved' from online storage to offline media<br />

• Automatic 'grooming' releases disk space<br />

• Separate media pool<br />

• Long term retention<br />

• Server or User initiated on demand<br />

• Recall has annotated text search capabilities<br />

28


<strong>NetWorker</strong> HSM<br />

[Chart: disk utilization over time, oscillating between a high water mark (migration starts) and a low water mark (migration stops), on a scale from 0% (empty) to 100% (full)]<br />

• Migration based on<br />

– Last access time<br />

– Minimum file size<br />

– File owner<br />

– File group<br />

– File type<br />

• HSM on UNIX products<br />

– HSM on Tru64 UNIX based on symbolic links<br />

– HSM on Solaris based on DMAPI/XDSM standard<br />

• HSM on NT<br />

• Transparent to user / application<br />

• Recall happens automatically by accessing migrated files<br />

29
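The watermark mechanism can be sketched as follows - a simplified model that reduces the selection criteria to last access time and minimum file size; the file dictionary and thresholds are invented for the illustration.

```python
def migrate(files, capacity, high_water, low_water, min_size=1):
    # Once utilization crosses the high water mark, migrate the least
    # recently accessed files (at or above min_size) until utilization
    # drops to the low water mark.
    used = sum(f["size"] for f in files.values())
    migrated = []
    if used / capacity < high_water:
        return migrated                       # nothing to do yet
    for name in sorted(files, key=lambda n: files[n]["atime"]):
        if used / capacity <= low_water:
            break
        if files[name]["size"] >= min_size:
            used -= files[name]["size"]
            migrated.append(name)             # replaced by a stub on disk
    return migrated

files = {"old.dat": {"size": 40, "atime": 1},
         "mid.dat": {"size": 30, "atime": 5},
         "new.dat": {"size": 20, "atime": 9}}
print(migrate(files, capacity=100, high_water=0.8, low_water=0.5))
# ['old.dat'] - one old file is enough to fall below the low water mark
```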


Saveset Staging<br />

[Diagram: clients back up over the LAN to a high speed disk storage pool on the <strong>NetWorker</strong> Server; staged savesets then move on to removable media]<br />

• Initial backup to intermediate high speed devices<br />

• When the staging area reaches a preset threshold, NW can automatically stage data to removable media<br />

• Interim recovery operations occur at disk speed<br />

30
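The threshold-driven stage can be sketched like this; the pool sizes, the target level and the oldest-first eviction order are illustrative assumptions, not configured NetWorker behavior.

```python
def stage(pool, capacity, threshold, target):
    # pool: list of (save_time, size). When usage exceeds the
    # threshold, move the oldest savesets off to removable media
    # until usage falls to the target level. Until staged, savesets
    # are recovered straight from disk.
    used = sum(size for _, size in pool)
    staged = []
    if used <= capacity * threshold:
        return staged, pool
    pool = sorted(pool)                      # oldest save time first
    while pool and used > capacity * target:
        save_time, size = pool.pop(0)
        used -= size
        staged.append((save_time, size))     # now on tape
    return staged, pool

pool = [(1, 30), (2, 25), (3, 20), (4, 15)]  # (save_time, GB)
staged, remaining = stage(pool, capacity=100, threshold=0.8, target=0.4)
print(staged)     # [(1, 30), (2, 25)]
print(remaining)  # [(3, 20), (4, 15)]
```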


Saveset and Volume Cloning<br />

[Diagram: clients back up to the <strong>NetWorker</strong> Server's multi-device tape library, which then writes clone copies of the savesets]<br />

• Duplicate saveset for offsite vaulting<br />

• Backup operation reads filesystem once<br />

• Recover can use original or clone<br />

• Automatic scheduled or user initiated on demand<br />

31


Agenda<br />

• Overview<br />

• Features<br />

•High availability<br />

• SAN and Library Consolidation<br />

• NAS<br />

• SAN and Serverless backups<br />

32


<strong>NetWorker</strong> Server - NT MS Cluster Server<br />

[Diagram: two cluster nodes ping and pong behind the virtual server peng, with LAN clients; the shared \nsr directory fails over, and each node has a local tape device (rd=ping:\\.\Tape0, rd=pong:\\.\Tape0)]<br />

• Clients save and recover files from the virtual IP Name<br />

• Shared copper SCSI tape drives are not supported<br />

• Devices attached to the system where the NetWorker Server is running are considered local<br />

• <strong>NetWorker</strong> supports NT V4.0 MSCS & W2K Clusters<br />

33


<strong>NetWorker</strong> Server - Tru64 UNIX TruCluster 5<br />

[Diagram: cluster members ping and pong behind ClusterAlias peng, joined by the cluster interconnect; shared /nsr on HSG80 storage, SCSI/FC tape devices /dev/ntape/tape1c, tape2c and tape3c, plus a member-local device (rd=ping:/dev/ntape/tape0_d0)]<br />

• supported on TruCluster V5.0a, V5.1 and V5.1a<br />

• assumes name of default CLUA<br />

• runs as a CAA application (resource)<br />

• only one Index (default ClusterAlias) stores file entries of all filesystems<br />

• shared /nsr -> /cluster/members/{memb}/nsr -> //nsr<br />

34


Agenda<br />

• Overview<br />

• Features<br />

• High availability<br />

•SAN and Library Consolidation<br />

• NAS<br />

• SAN and Serverless backups<br />

35


Benefits of Library / Drive Sharing<br />

• Tape robots are very expensive<br />

– Sharing this cost between multiple backup servers/storage<br />

nodes increases ROI<br />

– Media management is centralised. Administration costs<br />

are reduced and ROI is improved<br />

– Most products perform library sharing but not drive sharing<br />

• Library sharing allows you to allocate drives to systems<br />

• Drive sharing allows you to dynamically change this<br />

allocation. Sometimes known as Dynamic Drive Sharing<br />

(DDS)<br />

– Library sharing is important in traditional backup<br />

environments<br />

– Drive sharing is required in most SAN solutions<br />

36


Library Sharing with ACSLS<br />

[Diagram: an ACSLS Server controls the library robotics; a mainframe and several <strong>NetWorker</strong> Servers/Storage Nodes attach to drives over SCSI and request mounts from the ACSLS Server over TCP/IP]<br />

37


<strong>NetWorker</strong> Library Sharing (static)<br />

<strong>NetWorker</strong><br />

Server<br />

<strong>NetWorker</strong> Data Zone<br />

• Jukebox is shared by one <strong>NetWorker</strong> Server<br />

and/or multiple Storage Nodes<br />

• Drives are used by one Server/Storage Node only<br />

38


<strong>NetWorker</strong> dynamic device sharing (DDS)<br />

within one Data Zone<br />

[Diagram: NW Server A controls the library robotics; Storage Nodes B and C share Drive 1 (/dev1: rd=snB:/dev1, rd=snC:/dev1) and Drive 2 (/dev2: rd=snB:/dev2, rd=snC:/dev2)]<br />

Drive - physical tape drive<br />

Device - access path to phys. drive; all devices sharing a drive have the same HW id<br />

Drive number - phys. drive's position in the jukebox<br />

Device number - device's position in the jukebox device resource<br />

Hardware ID - unique IDs for phys. drives<br />

39


GEMS SmartMedia (dynamic)<br />

[Diagram: a SmartMedia Server controls the library robotics through an LCP; the <strong>NetWorker</strong> Servers, a Storage Node and an application each run a DCP for their data paths into the library]<br />

• GEMS SmartMedia<br />

– Central Jukebox Management<br />

– Library Consolidation over Data Zones<br />

– Dynamic drive sharing (DDS)<br />

i.e. for each tape drive one DCP is configured for each system:<br />

4 tape drives => configure 4x DCP for the application node<br />

40


Open management framework<br />

41


AlphaStor - key functional areas<br />

• Device & Library Mgmt. - AlphaStor/Robot Manager<br />

• Media Life Cycle Management - AlphaStor/Media Manager<br />

42


True Device Sharing<br />

[Diagram: mixed DLT7000 and SuperDLT drives shared across a SAN]<br />

• AlphaStor creates logical separation.<br />

• <strong>NetWorker</strong> sees devices as locally attached.<br />

• Mixed device types possible.<br />

43


Device and Library Functionality<br />

• Device sharing across backup servers (data zones)<br />

• Dynamic device allocation (automated drive selection<br />

– maximizes usage)<br />

• Mixed media support (multiple drive types in same<br />

library, form factors, etc.)<br />

• Support for broad range of SCSI libraries plus<br />

controller-based libraries in SAN environment<br />

(StorageTek ACSLS/LibStation, IBM 3494, ADIC<br />

AML)<br />

44


Media Life Cycle Management<br />

• Automates tape and other media life cycle<br />

management<br />

– Onsite/offsite tape rotation policy definition<br />

• Allows media from multiple <strong>NetWorker</strong> servers<br />

(data zones) to reside in the same library<br />

• Consolidated media reporting and management<br />

across backup servers (data zones)<br />

• Available ‘scratch’ tape reporting for individual<br />

or combined applications<br />

• A point and click web interface for Operators<br />

45


Compaq EBS for Data Centers<br />

<strong>Legato</strong> <strong>NetWorker</strong><br />

[Diagram: NT/W2K, Alpha Tru64 and Sun Solaris servers on an 8/16-port Fibre Channel SAN Switch with EMA12000 disk storage, an MDR and an ESL9326DX library]<br />

• Integrated SAN of Tape and Disk on a high-speed, independent Fibre Channel (100MB/sec) storage backbone<br />

• Leverages current DLT Library technology and shares the tape device to all servers on Fabrics (cascaded) Switches<br />

• Provides a centralized, automated backup solution for multiple Compaq ProLiant and/or Compaq Alpha and/or SUN Servers<br />

• Provides large management savings and tape standardization for all servers<br />

• Storage consolidation through Heterogeneous OS & Platform Support - NT, W2K, Tru64 UNIX / TruCluster, Sun Solaris<br />

46


Agenda<br />

• Overview<br />

• Features<br />

• High availability<br />

• SAN and Library Consolidation<br />

•NAS<br />

• Serverless backups<br />

47


Network Attached Storage Support<br />

• NDMP local backup<br />

[Diagram: the Networker Backup Server sends NDMP control over the LAN; the NAS filer writes directly to its own tape library, so the backup data flow is isolated from the LAN]<br />

The NAS filer is a storage node!<br />

48


Network Attached Storage Support (cont)<br />

• Remote Backup<br />

[Diagram: NDMP backup through the network from the NAS filer to a tape library attached to the Networker Backup Server]<br />

49


Network Attached Storage Support (cont)<br />

• NDMP 3-party backup<br />

[Diagram: the Networker Backup Server sends NDMP control over the LAN; backup data flows directly from the NAS filer to a tape library on another NDMP host, isolated from the LAN]<br />

Filers with low capacity do not need a library<br />

50


SnapImage backup<br />

• <strong>NetWorker</strong> SnapImage Backup<br />

- Block Level Image Backup/ Restore<br />

– Available on Solaris 2.6/2.7 and HP-UX 11.0<br />

– Live, low impact block level backups<br />

– Full image and File level recovery<br />

– NDMP backup to a locally attached tape device<br />

or across the LAN to a tape device attached to<br />

another NDMP server<br />

51


Agenda<br />

• Overview<br />

• Features<br />

• High availability<br />

• SAN and Library Consolidation<br />

• NAS<br />

•Serverless backups<br />

52


Serverless backup on a SAN<br />

[Diagram: workstations and application servers on the messaging network; a storage server, disks and tape on the Storage Area Network]<br />

• Direct disk-to-tape and disk-to-disk storage<br />

• No impact to LAN or application servers<br />

• Enables continuous<br />

– Backup<br />

– Restore<br />

• <strong>Legato</strong> solutions<br />

– <strong>NetWorker</strong>, Celestra and NDMP<br />

53


Reengineering Backup<br />

[Diagram: a monolithic backup application (100%) reengineered into a control process (1%) and a data moving process (99%), linked by an inter-process protocol]<br />

54


Celestra 3-Tier Architecture<br />

[Diagram: <strong>NetWorker</strong> manages over the LAN via NDMP; data control runs on the application servers behind a standard interface; the Celestra Power data mover moves data over the SAN to the tape library through the Third Party Extended Block Copy Interface - software complexity and workload shrink at each lower tier]<br />

55


Celestra in a SAN Environment<br />

[Diagram: application servers with Celestra Power agents and a data mover share the SAN with the tape library; <strong>NetWorker</strong> with NDMP coordinates over the LAN]<br />

# 1 Application turns on backup<br />

# 2 Celestra sends a block list to the data mover<br />

# 3 Data mover sends blocks to tape<br />

56


Celestra Live Backup<br />

[Diagram: on the application server, a write interceptor sits between the filesystem and a checkpoint cache; the Celestra device or DataMover reads blocks from both to the tape library]<br />

Backup Events<br />

• Backup starts at 11:00pm<br />

• Snapshot taken, Cache is opened<br />

• Begin saving blocks to tape<br />

• At 11:05pm a write to the file system is done<br />

• Write Interceptor first saves original block to cache<br />

• Write Interceptor then writes new block<br />

• Resume saving blocks to tape<br />

• When we arrive at changed block, save original from cache instead<br />

• Resume saving from filesystem<br />

• Backup of filesystem - snapshot of 11:00pm - is complete<br />

57
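The write-interceptor events above amount to copy-on-write, which a short sketch makes concrete. The class name, block strings and method names are invented for the illustration; the mechanism - save the original block to cache before the first overwrite, read changed blocks from cache during backup - is what the slide describes.

```python
class WriteInterceptor:
    # Copy-on-write model of a live backup: the backup image stays
    # consistent with the snapshot moment even while writes land.
    def __init__(self, blocks):
        self.blocks = blocks      # the live filesystem, block by block
        self.cache = {}           # original content of changed blocks

    def write(self, i, data):
        if i not in self.cache:               # first change since snapshot
            self.cache[i] = self.blocks[i]    # save original to cache
        self.blocks[i] = data                 # then apply the new write

    def backup_image(self):
        # Read every block; changed blocks come from the cache.
        return [self.cache.get(i, b) for i, b in enumerate(self.blocks)]

fs = WriteInterceptor(["b0", "b1", "b2"])     # snapshot taken at 11:00pm
fs.write(1, "b1'")                            # the 11:05pm write lands
print(fs.backup_image())  # ['b0', 'b1', 'b2'] - the 11:00pm snapshot
print(fs.blocks)          # ['b0', "b1'", 'b2'] - the live filesystem
```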


Celestra Power Delivers<br />

• Offloads LAN, CPU and I/O<br />

• Moves data directly from disk to tape<br />

• Server and network performance not impacted<br />

• Safe, reliable live backup of active file systems<br />

• High speed data transfer - exceeds 1TB/hour<br />

• Full, incremental, sparse backup<br />

• Image, directory, and file restore<br />

• Proven technology (100+ sites over 5 years)<br />

58
