Using EMC Storage Array Technologies for Fast and ... - VMware
Why Are Data Migrations Performed
- Technology refresh
- Storage consolidation and upgrade
- Performance tuning
- To enable tiered storage (ILM)
Migration Challenges Facing Today's Businesses
- 24 x 7 x 365 application availability
  • No downtime windows
  • No data exposure
- Performance optimization
  • Load balancing and volume resizing
  • Storage utilization
- Complexity and cost
  • Disparate systems
  • Manual processes
  • Long planning cycles
  • Risks
Types of Data Migrations
- Backup and restore
- Host-based data migration
- ESX Server level migration
- Virtual machine migration
- Migrations using SAN or IP
- Storage virtualization (Invista, SVC, etc.)
- Migrations across heterogeneous storage arrays
  • Open Replicator
  • SAN Copy
- Migrations between homogeneous storage arrays
  • CLARiiON to CLARiiON (MirrorView)
  • Symmetrix to Symmetrix (SRDF)
  • Celerra to Celerra (Celerra Replicator)
Host-Based Migration
- The Service Console can be used to perform migrations
  • Copy each virtual machine's disks from the source to the target volume
  • Can be done using the "cp" command (not practical for ESX 3.x environments)
  • "vmkfstools" in an ESX 3.x environment provides an option to copy data (not practical for ESX 2.x environments)
- Migrations using VirtualCenter
  • "Cold" migrate can be used to move data
- Migrations inside the virtual machines
  • Host-based replication software: Open Migrator/LM for Windows and Linux, RepliStor for Windows
  • Logical Volume Managers can "move" data
Using "cp" on the ESX 2.5 Service Console
Steps needed to perform the migration:
1. Present the "source" and "target" disk(s) to the ESX Server
2. Create a VMFS on the "target" disk(s) and assign an appropriate label
3. Power the virtual machines off
4. Use "cp" to copy the virtual disks from the source VMFS to the target VMFS
5. Remove access to the "source" disk(s)
6. Rescan the SAN fabric
7. Re-label the VMFS on the "target" disk(s) with the original label
8. Power on the virtual machines
Notes:
- If the "source" VMFS does not have a label, the configuration files will need to be changed
- Migration of individual VMs on the "source" VMFS can be performed, but will need changes to the configuration files
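The copy step above can be sketched as a small loop on the service console. This is a generic sketch: the sandbox directories below stand in for the source and target VMFS volumes (on a real ESX 2.5 host these would be `/vmfs/<label>` paths), and the disk file names are illustrative.

```shell
#!/bin/sh
# Sandbox directories standing in for the source and target VMFS volumes.
# On a real ESX 2.5 host these would be /vmfs/<label> directories.
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Fake virtual disks, created only for this demonstration.
echo "vm1 disk blocks" > "$SRC/vm1.vmdk"
echo "vm2 disk blocks" > "$SRC/vm2.vmdk"

# Copy every virtual disk from the source VMFS to the target VMFS.
# The VMs owning these disks must be powered off first.
for disk in "$SRC"/*.vmdk; do
    echo "Copying $disk -> $DST/"
    cp "$disk" "$DST/"
done

ls "$DST"
```

On a real host the loop would simply run `cp` against `/vmfs/<source_label>/*.vmdk` with the target VMFS as the destination, exactly as in step 4 above.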
Listing of Source and Target VMFS
Determining the Location and Name of Virtual Disks
Performing Migration Using "cp" on the Service Console
Completing the Swap of Source and Target VMFS
Migrations for ESX 3.x Using the Service Console
- Data migration in an ESX 3.x environment is more complex than in an ESX 2.x environment
  • VC is tightly integrated
  • It is difficult to perform a data migration using just the service console
  • VC maintains significant information that is not deleted without manual intervention
- DMotion can be used
  • Currently supported only for live upgrades from ESX 2.5 to ESX 3.0.1
- Large data migrations in ESX 3.x environments can get complex; if provided the choice:
  • First perform the data migration in the ESX 2.x environment
  • Then upgrade ESX 2.x to the ESX 3.0 environment
Using "vmkfstools" on the ESX 3.x Service Console
Steps needed to perform the migration:
1. Present the "source" and "target" disk(s) to the ESX Server
2. Create a VMFS on the "target" disk(s) and assign an appropriate label
   • Use VC to create the VMFS, since it automatically aligns VMFS volumes
3. Create directories on the "target" VMFS to match the "source" VMFS
4. Copy the configuration files from the "source" VMFS to the "target" VMFS
5. Power the virtual machines off
6. Copy the virtual disks using "vmkfstools"
7. Remove access to the "source" disk(s) and rescan the SAN fabric
8. Un-register the virtual machines from VC
9. Delete the "source" VMFS information from the VC database
10. Re-label the "target" VMFS with the original "source" VMFS label name
11. Re-register the VMs and power them on
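The copy steps above (3, 4, and 6) might look like the following on the service console. This is a sketch, not a complete procedure: the volume labels, VM name, and vmhba number are illustrative, and the VC un-register/re-register steps are omitted.

```shell
# Create a matching directory on the target VMFS (labels illustrative)
mkdir -p /vmfs/volumes/target_vmfs/vm1

# Copy the configuration files first (the VM must be powered off)
cp /vmfs/volumes/source_vmfs/vm1/vm1.vmx   /vmfs/volumes/target_vmfs/vm1/
cp /vmfs/volumes/source_vmfs/vm1/vm1.nvram /vmfs/volumes/target_vmfs/vm1/

# "vmkfstools -i" clones a virtual disk onto the new VMFS
vmkfstools -i /vmfs/volumes/source_vmfs/vm1/vm1.vmdk \
              /vmfs/volumes/target_vmfs/vm1/vm1.vmdk

# After removing access to the source disks, rescan the fabric
esxcfg-rescan vmhba1
```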
Presenting Target Volumes to ESX 3.x
Migrating Data from Source to Target for ESX 3.x
Un-registering VMs using VC 2.x
Removing Access to Source VMFS
Removing Source VMFS Information on ESX 3.x
Removing Source VMFS from VC 2.x Database
Rename Target VMFS to Original VMFS Name
Register <strong>and</strong> Power on VM from New Disks
Advantages and Disadvantages of Using the COS
Advantages:
- Simple to use
- Flexible: migration can be performed at the individual virtual disk level
- Storage agnostic: the "source" disks could even be internal disks!
Disadvantages:
- Protracted outages
- Slow, cumbersome process
- Can use only processor 0 for migrating the data
  • No parallelism, even on a large multi-processor server
Migration Between Homogeneous Storage Arrays
- Host-based replication is an option (either at the VM or the ESX Server layer)
  • Storage agnostic
- Array-based replication can be used for migration to the same type of array:
  • Symmetrix to Symmetrix: SRDF
  • CLARiiON to CLARiiON: MirrorView
  • Celerra to Celerra: Celerra Replicator
- Celerra Replicator is used for replicating NAS datastores on a Celerra Network File Server
Advantages & Disadvantages
Advantages:
- Highly scalable and fast
- Does not consume host resources for the migration
- Minimal disruption to production hosts (both ESX Servers and VMs)
- Enables incremental updates and testing before cutover
- The same process can be used to migrate other operating systems
Disadvantages:
- Requires the same type of array (array dependent)
- Difficult or impossible to change the volume configuration when migrating data
- Cannot migrate at the individual VM level
Migration Between Heterogeneous Storage Arrays
- Host-based replication is an option (either at the VM or the ESX Server layer)
  • Storage agnostic
- Storage array based software:
  • Open Replicator
  • SAN Copy
- Open Replicator and SAN Copy will be the focus for the rest of the presentation
EMC Open Replicator for DMX
- Runs entirely within the DMX array
  • Uses existing hardware and network
- Mounts open systems remote volumes
  • Appears as a host to the remote storage
  • Shares front-end FC ports
- Performs raw block I/O
  • Read, Write, and Incremental Update
- 16 copies per session
- 512 concurrent sessions
[Diagram: a Symmetrix connected through the SAN to IBM, CLARiiON, HP, and Hitachi arrays]
EMC Open Replicator Modes of Operation
- Point-in-Time BCV Push
- Point-in-Time "Live" Push
- Point-in-Time Volume Pull
- "Live" Data Migration Pull
[Diagram: STD and BCV source volumes pushing to, or pulling from, target volumes. For the "live" push, the copy starts at 6:00 a.m. and ends at 6:02 a.m., yet the targets hold the 6:00 a.m. point-in-time image.]
SAN Copy for CLARiiON Arrays
Fast:
- Copies data between arrays
- Full and incremental copies
- Bi-directional
- Array-based: no server or LAN impact
Simple:
- Single point of management
- Scripted automation
- No additional hardware
Open:
- Application- and operating system-independent
- CLARiiON, Symmetrix, and third-party systems
[Diagram: a CLARiiON connected through the SAN/WAN to Symmetrix, IBM, Hitachi, and HP arrays]
SAN Copy for CLARiiON Arrays
- CLARiiON arrays capable of hosting SAN Copy: CX3-80, CX3-40, CX3-20, CX700, CX500, CX600, and CX400
- Full or incremental copies of data residing on a SAN Copy-hosted array can be "pushed" to any EMC or supported third-party array
- Full copies of data residing on any supported array can be "pulled" to a SAN Copy-hosted array
[Diagram: push from a SAN Copy-hosted source to a destination array, and pull from a source array to a SAN Copy-hosted destination]
Using Open Replicator with ESX 2.x
- Open Replicator is managed via CLI or GUI
  • Management can be performed from any supported platform
- Open Replicator can be used to migrate RDM volumes
- Open Replicator can be used to migrate VMFS volumes
  • All members of a spanned VMFS need to be migrated together
  • All virtual machines on a VMFS need to be migrated at the same time
- The incremental push capability of Open Replicator reduces downtime dramatically
- If VMFS labels are used, no reconfiguration of the virtual machines is needed
- Open Replicator is highly scalable
  • The whole environment can be migrated in one small outage window
- Steps involved when migrating data from or to a DMX:
  • Zone the DMX ports to provide access to the third-party storage array ports
  • Use LUN masking software to provide the DMX ports access to the appropriate LUNs
Migrating Data from a DMX in an ESX 2.x Environment
When migrating from a DMX to supported third-party storage:
1. Create a session that defines the relationship between the DMX LUNs and the third-party storage LUNs
   • If needed, set it up for incremental refreshes
2. Activate the session to obtain a point-in-time image of the data
   • To minimize impact, control the rate at which the data is copied
3. Recreate the session to perform an incremental push
4. Activate the recreated session to obtain a new point-in-time image
5. Repeat the steps above until the amount of data left to migrate is small
6. Shut down the virtual machines and start the final push
7. Change the LUN masking so the ESX Servers have access to the new LUNs
   • Remove access to the original LUNs
8. Rescan for the new LUNs
9. Restart the virtual machines as soon as the Open Replicator session completes
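On the Solutions Enabler CLI, the create/activate/recreate cycle above might be driven with symrcopy. The exact flags, session name, and the device-pairs file format here are assumptions shown only to illustrate the cycle, not a verified command sequence.

```shell
# device_pairs.txt pairs each DMX control device with a remote LUN WWN
# (file format illustrative).

# Create a differential push session so incremental refreshes are possible
symrcopy create -file device_pairs.txt -name esx_mig -push -differential -copy

# Activate it to take the point-in-time image and start the copy;
# a pace value throttles the copy rate to limit production impact
symrcopy activate -file device_pairs.txt
symrcopy set pace 4 -file device_pairs.txt

# Once the first pass completes, recreate + activate performs an
# incremental push of only the data changed since the last image
symrcopy recreate -file device_pairs.txt
symrcopy activate -file device_pairs.txt

# Monitor progress until the remaining delta is small
symrcopy query -file device_pairs.txt
```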
Identifying the Devices to be Migrated
Determining Storage Array Volumes to be Migrated
Providing DMX Access to Remote Storage Devices
Creating and Activating Open Replicator Session
Tuning Open Replicator Sessions
Effect of Setting Ceiling on Migration
Querying Status of Sessions
Differential Push of Data
Benefits of Using Incremental Push
Changing ESX Server Access to Volumes
Finishing the Migration
Starting Virtual Machines on Migrated Volumes
Migrating Data to a DMX in an ESX 2.x Environment
When migrating to a DMX from supported third-party storage:
1. Create a session that defines the relationship between the DMX LUNs and the third-party storage LUNs
   a. If available, ensure donor update is turned on
2. Shut down the virtual machines
3. Activate the session to start the migration of the data
4. Change the LUN masking so the ESX Servers have access to the new LUNs
   a. Remove access to the original LUNs
5. Rescan for the new LUNs
6. Restart the virtual machines as soon as possible
   a. The data migration continues in the background
The outage time is approximately the time required for steps 2–5 listed above.
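The pull direction can be sketched the same way. Again, the symrcopy flags and file name are assumptions for illustration only; the point is that donor update lets the VMs restart while the copy continues in the background.

```shell
# Pull into the DMX with donor update enabled: writes to the new DMX
# devices are mirrored back to the donor while the copy runs
symrcopy create -file device_pairs.txt -name esx_pull -pull -copy -donor_update

# The VMs are shut down here (step 2), then the session is activated (step 3)
symrcopy activate -file device_pairs.txt

# Re-mask the LUNs, rescan, and restart the VMs (steps 4-6);
# the background copy can then be monitored until it completes
symrcopy query -file device_pairs.txt
```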
SAN Copy for Migrating ESX Server Environments
- SAN Copy is managed via CLI or GUI
- SAN Copy is very similar to Open Replicator, except that it:
  • Cannot support as many concurrent migrations as Open Replicator
  • Cannot provide a consistent point-in-time copy of the data on the initial push
  • Requires waiting for a pull to complete before the data can be accessed
  • Does not have the donor update feature
- All virtual machines on a VMFS need to be migrated at the same time
- The incremental push capability of SAN Copy reduces downtime dramatically
- SAN Copy can provide a much faster migration than what can be obtained with host-based software
- The steps involved when migrating data from or to a CLARiiON are the same as those listed for Open Replicator
Advantages & Disadvantages
Advantages of using storage array based SAN migration software:
- Scalable and fast migration rates
- Reduced downtime
- Incremental refresh capabilities
- Test before you actually swap (leveraging storage array based snaps)
- No host cycles involved
Disadvantages of using storage array based SAN migration software:
- Cannot migrate a subset of virtual disks
- Can be more complex than using host-based utilities
- Requires both the source and target storage to be on the SAN
When to Use What Technologies for Migration
Open Replicator and SAN Copy are ideal when:
- A large amount of data (100+ GB) needs to be migrated
- The migration involves 10 or more virtual machines
- The environment does not have a large maintenance window
- The migration is between dissimilar arrays (one of them being a DMX or CLARiiON)
Use Open Replicator:
- When migrating data from a DMX to supported third-party arrays
- When migrating data to a DMX from other supported arrays (except CLARiiON)
Use SAN Copy:
- When migrating data from a CLARiiON to supported third-party arrays
- Can also be used for migrating data to a CLARiiON, but the outage windows would be larger
Use host-based utilities when migrating a small amount of data or a small number of VMs.
Acknowledgments
- Mike McGhee, Symmetrix Engineering
- Susan Young, VMware Instructor
- Sheetal Kochavara, CLARiiON Engineering
- Jeff Bernard, Alliance Manager
References
- "Best Practices for Deploying VMware ESX 3.x and 2.5.x Server with EMC Storage Products" at VMworld 2006
- VMware ESX Server Using EMC Symmetrix Storage Systems
- VMware ESX Server Using EMC CLARiiON Storage Systems
- CLARiiON Integration with VMware ESX Server
Presentation Download
Please remember to complete your session evaluation form and return it to the room monitors as you exit the session.
The presentation for this session can be downloaded at http://www.vmware.com/vmtn/vmworld/sessions/
Enter the following to download (case-sensitive):
- Username: cbv_rep
- Password: cbvfor9v9r