

A redo log is created for HotAdded disks, on the same datastore as the base disks. Do not remove the target virtual machine (the one being backed up) while a HotAdded disk is still attached. If it is removed, HotAdd cannot clean up redo logs properly, so the virtual disks must be removed manually from the backup appliance. Also, do not remove the snapshot until after cleanup. Removing it could result in an unconsolidated redo log.

HotAdd is a SCSI feature and does not work for IDE disks. The paravirtual SCSI controller (PVSCSI) is not supported for HotAdd; use the LSI controller instead.

Removing all disks on a controller with the vSphere Client also removes the controller. You might want to include checks in your appliance code to detect this condition and reconfigure the appliance to add the controllers back.

A virtual disk created on Windows by HotAdd backup or restore might have a different disk signature than the original virtual disk. The workaround is to reread or rewrite the first disk sector in NBD mode.
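For example, a restore application could reread and rewrite sector 0 over NBD along the following lines. This is a minimal sketch, assuming the connection was established with VixDiskLib_ConnectEx() using the "nbd" transport mode; the function name rewriteFirstSector is illustrative and error handling is abbreviated.

    /* Reread sector 0 and write it back over an NBD connection, so the
     * restored disk ends up with a consistent, readable disk signature.
     * Assumes conn came from VixDiskLib_ConnectEx() with transportModes
     * set to "nbd", and diskPath names the restored virtual disk. */
    #include <stddef.h>
    #include "vixDiskLib.h"

    VixError rewriteFirstSector(VixDiskLibConnection conn, const char *diskPath)
    {
        uint8 sector0[VIXDISKLIB_SECTOR_SIZE];
        VixDiskLibHandle disk = NULL;
        VixError err;

        err = VixDiskLib_Open(conn, diskPath, 0 /* read-write */, &disk);
        if (err != VIX_OK) {
            return err;
        }
        err = VixDiskLib_Read(disk, 0, 1, sector0);      /* read sector 0 */
        if (err == VIX_OK) {
            err = VixDiskLib_Write(disk, 0, 1, sector0); /* write it back */
        }
        VixDiskLib_Close(disk);
        return err;
    }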

HotAdded disks should be released with VixDiskLib_Cleanup() before snapshot delete. Cleanup might cause improper removal of the change tracking (ctk) file. You can fix it by power cycling the virtual machine.
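A backup appliance might release its HotAdded disks with a call such as the following before requesting snapshot deletion. This is a sketch only; the function name releaseHotAddedDisks is illustrative, and connectParams is assumed to identify the virtual machine being backed up.

    /* Release any disks still HotAdded to the appliance. Cleanup reports
     * how many mounts were removed and how many remain. */
    #include "vixDiskLib.h"

    VixError releaseHotAddedDisks(const VixDiskLibConnectParams *connectParams)
    {
        uint32 cleanedUp = 0;
        uint32 remaining = 0;
        VixError err = VixDiskLib_Cleanup(connectParams, &cleanedUp, &remaining);
        /* If remaining is nonzero, investigate before deleting the snapshot. */
        return err;
    }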

Customers running a Windows Server 2008 proxy on SAN storage should set SAN policy to onlineAll (see note about SAN policy in “Best Practices for SAN Transport” on page 81).
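For reference, one way to set the policy on the proxy is with the diskpart utility, for example:

    C:\> diskpart
    DISKPART> san policy=OnlineAll
    DISKPART> exit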

Best Practices for NBDSSL Transport

Various versions of ESX/ESXi have different defaults for NBD timeouts. Some have no timeouts. VMware recommends that you specify a default NBD timeout in the VixDiskLib configuration file. If you do not specify a timeout, some versions of ESX/ESXi will hold the corresponding disk open indefinitely, until vpxa or hostd is restarted. However, if you set a timeout, you might have to perform some “keepalive” operations to prevent the disk from being closed on the server side. Reading block 0 periodically is a good keepalive operation.
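A keepalive can be as simple as reading sector 0 of each open disk at an interval shorter than the configured timeout. The sketch below assumes handles obtained from VixDiskLib_Open(); scheduling the calls, for example from a timer thread, is left to the application.

    /* Periodically read sector 0 of each open disk so the server-side
     * connection is not closed for inactivity. The data is discarded. */
    #include <stddef.h>
    #include "vixDiskLib.h"

    void nbdKeepAlive(VixDiskLibHandle *openDisks, size_t numDisks)
    {
        uint8 buf[VIXDISKLIB_SECTOR_SIZE];
        size_t i;
        for (i = 0; i < numDisks; ++i) {
            (void) VixDiskLib_Read(openDisks[i], 0, 1, buf);
        }
    }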

Before ESXi 5.0 there were no default network file copy (NFC) timeouts. Default NFC timeout values may change in future releases. VMware recommends that you specify default NFC timeouts in the VixDiskLib configuration file. If you do not specify a timeout, older versions of ESX/ESXi hold the corresponding disk open indefinitely, until vpxa or hostd is restarted. However, with a timeout, you might need to perform some “keepalive” operation to prevent the disk from being closed on the server side. Reading block 0 periodically is a good keepalive operation.

As a starting point, recommended settings are 3 minutes for Accept and Request, 1 minute for Read, 10 minutes for Write, and no timeouts (0) for nfcFssrvr and nfcFssrvrWrite.
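Expressed in milliseconds, these starting values might appear in the VixDiskLib configuration file roughly as follows. The key names are assumed to match the NFC timeout settings documented for your VDDK release; verify them before use.

    vixDiskLib.nfc.AcceptTimeoutMs=180000
    vixDiskLib.nfc.RequestTimeoutMs=180000
    vixDiskLib.nfc.ReadTimeoutMs=60000
    vixDiskLib.nfc.WriteTimeoutMs=600000
    vixDiskLib.nfcFssrvr.TimeoutMs=0
    vixDiskLib.nfcFssrvrWrite.TimeoutMs=0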

General Backup and Restore<br />

For incremental backup of virtual disk, always enable changed block tracking (CBT) before the first snapshot. When doing full restores of virtual disk, disable CBT for the duration of the restore. File-based restores affect change tracking, but disabling CBT is optional for partial restores, except with SAN transport. CBT should be disabled for SAN transport writes because the file system must be able to account for thin-disk allocation and clear-lazy-zero operations.

Backup software should ignore independent disks (those not capable of snapshots). These virtual disks are unsuitable for backup. They throw an error if a snapshot is attempted on them.

To back up thick disk, the proxy's datastore must have at least as much free space as the maximum configured disk size for the backed-up virtual machine. Thin-provisioned disk is often faster to back up.

If you do a full backup of lazy-zeroed thick disk with CBT disabled, the software reads all sectors, converting data in empty (lazy-zero) sectors to actual zeros. Upon restore, this full backup data will produce eager-zeroed thick disk. This is one reason why VMware recommends enabling CBT before the first snapshot.

Backup and Restore of Thin-Provisioned Disk

Thin-provisioned virtual disk is created on first write. So the first-time write to thin-provisioned disk involves extra overhead compared to thick disk, whether using NBD, NBDSSL, or HotAdd. This is due to block allocation overhead, not to VDDK advanced transports. However, once thin disk has been created, performance is similar to thick disk, as discussed in the Performance Study of VMware vStorage Thin Provisioning.

