Diagnostics and Troubleshooting Reference Manual – Virtual SAN
Anatomy of a read in all-flash configurations

The major difference between a read in a hybrid configuration and a read in an all-flash configuration is that in an all-flash configuration, the flash cache is not used for caching reads; it is dedicated as a write cache only. If the read operation does not find the block in the flash cache in an all-flash configuration, the block is read directly from the capacity flash device. Unlike in a hybrid configuration, the block is not placed in the tier-1 flash cache. The performance of the all-flash capacity tier is more than sufficient for reads.
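The read path described above can be sketched as follows. This is an illustrative model only, assuming simple dictionaries standing in for the tier-1 write cache and the capacity flash tier; the names `read_block`, `write_cache`, and `capacity_tier` are hypothetical and not Virtual SAN APIs.

```python
def read_block(lba, write_cache, capacity_tier):
    """Return the data for `lba`, preferring the tier-1 write cache.

    In an all-flash configuration, the tier-1 flash device holds only
    recently written blocks. On a cache miss, the block is served
    straight from the capacity flash device and is NOT promoted into
    the tier-1 cache (unlike the hybrid read path).
    """
    if lba in write_cache:       # block was recently written
        return write_cache[lba]
    return capacity_tier[lba]    # served directly from capacity flash;
                                 # deliberately no insert into write_cache
```

The key difference from the hybrid read path is the absence of any promotion step on a miss: the cache contents are unchanged after a read.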
Anatomy of a write in all-flash configurations

In the all-flash configuration, the tier-1 flash cache is used for write caching only, in what can be considered a write-back cache. When the working set is bigger than the write cache, cold data blocks are evicted from the tier-1 write cache to the flash capacity devices. If the working set of the virtual machine fits completely in the tier-1 write cache, no data blocks are written to the flash capacity devices at all.
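The write-back behaviour above can be sketched with a fixed-size cache that destages its coldest blocks when the working set outgrows it. This is a minimal model, not Virtual SAN code; the class name, the block-count sizing, and the least-recently-written eviction order are all assumptions made for illustration.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy model of a tier-1 write-back cache in front of capacity flash."""

    def __init__(self, capacity_blocks, capacity_tier):
        self.cache = OrderedDict()          # lba -> data, coldest entry first
        self.capacity_blocks = capacity_blocks
        self.capacity_tier = capacity_tier  # dict standing in for capacity flash

    def write(self, lba, data):
        # The write is acknowledged once it lands in the tier-1 cache.
        self.cache[lba] = data
        self.cache.move_to_end(lba)         # mark as most recently written
        # Evict cold blocks to the flash capacity devices only when the
        # working set no longer fits in the write cache.
        while len(self.cache) > self.capacity_blocks:
            old_lba, old_data = self.cache.popitem(last=False)
            self.capacity_tier[old_lba] = old_data
```

Note that, as in the text, a working set that fits entirely in the cache never touches the capacity tier: eviction only begins once the cache overflows.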
Virtual SAN caching algorithms

Virtual SAN implements a distributed persistent cache on flash devices across the Virtual SAN cluster. In Virtual SAN 5.5 and 6.0 hybrid configurations, caching is done in front of the magnetic disks where the data lives, not on the client side where the virtual machine's compute resides. A common question is why this approach to caching was taken.
The reason is that such distributed caching results in better overall utilization of flash, which is the most valuable storage resource in the cluster. Also, with DRS and vMotion, virtual machines move between hosts in a cluster. You do not want to move gigabytes of data around, or re-warm caches, every time a VM migrates. Indeed, in Virtual SAN you will see no performance degradation after a VM migration.
Enhancements to caching algorithms in 6.0

Virtual SAN 6.0 uses a tier-1 flash device as a write cache in both hybrid and all-flash disk groups. However, the caching algorithm optimizes for very different goals in each case. In hybrid disk groups, the caching algorithm aims to accumulate large proximal chunks of data for each magnetic disk. The priority is to maximize the write performance obtained from the magnetic disks by applying a nearly sequential workload to them when destaging from the flash cache to the magnetic disks (the elevator algorithm).
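The idea of accumulating proximal chunks and destaging them near-sequentially can be sketched as follows: dirty blocks are grouped per magnetic disk and issued in ascending LBA order, so each disk sees an almost sequential write stream (an elevator-style pass). This is a hedged illustration of the concept only; the function name, the tuple layout, and the single ascending pass are assumptions, not the actual Virtual SAN destaging implementation.

```python
def destage_order(dirty_blocks):
    """Compute a near-sequential destage order for dirty cache blocks.

    dirty_blocks: list of (disk_id, lba, data) tuples taken from the
    tier-1 write cache.  Returns {disk_id: [(lba, data), ...]} with each
    disk's blocks sorted by LBA, so the magnetic disk receives a nearly
    sequential workload instead of random writes.
    """
    per_disk = {}
    for disk_id, lba, data in dirty_blocks:
        per_disk.setdefault(disk_id, []).append((lba, data))
    for disk_id in per_disk:
        per_disk[disk_id].sort(key=lambda b: b[0])  # ascending-LBA elevator pass
    return per_disk
```

Sorting by LBA is what turns a random scatter of dirty blocks into long, mostly sequential runs, which is exactly the property magnetic disks reward.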
VMware Storage BU Documentation / 248