An Operating Systems Vade Mecum

The kernel can reduce the amount of physical transput by employing the Cache Principle. If a process has just read some bytes, the chances are good that the next bytes on the device will be read next. Instead of repeating the read operation for the entire block, the device driver can save the block that was read in last and extract the next 10 bytes, for example, when needed. In this case, many calls on the lower part of the device driver can be avoided. We can also avoid blocking the caller, in keeping with the Hysteresis Principle. Therefore, whenever a process requests input, the driver should first check to see if the desired data block is already in main store. Only if it is not must the lower part of the device driver be invoked to bring it in. The set of buffers the kernel keeps in main store in the hope they will be needed again forms a cache. A form of LRU replacement can be implemented for this cache.

Writing can also take advantage of this cache. It is not necessary to read in a fresh copy of the block about to be modified if a copy already sits in the cache. After the modification is performed on the main-store copy, that cache entry should be marked ‘‘dirty.’’ It is not yet necessary to write it back out to the physical device, since it may be needed again soon. If any process wants to read it in the meantime, the main-store version will be used, and this version is up-to-date. Further writes may also affect the same data; it is not necessary to archive the data until all such writes have finished. The dirty buffer should be written out when its space is needed by some new buffer or when the device is idle and may as well be employed in writing it out. This method is known as write-behind. Write-behind is a typical form of lazy evaluation, in which work is delayed in the hope that it will not be needed at all. Write-behind makes sense for both synchronous and asynchronous transput; the operation is considered completed when the data block is properly modified even if it has not yet been written out.

Write-behind has some dangers. If the operating system should crash (fail unexpectedly), data that should have been written to the device may still be in main store, and it may not be feasible to recover them. For this reason, dirty buffers should periodically be archived from the cache so that the device is always relatively up-to-date. Database programs need to be able to force such archiving and to know when it is finished. Another unfortunate property of write-behind is that device errors cannot be presented to the process that wants to perform output. If, for example, the process tries to write past the end of a tape, the physical write command may be issued after the process has started doing unrelated operations or even after the process has terminated. Presenting this error to the process is then either very cumbersome or impossible.
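To make the mechanism concrete, here is a minimal sketch in C of such a data-block cache with LRU replacement and write-behind. Every name in it (cache_read, cache_write, cache_sync, device_read, device_write, NBUF, BLOCK_SIZE) is invented for illustration and is not drawn from any particular operating system; the lower part of the device driver is stubbed out, and a real kernel would also need locking and would hash on the pair (device, block) rather than searching linearly.

    #include <string.h>

    #define BLOCK_SIZE 512
    #define NBUF 64                     /* number of buffers in the cache */

    struct buffer {
        int  device;                    /* which device the block lives on */
        long block;                     /* block number on that device */
        int  valid;                     /* does this slot hold real data? */
        int  dirty;                     /* modified but not yet archived? */
        long last_used;                 /* pseudo-time stamp for LRU */
        char data[BLOCK_SIZE];
    };

    static struct buffer cache[NBUF];
    static long now;                    /* advanced on every cache access */

    /* Lower part of the device driver; stubbed out for this sketch. */
    static void device_read(int device, long block, char *data)
        { (void) device; (void) block; (void) data; }
    static void device_write(int device, long block, const char *data)
        { (void) device; (void) block; (void) data; }

    /* Find a cached copy of the given block, if any. */
    static struct buffer *lookup(int device, long block)
    {
        for (int i = 0; i < NBUF; i++)
            if (cache[i].valid && cache[i].device == device
                    && cache[i].block == block)
                return &cache[i];
        return NULL;
    }

    /* Choose a victim slot by LRU.  If the victim is dirty, this is the
       moment write-behind finally performs the physical write. */
    static struct buffer *evict(void)
    {
        struct buffer *victim = &cache[0];
        for (int i = 0; i < NBUF; i++) {
            if (!cache[i].valid)
                return &cache[i];       /* an empty slot; nothing to evict */
            if (cache[i].last_used < victim->last_used)
                victim = &cache[i];
        }
        if (victim->dirty)
            device_write(victim->device, victim->block, victim->data);
        victim->valid = 0;
        return victim;
    }

    /* Synchronous read: only a cache miss invokes the lower part. */
    void cache_read(int device, long block, char *out)
    {
        struct buffer *b = lookup(device, block);
        if (b == NULL) {
            b = evict();
            device_read(device, block, b->data);
            b->device = device;
            b->block = block;
            b->valid = 1;
            b->dirty = 0;
        }
        b->last_used = ++now;
        memcpy(out, b->data, BLOCK_SIZE);
    }

    /* Write-behind: modify the main-store copy and mark it dirty; the
       physical write is deferred until eviction or cache_sync.  This
       sketch assumes whole-block writes; a partial write that misses
       would first have to read the old block in. */
    void cache_write(int device, long block, const char *in)
    {
        struct buffer *b = lookup(device, block);
        if (b == NULL) {
            b = evict();
            b->device = device;
            b->block = block;
            b->valid = 1;
        }
        memcpy(b->data, in, BLOCK_SIZE);
        b->dirty = 1;
        b->last_used = ++now;
    }

    /* Periodically archive dirty buffers so that a crash loses little. */
    void cache_sync(void)
    {
        for (int i = 0; i < NBUF; i++)
            if (cache[i].valid && cache[i].dirty) {
                device_write(cache[i].device, cache[i].block, cache[i].data);
                cache[i].dirty = 0;
            }
    }

The routine cache_sync corresponds to the periodic archiving discussed above; letting a database program call it directly and wait for it to return is one way to meet the requirement that such programs can force archiving and learn when it is finished.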
The data-block cache can be used to reduce the amount of time processes are blocked during synchronous read, just as write-behind reduces the amount of blocking time during synchronous write. Under the read-ahead policy, the device driver brings in the next data block before the process has made a request that requires it. When the process gets to that region of data, the cache will already contain it, so the process will not be forced to wait. Of course, the process may never want to read from the next block, because it may be performing random accesses or reading only the initial part of the device. In these cases read-ahead wastes device bandwidth. Therefore, the kernel should avoid read-ahead if the process has recently performed a Position operation, which indicates random-access activity. (Database programs often get no benefit from read-ahead because even though they may know exactly which page of a file they need to read next, the page is often not the ‘‘next’’ one as far as the kernel is concerned, so no read-ahead is performed.) Read-ahead is a typical form of eager evaluation, in which work is performed early in the hope that it will be useful.
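Read-ahead can be layered on top of the same cache. The sketch below reuses cache_read and BLOCK_SIZE from the previous sketch; the open_file record, its last_block field, and the names read_with_readahead and do_position are again invented for illustration. For simplicity the prefetch is written as an ordinary synchronous cache_read, which would delay the current caller; a real driver would instead queue the request for the lower part so that only the device, not the process, does the extra work now.

    /* Per-open-file bookkeeping used to guess at sequential access. */
    struct open_file {
        int  device;
        long last_block;            /* block delivered by the previous Read */
    };

    /* Upper part of the driver: satisfy a Read, then decide on read-ahead. */
    void read_with_readahead(struct open_file *f, long block, char *out)
    {
        cache_read(f->device, block, out);

        /* Consecutive block numbers suggest sequential access.  Eager
           evaluation: pull the next block into the cache now, so that a
           later synchronous read finds it there and need not block. */
        if (block == f->last_block + 1) {
            char next[BLOCK_SIZE];
            cache_read(f->device, block + 1, next);
        }
        f->last_block = block;
    }

    /* A Position operation indicates random access; resetting last_block
       to an impossible value suppresses read-ahead on the next Read. */
    void do_position(struct open_file *f)
    {
        f->last_block = -2;
    }

The heuristic here is deliberately crude: one pair of consecutive reads triggers read-ahead, and a single Position suppresses it. A production kernel would more likely track the length of the sequential run and read several blocks ahead once the pattern is well established.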
