

5.6 Virtual Memory Systems and TLB Structures

Bruce Jacob

Virtual Memory, a Third of a Century Later

Virtual memory was designed in the late 1960s to provide automated storage allocation. It is a technique for managing the resource of physical memory that provides to the application an illusion of a very large amount of memory—typically, much larger than is actually available. In a virtual memory system, only the most often used portions of a process's address space actually occupy physical memory; the rest of the address space is stored on disk until needed. When the mechanism was invented, computer memories were physically large (one kilobyte of memory occupied a space the size of a refrigerator), they had access times comparable to the processor's speed (both were extremely slow), and they came with astronomical price tags. Due to space and monetary constraints, installed computer systems typically had very little memory—usually less than the size of today's on-chip caches, and far less than the users of the systems would have liked. The virtual memory mechanism was designed to solve this problem, by using a system's disk space as if it were memory and placing into main memory only the data used most often.

© 2002 by CRC Press LLC
