ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition

A3.9 Caches and memory hierarchy

Application Level Memory Model

The implementation of a memory system depends heavily on the microarchitecture and therefore the details of the system are IMPLEMENTATION DEFINED. ARMv7 defines the application level interface to the memory system, and supports a hierarchical memory system with multiple levels of cache. This section provides an application level view of this system. It contains the subsections:

• Introduction to caches
• Memory hierarchy on page A3-52
• Implication of caches for the application programmer on page A3-52
• Preloading caches on page A3-54.

A3.9.1 Introduction to caches

A cache is a block of high-speed memory that contains a number of entries, each consisting of:

• main memory address information, commonly known as a tag
• the associated data.
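As an illustrative sketch only, a single cache entry can be modeled in C as a tag plus its associated data. The line size and field widths here are assumptions for the example, not values mandated by the ARMv7 architecture:

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_SIZE 32   /* bytes per line; illustrative, not fixed by ARMv7 */

/* One cache entry: the tag records which block of main memory the
 * line holds, and the data array is the cached copy of that block. */
typedef struct {
    bool     valid;            /* line contains usable data */
    uint32_t tag;              /* upper address bits of the cached block */
    uint8_t  data[LINE_SIZE];  /* the associated data */
} cache_line_t;
```

A real implementation also keeps state bits such as dirty or coherency state; those are omitted here for clarity.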

Caches are used to increase the average speed of a memory access. Cache operation takes account of two principles of locality:

Spatial locality
    An access to one location is likely to be followed by accesses to adjacent locations. Examples of this principle are:
    • sequential instruction execution
    • accessing a data structure.

Temporal locality
    An access to an area of memory is likely to be repeated within a short time period. An example of this principle is the execution of a code loop.
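The effect of spatial locality can be seen in how a two-dimensional array is traversed. The following sketch (array sizes are arbitrary choices for the example) contrasts a traversal that walks adjacent addresses with one that strides across them; both compute the same result, but the first makes far better use of loaded cache lines:

```c
#include <stddef.h>

#define ROWS 256
#define COLS 256

/* Row-major traversal: consecutive iterations touch adjacent
 * addresses, so most accesses hit in an already-loaded cache line. */
long sum_row_major(int m[ROWS][COLS]) {
    long s = 0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            s += m[i][j];
    return s;
}

/* Column-major traversal: successive accesses are COLS * sizeof(int)
 * bytes apart, defeating spatial locality, so each access is likely
 * to fall in a line that is not currently cached. */
long sum_col_major(int m[ROWS][COLS]) {
    long s = 0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            s += m[i][j];
    return s;
}
```

The loop body itself exhibits temporal locality: the loop instructions are fetched repeatedly and so remain resident in the instruction cache.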

To minimize the quantity of control information stored, the spatial locality property is used to group several locations together under the same tag. This logical block is commonly known as a cache line. When data is loaded into a cache, access times for subsequent loads and stores are reduced, resulting in overall performance benefits. An access to information already in a cache is known as a cache hit, and other accesses are called cache misses.
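Grouping locations under one tag means an address splits into a tag, a set index, and an offset within the line. The geometry below (32-byte lines, 128 sets, suggesting a 4KB direct-mapped cache) is an assumption chosen for the example, not an architecturally defined layout:

```c
#include <stdint.h>

/* Illustrative geometry: these sizes are example assumptions,
 * not values fixed by the ARMv7 architecture. */
#define OFFSET_BITS 5   /* log2 of the 32-byte line size */
#define INDEX_BITS  7   /* log2 of the 128 sets          */

/* Byte position within the cache line. */
static inline uint32_t line_offset(uint32_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);
}

/* Which set (line slot) of the cache the address maps to. */
static inline uint32_t line_index(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}

/* The remaining upper bits, stored as the tag: all 32 bytes of a
 * line share one tag, which is the control-information saving. */
static inline uint32_t line_tag(uint32_t addr) {
    return addr >> (OFFSET_BITS + INDEX_BITS);
}
```

For example, with this geometry the address 0x80001234 decomposes into offset 0x14, index 0x11, and tag 0x80001.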

Normally, caches are self-managing, with the updates occurring automatically. Whenever the processor wants to access a cacheable location, the cache is checked. If the access is a cache hit, the access occurs in the cache; otherwise a location is allocated and the cache line is loaded from memory. Different cache topologies and access policies are possible; however, they must comply with the memory coherency model of the underlying architecture.
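This check-then-allocate behavior can be sketched as a direct-mapped lookup. This is a simplified software model (the memory refill itself is elided, and the sizes are the same example assumptions as above), not a description of any particular implementation:

```c
#include <stdint.h>
#include <stdbool.h>

#define N_SETS    128   /* example geometry, as before */
#define OFF_BITS  5     /* 32-byte lines */
#define IDX_BITS  7

typedef struct {
    bool     valid;
    uint32_t tag;
} line_t;

typedef struct {
    line_t   lines[N_SETS];
    unsigned hits, misses;
} cache_t;

/* Check the cache for addr. On a hit the access completes in the
 * cache; on a miss the line is allocated and (conceptually) refilled
 * from memory. Returns true on a hit. */
bool cache_access(cache_t *c, uint32_t addr) {
    uint32_t idx = (addr >> OFF_BITS) & ((1u << IDX_BITS) - 1);
    uint32_t tag = addr >> (OFF_BITS + IDX_BITS);
    line_t *l = &c->lines[idx];
    if (l->valid && l->tag == tag) {
        c->hits++;
        return true;            /* access occurs in the cache */
    }
    l->valid = true;            /* allocate the line */
    l->tag   = tag;             /* refill from memory elided */
    c->misses++;
    return false;
}
```

Running the model shows the locality effect directly: after a cold miss on one address, a second access within the same 32-byte line hits, while an access to the next line misses again.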

Caches introduce a number of potential problems, mainly because of:

• memory accesses occurring at times other than when the programmer would normally expect them
• there being multiple physical locations where a data item can be held.

ARM DDI 0406B Copyright © 1996-1998, 2000, 2004-2008 ARM Limited. All rights reserved. A3-51
