
Chapter 7

Cachet: A Seamless Integration of Multiple Micro-protocols

The Cachet protocol is a seamless integration of the Base, WP, and Migratory protocols presented in Chapters 4, 5, and 6. Although each protocol is complete in terms of functionality, we often refer to them as micro-protocols because they constitute parts of the full Cachet protocol. The Cachet protocol provides both intra-protocol and inter-protocol adaptivity that can be exploited via appropriate heuristic mechanisms to achieve optimal performance under changing program behaviors. Different micro-protocols can be used by different cache engines, and a cache can dynamically switch from one micro-protocol to another.
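As a rough illustration of this per-cache adaptivity, the sketch below tags each cached cell with the micro-protocol currently governing it and lets some heuristic policy switch a cell from one micro-protocol to another. The names (MicroProtocol, CacheCell, switch_protocol) are hypothetical and do not correspond to the actual Cachet states or rules.

```python
from enum import Enum

class MicroProtocol(Enum):
    BASE = "Base"            # memory is the rendezvous
    WP = "WP"                # writer-push
    MIGRATORY = "Migratory"

class CacheCell:
    """A cached address tagged with the micro-protocol that governs it (hypothetical model)."""
    def __init__(self, address, data, protocol=MicroProtocol.BASE):
        self.address = address
        self.data = data
        self.protocol = protocol

    def switch_protocol(self, new_protocol):
        # In Cachet, a protocol switch is realized by downgrade/upgrade operations;
        # here we only record the decision made by some heuristic policy.
        if new_protocol is not self.protocol:
            self.protocol = new_protocol

# Different cache engines may use different micro-protocols for the same address,
# and each cache may switch dynamically as program behavior changes.
cell_in_cache1 = CacheCell(0x100, 42, MicroProtocol.WP)
cell_in_cache2 = CacheCell(0x100, 42, MicroProtocol.BASE)
cell_in_cache2.switch_protocol(MicroProtocol.MIGRATORY)
```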

We first discuss the integration of micro-protocols and the dynamic protocol switch through downgrade and upgrade operations. Section 7.2 describes the coherence states and protocol messages of Cachet. In Sections 7.3 and 7.4, we present the imperative and integrated rules of the Cachet protocol, respectively. Section 7.5 gives some composite rules that can be used to improve performance without affecting the soundness and liveness of the system. The Cachet protocol assumes FIFO message passing, which requires that messages between the same source and destination be received in the order they are issued.
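The FIFO assumption can be pictured as one ordered queue per (source, destination) pair. The sketch below, with hypothetical class and message names, shows messages being delivered in exactly the order they were issued between the same two sites; it is a model of the assumption, not part of the Cachet specification.

```python
from collections import deque

class FifoNetwork:
    """Point-to-point FIFO message passing: messages between the same source and
    destination are received in the order they are issued (hypothetical model)."""
    def __init__(self):
        self.channels = {}   # (source, destination) -> deque of in-flight messages

    def send(self, source, destination, message):
        self.channels.setdefault((source, destination), deque()).append(message)

    def receive(self, source, destination):
        queue = self.channels.get((source, destination))
        return queue.popleft() if queue else None

net = FifoNetwork()
net.send("cache1", "memory", "Wb(a,5)")    # hypothetical message names
net.send("cache1", "memory", "Purge(a)")
assert net.receive("cache1", "memory") == "Wb(a,5)"   # received in issue order
assert net.receive("cache1", "memory") == "Purge(a)"
```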

7.1 Integration of Micro-protocols

The CRF model allows a cache coherence protocol to use any cache or memory in the memory hierarchy as the rendezvous for processors that access shared memory locations, provided that it maintains the same observable behavior. The Base, WP, and Migratory protocols are distinctive in the actions performed while committing dirty cells and reconciling clean cells. Figure 7.1 summarizes the different treatment of commit, reconcile, and cache miss in the three micro-protocols.

Base: The most straightforward implementation simply uses the memory as the rendezvous. When a Commit instruction is executed for an address that is cached in the Dirty state, the data must be written back to the memory before the instruction can complete. A Reconcile instruction executed for an address cached in the Clean state requires that the data be purged from the cache before the instruction can complete.
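A minimal sketch of this Base behavior follows, using hypothetical state and function names rather than the actual Cachet rules: a Commit on a Dirty cell writes the data back to memory before completing, and a Reconcile on a Clean cell purges the cell so that a later load must fetch from memory.

```python
CLEAN, DIRTY = "Clean", "Dirty"

memory = {}   # address -> value
cache = {}    # address -> (state, value)

def commit(address):
    """Base: a Commit on a Dirty cell writes the data back to memory before completing."""
    if address in cache and cache[address][0] == DIRTY:
        _, value = cache[address]
        memory[address] = value
        cache[address] = (CLEAN, value)

def reconcile(address):
    """Base: a Reconcile on a Clean cell purges it so the next load fetches from memory."""
    if address in cache and cache[address][0] == CLEAN:
        del cache[address]

cache[0x10] = (DIRTY, 7)
commit(0x10)          # writes 7 back to memory, cell becomes Clean
reconcile(0x10)       # purges the clean copy
assert memory[0x10] == 7 and 0x10 not in cache
```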

