
message can be resumed eventually. To ensure liveness, the memory must send a purge request to the caches in which the address is cached to force the address to be purged.

We introduce two directive messages, CacheReq and PurgeReq, for this purpose. A cache can send a CacheReq message to the memory to request a data copy; the memory can send a PurgeReq message to a cache to force a clean copy to be purged or a dirty copy to be written back. In addition, we maintain some information about outstanding directive messages by splitting certain imperative cache and memory states. The Invalid state in the imperative rules corresponds to Invalid and CachePending in the integrated protocol, where CachePending implies that a CacheReq message has been sent to the memory. The T[dir, ] state in the imperative rules corresponds to T[dir, ] and C[dir] in the integrated protocol, where T[dir, ] implies that a PurgeReq message has been multicast to the cache sites in dir.
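
To make the state split concrete, here is a minimal sketch in Python (not the thesis's TRS notation); the class and field names are invented for illustration, and the elided second component of T[dir, ] is modeled as a tuple of suspended messages purely as an assumption.

from dataclasses import dataclass
from enum import Enum, auto

class CacheState(Enum):
    Invalid = auto()        # no copy and no outstanding request
    CachePending = auto()   # like Invalid, but a CacheReq has been sent to the memory
    Clean = auto()
    Dirty = auto()
    WbPending = auto()      # a writeback has been sent, awaiting the acknowledgment

@dataclass(frozen=True)
class MemC:
    """C[dir]: the address is cached at the sites recorded in the directory dir."""
    dir: frozenset

@dataclass(frozen=True)
class MemT:
    """T[dir, ]: a PurgeReq has been multicast to the cache sites in dir.
    The second component is elided in the text above; it is modeled here
    as a tuple of suspended messages only as an assumption."""
    dir: frozenset
    suspended: tuple = ()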

Figure 5.5 defines the rules of the WP protocol. The tabular description can be easily translated into formal TRS rules (cases that are not specified represent illegal or unreachable states). The processor rules are all mandatory rules. The cache engine and memory engine rules are categorized into mandatory and voluntary rules. Each mandatory rule is weakly fair, in that if it can be applied, it must be applied eventually or become impossible to apply at some time. A mandatory rule marked with `SF' requires strong fairness to ensure the liveness of the system. The notation `msg → dir' means sending the message msg to the destinations represented by the directory dir.
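
As a small illustration of the `msg → dir' notation, the following hedged sketch (the queue names are assumptions, not part of the protocol description) delivers one message to every site recorded in a directory.

def multicast(msg, dir_sites, out_queues):
    """Deliver msg to every cache site recorded in the directory dir_sites."""
    for site in dir_sites:
        out_queues[site].append(msg)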

A processor rule may retire or stall an instruction depending on the cache state of the address. When an instruction is retired, it is removed from the processor-to-memory buffer and the corresponding value or reply is sent to the memory-to-processor buffer. When an instruction is stalled, it remains in the processor-to-memory buffer for later processing. A stalled instruction does not necessarily block other instructions from being executed.
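
The retire/stall behavior can be pictured with the following sketch, assuming simple list-based buffers and a try_rule function that returns a reply when an instruction can retire; none of these names come from the thesis.

def process_proc_buffer(proc_to_mem, mem_to_proc, try_rule):
    remaining = []
    for inst in proc_to_mem:
        reply = try_rule(inst)          # a processor rule either retires or stalls inst
        if reply is not None:
            mem_to_proc.append(reply)   # retire: forward the value or reply
        else:
            remaining.append(inst)      # stall: keep the instruction for later
    proc_to_mem[:] = remaining          # stalled instructions need not block later ones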

An incoming message can be processed or stalled depending on the memory state of the address. When a message is processed, it is removed from the incoming queue. When a message is stalled, it remains in the incoming queue for later processing (only CacheReq messages can be stalled). A stalled message is buffered properly to avoid blocking the messages that follow it.

5.3.1 Mandatory Rules

The processor rules are similar to those in the Base protocol, except that a Reconcile instruction can complete even when the address is cached in the Clean state. On a cache miss, the cache sends a cache request to the memory and sets the cache state to CachePending until the requested data is received. On a Commit instruction, if the address is cached in the Dirty state, the cache writes the data back to the memory and sets the cache state to WbPending until a writeback acknowledgment is received. An instruction remains stalled when it cannot be completed in the current cache state.
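
Using the CacheState names from the sketch earlier in this section, the cache-side handling of a miss and of a Commit instruction might look as follows; the message tuples and helper methods are assumptions, not the thesis's rules.

def on_cache_miss(cache, addr, to_memory):
    if cache.state(addr) is CacheState.Invalid:
        to_memory.append(("CacheReq", addr))             # ask the memory for a copy
        cache.set_state(addr, CacheState.CachePending)   # wait until the data arrives

def on_commit(cache, addr, to_memory):
    st = cache.state(addr)
    if st is CacheState.Dirty:
        to_memory.append(("Wb", addr, cache.value(addr)))  # write the dirty data back
        cache.set_state(addr, CacheState.WbPending)        # wait for the acknowledgment
        return "stall"                                     # Commit stays stalled meanwhile
    if st in (CacheState.Clean, CacheState.Invalid):
        return "retire"                                    # Commit completes immediately
    return "stall"                                         # pending states: cannot complete yet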

When the memory receives a CacheReq message, it processes the message according to the memory state. If the address is already cached in the requesting cache, the cache request is discarded.
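
For the case just described, a minimal sketch of the memory's check (reusing the MemC state from the earlier sketch; the handling of the other memory states is deliberately left out):

def on_cache_req(mem_state, requesting_site):
    if isinstance(mem_state, MemC) and requesting_site in mem_state.dir:
        return mem_state   # the site already holds a copy: the request is discarded
    raise NotImplementedError("the remaining cases depend on the memory state")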

