ARM Cortex-A15 MPCore Processor Technical Reference Manual
Level 1 Memory System

6.3 L1 instruction memory system

The instruction cache can source up to 128 bits per fetch depending on alignment. A single fetch can span a 128-bit aligned region or cache line, but cannot span a page boundary.

Sequential cache read operations reduce the number of full cache reads. This has the benefit of reducing power consumption. If a cache read is sequential to the previous cache read, and the read is within the same cache line, only the data RAM way that was previously read is accessed.

The L1 instruction cache appears to software as a physically tagged, physically indexed array. Therefore, the instruction cache is only required to be flushed when writing new data to an instruction address.

This section describes the L1 instruction memory system in:
• Instruction cache disabled behavior.
• Instruction cache speculative memory accesses.
• Fill buffers on page 6-5.
• Non-cacheable fetching on page 6-5.
• Parity error handling on page 6-5.
• Cache line length and heterogeneous systems on page 6-5.

6.3.1 Instruction cache disabled behavior

The SCTLR.I bit, see System Control Register on page 4-54, enables or disables the L1 instruction cache. If the I bit is disabled, fetches cannot access any of the instruction cache arrays. The exception to this rule is the CP15 instruction cache operations: if the instruction cache is disabled, the instruction cache maintenance operations can still execute normally.

6.3.2 Instruction cache speculative memory accesses

An instruction remains in the pipeline between the fetch and the execute stages. Because there can be several unresolved branches in the pipeline, instruction fetches are speculative, meaning there is no guarantee that they are executed. A branch or exceptional instruction in the code stream can cause a pipeline flush, discarding the currently fetched instructions.

Because of the aggressive prefetching behavior, you must not place read-sensitive devices in the same page as code. Pages with Device or Strongly-ordered memory type attributes are treated as Non-Cacheable Normal Memory. You must mark pages that contain read-sensitive devices with the TLB XN (Execute Never) attribute bit.

To avoid speculative fetches to read-sensitive devices when address translation is disabled, these devices and the code that is fetched must be separated in the physical memory map. See the ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition for more information.

To avoid speculative fetches to potential non-code regions, the static predictor is disabled and branches are forced to resolve in order when address translation is disabled.

Unnecessary speculative fetches to very slow memory can slow down the system because they consume memory system resources and delay completion of subsequent DSB instructions. You can avoid this by setting the XN attribute bit for any pages that do not contain code and are mapped to slow devices or memory.
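As an illustration of the guidance above, the following is a minimal sketch, not taken from this manual, of how a first-level Section entry in the ARMv7-A short-descriptor translation table format might be built with the XN bit set for a 1MB Device region that must never be fetched from. The base address, access permission choice, and the use of a flat 1MB Section mapping are assumptions for the example; refer to the ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition for the authoritative descriptor format.

#include <stdint.h>

/* Short-descriptor first-level Section entry fields (ARMv7-A). */
#define SECTION_TYPE   (1u << 1)   /* bits[1:0] = 0b10: Section */
#define SECTION_B      (1u << 2)   /* B bit */
#define SECTION_C      (1u << 3)   /* C bit */
#define SECTION_XN     (1u << 4)   /* XN: Execute Never */
#define SECTION_AP0    (1u << 10)  /* AP[2:0] = 0b001: PL1 read/write */

/* Build a 1MB Section descriptor for a Device region (TEX=0b000, C=0, B=1)
   marked Execute Never so that speculative instruction fetches cannot reach it. */
static inline uint32_t device_section_xn(uint32_t phys_base_1mb_aligned)
{
    return (phys_base_1mb_aligned & 0xFFF00000u)
           | SECTION_TYPE
           | SECTION_B        /* Shareable Device memory type */
           | SECTION_XN       /* never fetch instructions from this region */
           | SECTION_AP0;     /* privileged read/write data access only */
}

/* Hypothetical usage: map a peripheral at 0x1C000000 in a first-level table.
   l1_table[0x1C000000 >> 20] = device_section_xn(0x1C000000); */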

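Relating back to Instruction cache disabled behavior in 6.3.1, the sketch below shows one common way to use the fact that CP15 instruction cache maintenance operations still execute while SCTLR.I is 0: invalidate the instruction cache before setting the I bit. This is a minimal example, not taken from this manual; it assumes GCC-style inline assembly, execution at PL1, and the generic ARMv7-A CP15 encodings for SCTLR and ICIALLU.

#include <stdint.h>

#define SCTLR_I (1u << 12)   /* SCTLR.I: L1 instruction cache enable */

static inline uint32_t read_sctlr(void)
{
    uint32_t v;
    __asm__ volatile("mrc p15, 0, %0, c1, c0, 0" : "=r"(v));   /* read SCTLR */
    return v;
}

static inline void write_sctlr(uint32_t v)
{
    __asm__ volatile("mcr p15, 0, %0, c1, c0, 0" : : "r"(v) : "memory");
}

/* ICIALLU: invalidate all instruction caches to the Point of Unification.
   This maintenance operation executes normally even while SCTLR.I is 0. */
static inline void icache_invalidate_all(void)
{
    __asm__ volatile("mcr p15, 0, %0, c7, c5, 0" : : "r"(0) : "memory");
}

void enable_l1_icache(void)
{
    icache_invalidate_all();                       /* safe with the cache disabled */
    __asm__ volatile("dsb\n\tisb" : : : "memory"); /* complete maintenance, flush pipeline */
    write_sctlr(read_sctlr() | SCTLR_I);           /* set SCTLR.I */
    __asm__ volatile("isb" : : : "memory");        /* ensure fetches see the new setting */
}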