Embedded Software for SoC - Grupo de Mecatrônica EESC/USP


SDRAM-Energy-Aware Memory Allocation

In [11], techniques are presented to reduce the static energy consumption of SDRAMs in embedded systems. The strategy of that paper consists of clustering data structures with a large temporal affinity in the same memory bank. As a consequence, the idle periods of the banks are grouped, creating more opportunities to transition more banks into a deeper low-power mode for a longer time. Several researchers (see e.g. [7], [8] and [11]) present a hardware memory controller to exploit the SDRAM energy modes. The authors of [15] present a data migration strategy: it detects at run-time which data structures are accessed together, and then moves arrays with a high temporal affinity to the same banks. Research on techniques to reduce the static energy consumption of SDRAMs is very relevant and promising. Complementary to these static energy reduction techniques, we seek to reduce the dynamic energy contribution.
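The bank-clustering idea of [11] can be illustrated with a small sketch: data structures accessed in the same time windows ("temporal affinity") are greedily placed in the same SDRAM bank, so the remaining banks idle longer and can stay in a deep low-power mode. The affinity metric, the greedy placement, and all names below are illustrative assumptions, not the exact algorithm of [11].

```python
def temporal_affinity(a_windows, b_windows):
    """Fraction of time windows in which both structures are accessed
    (Jaccard overlap of their access-window sets)."""
    overlap = len(a_windows & b_windows)
    total = len(a_windows | b_windows)
    return overlap / total if total else 0.0

def cluster_into_banks(access_windows, n_banks):
    """Greedily assign each data structure to the bank whose current
    members it shares the most access windows with; a small penalty per
    occupant steers unrelated structures toward empty banks."""
    # Place the most frequently accessed structures first.
    names = sorted(access_windows, key=lambda n: -len(access_windows[n]))
    banks = [[] for _ in range(n_banks)]
    for name in names:
        best = max(
            range(n_banks),
            key=lambda b: sum(temporal_affinity(access_windows[name],
                                                access_windows[m])
                              for m in banks[b]) - 0.01 * len(banks[b]),
        )
        banks[best].append(name)
    return banks
```

With two pairs of co-accessed arrays, e.g. `{"A": {0,1,2}, "B": {0,1,2}, "C": {5,6}, "D": {5,6,7}}` and two banks, the sketch groups A with B and C with D, so each bank's idle periods coincide.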

The most scalable and fastest multi-processor virtual memory managers (e.g. [9] and [16]) combine private heaps with a shared pool to avoid memory fragmentation. They rely on hardware-based memory management units to map virtual memory pages to the banks. These hardware units are unaware of the underlying memory architecture. Therefore, in the best case, memory managers distribute the data across all memory banks (random allocation); in the worst case, all data is concentrated in a single memory bank. We compare our approach to both extremes in Section 6.
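The private-heaps-plus-shared-pool scheme can be sketched as follows: each task allocates from its own heap to avoid contention, and falls back to the shared pool when its heap is exhausted, which limits fragmentation. The class, its block-counting model, and all sizes are illustrative assumptions, not the design of [9] or [16].

```python
class SharedPoolAllocator:
    """Toy model: per-task private heaps backed by one shared pool,
    both tracked as counts of free fixed-size blocks."""

    def __init__(self, n_tasks, heap_blocks, pool_blocks):
        self.private = [heap_blocks] * n_tasks  # free blocks per task heap
        self.pool = pool_blocks                 # free blocks in shared pool

    def alloc(self, task, blocks):
        # Prefer the task's private heap; fall back to the shared pool.
        if self.private[task] >= blocks:
            self.private[task] -= blocks
            return "private"
        if self.pool >= blocks:
            self.pool -= blocks
            return "pool"
        return None  # out of memory
```

Note that nothing in this interface mentions banks: exactly as the text observes, such an allocator is oblivious to where the hardware MMU maps its pages, so its bank placement ranges from random spreading to single-bank concentration.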

5. BANK-AWARE ALLOCATION ALGORITHMS

We first present a best effort memory allocator (BE). This allocator can map data of different tasks to the same bank in order to minimize the number of page-misses. Hence, accesses from different tasks can interleave at run-time, causing unpredictable page-misses. At design-time, we do not know exactly how much the page-misses will increase the execution time of the tasks. As a consequence, the best effort allocator cannot be used when hard real-time constraints need to be guaranteed and little slack is available. The goal of the guaranteed performance allocator (GP) is to minimize the number of page-misses while still guaranteeing the real-time constraints.

5.1. Best effort memory allocator

The BE allocator (see below) consists of a design-time and a run-time phase. The design-time phase bounds the exploration space of the run-time manager, reducing its time and energy penalty.
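The two-phase structure can be sketched as follows: a design-time step prunes the assignment space to a few candidate bank mappings per task, and a lightweight run-time step picks, among those candidates, the mapping that currently looks cheapest. The page-miss cost model and both function interfaces are illustrative assumptions, not the paper's actual formulation.

```python
from itertools import product

def design_time_candidates(task_arrays, n_banks, keep=3):
    """Design-time phase: enumerate bank assignments for one task's
    arrays and keep only the 'keep' cheapest, bounding the search space
    the run-time manager must explore."""
    scored = []
    for assign in product(range(n_banks), repeat=len(task_arrays)):
        # Toy cost: arrays sharing a bank interleave and cause page-misses.
        cost = sum(assign.count(b) - 1 for b in set(assign))
        scored.append((cost, assign))
    scored.sort()
    return [assign for _, assign in scored[:keep]]

def run_time_select(candidates, busy_banks):
    """Run-time phase: cheap scan over the pruned candidates, picking
    the one that touches the fewest banks already busy with other tasks."""
    return min(candidates,
               key=lambda a: sum(1 for b in a if b in busy_banks))
```

Because the run-time step only scans the pruned candidate list instead of the full assignment space, its time and energy penalty stays small, which is exactly the role the design-time phase plays in the text above.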
