Lecture Notes in Computer Science 4917


SPECfp benchmarks. However, the performance gain achievable by the loop window is seriously limited by the size of the back-end structures in current processor designs. If an unbounded back-end with unlimited structures is available, the loop window achieves better performance than a traditional front-end design, even when that front-end is wider. The speedup achieved by some benchmarks, such as 171.swim, 172.mgrid, and 178.galgel, is over 40%, which suggests that the loop window could be a worthwhile contribution to the design of future large instruction window processors [5].

These results show that even the simple LPA approach presented in this paper can improve performance and, especially, reduce energy consumption. Furthermore, this is our first step towards a comprehensive LPA design that fully exploits the semantic information of loop structures inside the processor. We plan to analyze loop prediction mechanisms and implement them in conjunction with the loop window. In addition, if loop detection is guided by the branch predictor, the loop window can be managed more efficiently, reducing the number of insertions required and optimizing energy consumption.
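As an informal illustration of the kind of branch-guided loop detection discussed above, the following C sketch flags a taken backward branch that repeatedly jumps to the same target as a loop candidate. The table size, confidence threshold, and function names are our own illustrative assumptions, not the mechanism described in the paper.

    #include <stdint.h>
    #include <stdbool.h>

    /* Minimal sketch of a loop-detection heuristic: a taken backward
     * branch that keeps jumping to the same target is treated as a
     * candidate loop. All parameters are illustrative assumptions. */
    #define LOOP_TABLE_SIZE   64
    #define CAPTURE_THRESHOLD  4   /* iterations before triggering capture */

    typedef struct {
        uint64_t branch_pc;   /* address of the backward branch           */
        uint64_t target_pc;   /* loop head (branch target)                */
        unsigned count;       /* consecutive taken occurrences observed   */
    } loop_entry_t;

    static loop_entry_t table[LOOP_TABLE_SIZE];

    /* Called on every resolved conditional branch; returns true when the
     * branch looks like a stable loop and capture could be triggered. */
    bool observe_branch(uint64_t pc, uint64_t target, bool taken)
    {
        if (!taken || target >= pc)        /* only taken backward branches */
            return false;

        loop_entry_t *e = &table[(pc >> 2) % LOOP_TABLE_SIZE];
        if (e->branch_pc != pc || e->target_pc != target) {
            e->branch_pc = pc;             /* new candidate: reset counter */
            e->target_pc = target;
            e->count = 0;
        }
        return ++e->count >= CAPTURE_THRESHOLD;
    }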

The coverage of our proposal is another interesting topic for research. At present we only capture simple dynamic loops with a single execution path, but we will extend our renaming scheme to enable capturing more complex loops that include hammock structures, that is, several execution paths (a small illustrative example is sketched after this paragraph). Increasing coverage will also open the possibility of applying dynamic optimization techniques to the instructions stored in the loop window, especially those optimizations focused on loops, further improving processor performance. In general, we consider LPA a worthwhile contribution for the computer architecture community, since our proposal has great potential to improve processor performance and reduce energy consumption.
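The following minimal C example (ours, not taken from the paper) shows what is meant by a hammock structure: the if/else inside the loop body creates two execution paths that re-converge before the loop's backward branch, which is exactly the case a single-path capture scheme cannot hold.

    /* Illustrative only: a loop whose body contains a hammock, i.e. an
     * if/else that splits control flow into two paths which re-join
     * before the backward branch closing the loop. */
    int sum_magnitudes(const int *a, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            if (a[i] < 0)        /* hammock: control splits here...        */
                sum -= a[i];
            else                 /* ...and re-joins after the else branch. */
                sum += a[i];
        }
        return sum;
    }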

Acknowledgements

This work has been supported by the Ministry of Science and Technology of Spain under contract TIN2007-60625, the HiPEAC European Network of Excellence, and the Barcelona Supercomputing Center. We would like to thank the anonymous referees for their useful reviews, which made it possible to improve our paper. We would also like to thank Adrián Cristal, Daniel Ortega, Francisco Cazorla, and Marco Antonio Ramírez for their comments and support.

References

1. de Alba, M.R., Kaeli, D.R.: Runtime predictability of loops. In: Proceedings of the 4th Workshop on Workload Characterization (2001)
2. Badulescu, A., Veidenbaum, A.: Energy efficient instruction cache for wide-issue processors. In: Proceedings of the International Workshop on Innovative Architecture (2001)
