
Figure 7.22: Overall Performance by Number of Multicode Substitutions Performed for Transfer Reduction Scoring with Multicodes up to Length 25

is actually executed when the application is run. In this case, some of the multicodes being introduced will never be executed by some of the benchmarks even though they are of value to others.

The costs associated with new multicodes come in two forms: performance costs and software engineering costs. Introducing new multicodes has a performance cost because their introduction into the main execution loop of the interpreter increases the size of the loop, both with respect to the number of cases that it contains and the amount of memory it occupies.

Increasing the number of cases in the loop can slow the process of branching to the codelet that implements the bytecode about to execute. Such a slowdown is possible when the compiler generates the switch statement using a binary search strategy, because more comparisons may be needed before the correct target is identified. A slowdown can also occur when the switch statement is implemented using a lookup table: the larger table makes it more likely that a requested entry will not reside within the data cache.
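
To make the dispatch structure concrete, the sketch below shows a minimal switch-based interpreter loop written in Java for illustration. The ILOAD, IADD, and IRETURN constants follow the standard JVM opcode numbering, but the ILOAD_IADD multicode, its value 0xE0, and the loop itself are assumptions made for this sketch rather than the interpreter examined in this work. Each multicode added to the instruction set contributes one more case to the switch.

// A minimal sketch of a switch-based interpreter dispatch loop, written in
// Java for illustration only; it is not the interpreter studied in this work.
// ILOAD, IADD and IRETURN use the standard JVM opcode values, while the
// ILOAD_IADD multicode and its value 0xE0 are hypothetical.
public final class DispatchSketch {

    static final int ILOAD      = 0x15;
    static final int IADD       = 0x60;
    static final int IRETURN    = 0xAC;
    static final int ILOAD_IADD = 0xE0;  // hypothetical multicode: ILOAD followed by IADD

    static int interpret(int[] code, int[] locals) {
        int[] stack = new int[16];
        int sp = 0;
        int pc = 0;
        while (true) {
            // Every multicode added to the instruction set contributes
            // another case to this switch, enlarging the main loop.
            switch (code[pc++]) {
                case ILOAD:
                    stack[sp++] = locals[code[pc++]];
                    break;
                case IADD:
                    sp--;
                    stack[sp - 1] += stack[sp];
                    break;
                case ILOAD_IADD:
                    // Fused codelet: one dispatch replaces two.
                    stack[sp - 1] += locals[code[pc++]];
                    break;
                case IRETURN:
                    return stack[--sp];
                default:
                    throw new IllegalStateException("unknown opcode " + code[pc - 1]);
            }
        }
    }

    public static void main(String[] args) {
        // Computes locals[0] + locals[1], using the fused multicode once.
        int[] code = { ILOAD, 0, ILOAD_IADD, 1, IRETURN };
        System.out.println(interpret(code, new int[] { 3, 4 }));  // prints 7
    }
}

Whether a compiler lowers such a switch to a lookup table or to a chain of comparisons depends on the density of the opcode values and on the compiler itself; in either case, every additional multicode enlarges the structure being searched or indexed.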

Increasing the total amount of memory required to hold the interpreter’s main loop can negatively impact performance regardless of how the interpreter is implemented. The amount of space available in the instruction cache is limited. Increasing the size of the interpreter’s main loop increases pressure on the instruction cache, because less of the loop can remain resident in the cache at once, making instruction cache misses more likely.
