OPTIMIZING THE JAVA VIRTUAL MACHINE INSTRUCTION SET BY ...

fstore_0, fstore_1, and fstore_2 are never executed by any of the benchmarks that were profiled. An additional 4 bytecodes each represent fewer than one in a million bytecodes executed. In comparison, if each of the bytecodes defined by the Java Virtual Machine Specification were executed with equal frequency, each would be expected to represent one out of every 200 bytecodes. Consequently, unless the performance difference between an infrequently executed specialized bytecode and its equivalent general-purpose form is very large, no change in application performance will be observed.
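The specialized/general relationship above can be made concrete with a small sketch. This is not code from the thesis; it is a minimal illustration assuming the opcode values given in the JVM Specification (fstore = 0x38 with a one-byte local-variable index, fstore_0 through fstore_3 = 0x43 through 0x46 with the index implicit in the opcode). For simplicity it assumes a stream containing only operand-free instructions; real bytecode mixes in multi-byte instructions that a genuine rewriter would have to decode properly.

```java
import java.util.Arrays;

public class Despecialize {
    static final int FSTORE = 0x38;   // general form: opcode + one index byte
    static final int FSTORE_0 = 0x43; // fstore_0 .. fstore_3 are 0x43 .. 0x46

    /** Rewrite each fstore_n into the equivalent general form "fstore n". */
    static byte[] despecialize(byte[] code) {
        // Worst case: every instruction is specialized and gains an operand byte.
        byte[] out = new byte[code.length * 2];
        int j = 0;
        for (byte b : code) {
            int op = b & 0xFF;
            if (op >= FSTORE_0 && op <= FSTORE_0 + 3) {
                out[j++] = (byte) FSTORE;          // general-purpose opcode
                out[j++] = (byte) (op - FSTORE_0); // explicit index operand
            } else {
                out[j++] = b; // leave other (assumed operand-free) opcodes alone
            }
        }
        return Arrays.copyOf(out, j);
    }

    public static void main(String[] args) {
        // fstore_2 followed by fstore_0: two one-byte instructions.
        byte[] code = { (byte) 0x45, (byte) 0x43 };
        byte[] general = despecialize(code);
        // Each instruction grows by one operand byte: 2 bytes -> 4 bytes.
        System.out.println(general.length); // 4
    }
}
```

The one-byte-per-instruction growth shown here is the class file size increase the next section refers to.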

While the JIT compiler is able to minimize the impact despecialization has on application performance, despecialization can still slow application performance in the following ways:

Class Loading / Verification: A minor difference in performance may exist for those operations that occur before and during the JIT compilation process. For example, since despecialization increases class file size, it may require marginally more time to load the class file from disk. Verification may also take additional time since it is performed before JIT compilation. However, each of these tasks is performed only once during the lifetime of the application, accounting for a tiny fraction of most applications' runtime. Most of the application runtime is spent after these tasks have completed, executing code generated by the JIT compiler.

JIT Compilation: While despecialization does not impact the quality of the code generated by JIT compilation in many cases, the process of generating that code can be slowed slightly, because the additional argument bytes introduced by despecialization must be decoded during compilation. However, because JIT compilation occurs only once, this minor amount of extra work has almost no impact on the overall performance of the application.
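The decoding cost described above, which the interpreter pays on every execution rather than once, can be sketched with a toy switch-based dispatch loop. This is a hypothetical illustration, not HotSpot's actual interpreter: the value to store is passed in as a parameter rather than popped from an operand stack, and only two opcodes are handled. It shows the asymmetry that matters here: fstore_2 needs no operand fetch, while the general fstore must read one extra index byte before performing the same store.

```java
public class MiniInterp {
    static final int FSTORE = 0x38;   // general form: opcode + index byte
    static final int FSTORE_2 = 0x45; // specialized form: index implicit

    /**
     * Execute a tiny bytecode stream and return the local variable array.
     * In a real JVM the stored value would come from the operand stack;
     * here it is supplied as {@code tos} to keep the sketch small.
     */
    static float[] run(byte[] code, float tos) {
        float[] locals = new float[4];
        int pc = 0;
        while (pc < code.length) {
            int op = code[pc++] & 0xFF;
            switch (op) {
                case FSTORE_2:                     // no operand fetch needed
                    locals[2] = tos;
                    break;
                case FSTORE: {                     // one extra fetch per execution
                    int index = code[pc++] & 0xFF; // decode the index operand
                    locals[index] = tos;
                    break;
                }
                default:
                    throw new IllegalStateException("unhandled opcode " + op);
            }
        }
        return locals;
    }

    public static void main(String[] args) {
        // Both streams store 1.5f into local 2; the general form is one byte
        // longer and costs one additional fetch every time it is interpreted.
        float[] a = run(new byte[]{(byte) FSTORE_2}, 1.5f);
        float[] b = run(new byte[]{(byte) FSTORE, 2}, 1.5f);
        System.out.println(a[2] == 1.5f && b[2] == 1.5f);
    }
}
```

Because interpreted code re-executes this loop on every instruction, the extra fetch is paid repeatedly, which is why interpreter performance is discussed separately below.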

Interpreter Performance: Most of the Sun virtual machines tested are classified as mixed-mode, making use of both an interpreter and a JIT compiler. The interpreter is used to execute the code initially. Once a method is determined to execute frequently, it is JIT compiled. Using this strategy ensures that time is not spent compiling code that is only executed a small number of times. Figure 5.2 shows the results achieved
