
4.2.6 Performance Summary

Comparing the results of the three benchmarks tested in this study reveals that the Sun virtual machine was impacted the least by despecialization. The Kaffe virtual machine showed the largest performance loss as a result of despecialization, leaving IBM’s research virtual machine in the middle position. When the average performance across all of the benchmarks was considered, each of the virtual machines showed a performance loss of less than 2.4 percent. Computing the average performance across all of the virtual machines and benchmarks revealed an overall increase in runtime of 2.0 percent. These changes in performance are minor. Using 67 bytecodes (one third of those defined by the Java Virtual Machine Specification) to achieve this small difference is wasteful when those bytecodes could be put to other uses.

The largest single change in benchmark performance occurred when _227_mtrt was executed using IBM’s RVM under the complete despecialization condition. This performance loss of almost 12.7 percent was considerably larger than the largest losses observed for the other two virtual machines. Note that this change in performance is the direct result of performing branch despecialization. The introduction of a new optimization into the IBM virtual machine to handle the branching patterns that occur in the despecialized bytecode stream would likely result in a considerably smaller maximum performance loss. Consequently, examining the maximum performance losses leads to the same conclusion reached previously.

4.3 Despecialization and Class File Size

The use of specialized bytecodes impacts more than application runtime. Another result that must be considered is the impact despecialization has on class file size. Each of the despecializations discussed previously replaces a specialized bytecode with one or more general purpose bytecodes. In all cases except the despecialization of bipush and sipush into ldc or ldc_w, the general purpose bytecode or sequence is at least one byte larger than the original specialized bytecode. This may occur because an operand value is now required to specify the position being accessed, because the same operand value has been widened to occupy additional bytes, or because additional opcodes reside in the code stream.

It is possible for some occurrences of bipush to be despecialized without increasing class file size. This can occur when the value being generated already resides in the constant pool.
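
To make these size effects concrete, the following sketch (not drawn from the study's own listings) annotates a small Java method with the bytecode a typical compiler emits and with one plausible despecialized form. The instruction lengths follow the encodings given in the Java Virtual Machine Specification; the choice of ldc as the replacement for bipush, and of iload with an explicit operand as the replacement for iload_0, are assumptions consistent with the despecializations described above.

    // Hypothetical illustration of how despecialization affects code size.
    // Instruction lengths follow the JVM Specification; the despecialized
    // forms shown are assumed, not taken from the study.
    public class DespecializationSize {
        static int scale(int x) {
            // Specialized bytecode (typical compiler output):
            //   iload_0        1 byte
            //   bipush 100     2 bytes (opcode + one-byte operand)
            //   imul           1 byte
            //   ireturn        1 byte    -> 5 bytes in the code array
            //
            // Despecialized form (assumed):
            //   iload 0        2 bytes (operand now names the local variable)
            //   ldc #k         2 bytes (same size as bipush, but may require a
            //                           new constant pool entry for 100)
            //   imul           1 byte
            //   ireturn        1 byte    -> 6 bytes, plus possible pool growth
            return x * 100;
        }

        public static void main(String[] args) {
            System.out.println(scale(3)); // prints 300
        }
    }

In this sketch the code array grows by a single byte, and the ldc replacement for bipush adds no code bytes at all; whether the class file grows further depends on whether the constant 100 already resides in the constant pool, mirroring the observation above.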
