

differences in background processes, memory management, process scheduling and disk performance to be ignored.

Category 2: All benchmarks that showed a general slowing trend (bars in the histogram become higher) as the number of despecializations performed increased were placed in this category. This includes both benchmarks that showed a decrease in performance as each set of despecializations was applied and benchmarks that only showed a performance loss when the later despecializations were performed.

Category 3: Any benchmark that did not meet the criteria for either Category 1 or Category 2 was placed in this category. This category was divided into four sub-categories, as described in Section 5.2.4.
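To make this decision procedure concrete, the sketch below (a hypothetical Java illustration, not code from the thesis) classifies a benchmark from its mean run time at each despecialization level. The NOISE_FRACTION threshold and the exact trend test are assumptions; the thesis does not state a numeric cutoff.

    import java.util.List;

    enum Category { LITTLE_DIFFERENCE, SLOWING_TREND, OTHER }

    class BenchmarkClassifier {
        // Assumed noise tolerance: run-time changes within 1% of the
        // unmodified baseline are treated as "little difference".
        private static final double NOISE_FRACTION = 0.01;

        // meanTimes is ordered from the unmodified VM (index 0) to the
        // fully despecialized VM.
        static Category classify(List<Double> meanTimes) {
            double baseline = meanTimes.get(0);
            double tol = NOISE_FRACTION * baseline;
            boolean allNearBaseline = true;
            boolean neverFaster = true;
            for (int i = 1; i < meanTimes.size(); i++) {
                double diff = meanTimes.get(i) - baseline;
                if (Math.abs(diff) > tol) allNearBaseline = false;
                if (diff < -tol) neverFaster = false;
            }
            double finalDiff = meanTimes.get(meanTimes.size() - 1) - baseline;
            if (allNearBaseline) return Category.LITTLE_DIFFERENCE;  // Category 1
            // A net slowdown that persists at full despecialization, with no
            // meaningful speedup at any level, covers both steady slowing and
            // slowing that appears only for the later despecializations.
            if (neverFaster && finalDiff > tol) return Category.SLOWING_TREND;  // Category 2
            return Category.OTHER;  // Category 3
        }
    }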

Table 5.3 presents the performance results observed for each virtual machine tested. The number of benchmarks residing in each of the categories described previously is reported. Summary results are included at the right side of the table, including a count of the total number of benchmarks in each category across all virtual machines tested. The final column expresses these values as a percentage of all of the benchmarks considered during testing. As the bottom row in the table indicates, it was necessary to omit 6 benchmarks from consideration. These included 5 benchmarks tested using IBM’s Jikes RVM on the Pentium 4 (JGF Series, JGF Euler, JGF MolDyn, JGF RayTracer and _227_mtrt) and 1 benchmark tested using IBM’s Jikes RVM on the Pentium III (JGF MonteCarlo). In five cases, the benchmarks were omitted because error messages generated during their execution indicated that the result of the benchmark’s computation was not valid. The final benchmark was excluded because it showed widely varying run times even in its original, unmodified form. As a result, timing results could not be used to measure the impact despecialization had on the benchmark’s run time, because the variation in run times within each test condition was far larger than the anticipated change in performance across test conditions.
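This exclusion criterion can be stated quantitatively. The following sketch (again a hypothetical Java illustration, not code from the thesis) flags a benchmark as unmeasurable when the run-to-run spread of its timings under a single test condition exceeds the performance change anticipated across conditions; the helper methods and the anticipated-change parameter are assumptions made for illustration.

    import java.util.List;

    class StabilityCheck {
        static double mean(List<Double> xs) {
            double s = 0.0;
            for (double x : xs) s += x;
            return s / xs.size();
        }

        // Sample standard deviation; requires at least two measurements.
        static double stdDev(List<Double> xs) {
            double m = mean(xs);
            double s = 0.0;
            for (double x : xs) s += (x - m) * (x - m);
            return Math.sqrt(s / (xs.size() - 1));
        }

        // True when the spread within one test condition exceeds the
        // performance change anticipated across conditions, as was
        // observed for JGF MonteCarlo on the Pentium III.
        static boolean tooNoisyToMeasure(List<Double> runsUnderOneCondition,
                                         double anticipatedChangeFraction) {
            double m = mean(runsUnderOneCondition);
            return stdDev(runsUnderOneCondition) > anticipatedChangeFraction * m;
        }
    }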

5.2.2 Category 1: Benchmarks that Show Little Difference in Performance

Examining the data presented in Figures 5.2 through 5.9 reveals that the majority of the benchmarks tested during this study fall into this category, showing little
