
8.4. Ad hoc vs. OMOP Performance

Table 8.1.: Experiments for Evaluation of Ad Hoc vs. OMOP-based Performance

                 Ad Hoc vs. OMOP-based
AmbientTalkST    CogVM vs. CogVM with AST-OMOP
                 RoarVM (opt) vs. RoarVM+OMOP (opt)
LRSTM            CogVM vs. CogVM with AST-OMOP
                 RoarVM (opt) vs. RoarVM+OMOP (opt)

Table 8.1 lists the experiments with which the ad hoc implementations and the VM-based implementation (RoarVM+OMOP) can be compared. Ad hoc implementations are executed on top of the RoarVM (opt) without OMOP support to reflect the most plausible use case. Since VM support for the OMOP introduces an inherent performance overhead (cf. Sec. 8.5.2), it would put the ad hoc implementations at a disadvantage if they were to execute on top of RoarVM+OMOP (opt).

The measurements show only minimal variation, so the evaluation uses a simple bar chart with error bars indicating the 95% confidence interval. Since measurement errors are insignificant, the error bars are barely visible and the resulting Fig. 8.4 is more readable than a beanplot would have been. Fig. 8.4 only shows microbenchmarks. The runtime has been normalized to the mean of the ad hoc implementation’s runtime measured for a specific benchmark. The y-axis shows this runtime ratio on a logarithmic scale; thus, the ideal result is at the 1.0 line or below, which would indicate a speedup. All values above 1.0 indicate a slowdown, i. e., the benchmark running on top of the OMOP-based implementation took more time to complete than the benchmark on top of the ad hoc implementation.
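To make the normalization concrete, the following is a minimal Python sketch, not part of the thesis’s actual tooling: it assumes each list holds repeated wall-clock measurements (in milliseconds) for one benchmark, and the function name and sample values are purely hypothetical.

import math
import statistics

def normalized_ratios(adhoc_runtimes, omop_runtimes):
    """Normalize OMOP-based runtimes to the mean ad hoc runtime of one benchmark.

    Returns the mean runtime ratio and a 95% confidence interval for that mean
    (normal approximation); ratios above 1.0 indicate a slowdown.
    """
    baseline = statistics.mean(adhoc_runtimes)       # mean ad hoc runtime
    ratios = [t / baseline for t in omop_runtimes]   # normalized runtimes
    mean = statistics.mean(ratios)
    sem = statistics.stdev(ratios) / math.sqrt(len(ratios))
    half_width = 1.96 * sem                           # 95% CI half-width
    return mean, (mean - half_width, mean + half_width)

# Hypothetical measurements (ms) for one microbenchmark:
adhoc = [101.0, 99.5, 100.3, 100.8, 99.9]
omop = [118.2, 117.5, 119.0, 118.8, 117.9]
mean_ratio, ci = normalized_ratios(adhoc, omop)
print(f"ratio {mean_ratio:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")

Plotting such ratios on a logarithmic y-axis has the property that equal distances above and below the 1.0 line correspond to equal relative slowdowns and speedups, which is why the ideal result sits at or below that line.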

Results for Microbenchmarks The first conclusion from Fig. 8.4 is that most LRSTM benchmarks on RoarVM+OMOP show performance on par with or better than the ad hoc implementation (gray bars). Only Array Access (STM) (28%), Class Var Binding (STM) (14%), and InstVar Access (STM) (18%) show slowdowns. These slowdowns can be explained by the general overhead of about 17% for OMOP support in the VM (cf. Sec. 8.5.2).
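To relate the reported percentages to the plotted ratios, a small illustrative sketch follows; the ratio values merely restate the slowdowns quoted above and are not additional measurements.

def slowdown_percent(ratio):
    """Convert a normalized runtime ratio into a percentage slowdown.

    A ratio of 1.0 means parity with the ad hoc baseline; 1.28 means the
    OMOP-based run took 28% longer.
    """
    return (ratio - 1.0) * 100.0

for name, ratio in [("Array Access (STM)", 1.28),
                    ("Class Var Binding (STM)", 1.14),
                    ("InstVar Access (STM)", 1.18)]:
    print(f"{name}: {slowdown_percent(ratio):.0f}% slowdown")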

The AmbientTalkST benchmarks, on the other hand, experience slowdowns throughout. These benchmarks indicate that the differences in how message sends and primitives are handled in the ad hoc and the RoarVM+OMOP implementations have a significant impact on performance.

The results for the AST-OMOP implementation show more significant slowdowns. The main reason for the slowdown is that the implementation has to reify
