To conclude, direct performance gains depend to a large extent on the application and the concurrency model implemented on top of the OMOP. Thus, the modest overall performance gain might not justify the inherent overhead for other applications. However, the original motivation was to improve the debugging experience during the VM implementation by avoiding unnecessary reification. For this purpose, this optimization still provides the desired benefits so that debugging can focus on the relevant reified operations.

8.6. Absolute Performance

The final question for this performance evaluation is which of the implementation strategies yields better performance in practice. Sec. 8.4 showed that direct VM support brings the OMOP-based implementation of LRSTM on par with its ad hoc implementation on top of the RoarVM. However, Sec. 8.3 showed that the CogVM is about 11.0x faster than the RoarVM (opt), but that the AST-transformation-based implementation (AST-OMOP) brings a performance penalty of 254.4x for enforced execution, which translates to a slowdown of 2.8x compared to the ad hoc implementations. This experiment compares the absolute performance of the benchmarks running on the CogVM with AST-OMOP against their performance on top of the RoarVM+OMOP (opt).

Fig. 8.11 shows the results of the AmbientTalkST and LRSTM benchmarks executed with identical parameters on the CogVM with AST-OMOP and on the RoarVM+OMOP (opt). The benchmark results have been normalized to the mean of the corresponding result for the RoarVM+OMOP (opt). To improve the plot's readability, only the results for the CogVM are depicted.
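The normalization itself can be spelled out directly. The following Smalltalk workspace snippet is a minimal sketch with invented runtimes (the values and variable names are placeholders, not measurements from the thesis): each CogVM result is divided by the mean of the matching RoarVM+OMOP (opt) results.

    | baselineRuns cogRuns baselineMean |
    "Runtimes in milliseconds; values invented for illustration only."
    baselineRuns := #(412 405 398).    "RoarVM+OMOP (opt), one benchmark"
    cogRuns      := #(148 151 144).    "CogVM with AST-OMOP, same benchmark"
    baselineMean := (baselineRuns inject: 0 into: [:sum :t | sum + t]) / baselineRuns size.
    cogRuns collect: [:t | (t / baselineMean) asFloat]    "normalized to the baseline mean"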

They show that six of the microbenchmarks and one of the kernel benchmarks achieve higher absolute performance on the RoarVM+OMOP (opt). However, the majority maintains its higher absolute performance on top of the CogVM. On average, the kernel benchmarks require a 5.7x higher runtime on the RoarVM+OMOP (opt). The AmbientTalkST NBody benchmark is about 21% slower, while the AmbientTalkST Fannkuch benchmark is about 14.3x faster.

To conclude, the AST-transformation-based implementation is currently the faster implementation because it can benefit from the overall higher performance of the CogVM. The results for the RoarVM+OMOP (opt) are not optimal, but an integration into the CogVM or other VMs with JIT compilation might result in better overall performance. This assumption is further supported by
