
8.5. Assessment of Performance Characteristics

The high overhead on the CogVM can be explained by the differences in implementations. On the one hand, the AST transformation adds a significant number of bytecodes to each method that need to be executed, and on the other hand, the reflective operations preclude some of the optimizations of the CogVM. For instance, the basic benefit of the JIT compiler, compiling bytecodes for field access down to inline machine code, cannot be realized, since these operations have to go through the domain object's intercession handler. Other optimizations, such as polymorphic inline caches [Hölzle et al., 1991], cannot yield any benefit either, because method invocations are performed reflectively inside the intercession handler, where polymorphic inline caches are not applied.
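To illustrate why inlined field access is precluded, the following sketch contrasts the two access paths. It is an illustration only, in Python rather than the C++/Slang of the actual VMs, and all names (`Domain`, `read_field`, `owner`) are hypothetical stand-ins for the OMOP's intercession machinery:

```python
class Domain:
    """Hypothetical domain object with an intercession handler."""
    def read_field(self, obj, index):
        # Arbitrary policy code may run here. Because the handler can
        # differ per domain, a JIT cannot reduce the access to a
        # single inlined memory load.
        return obj.fields[index]

class Obj:
    def __init__(self, owner, fields):
        self.owner = owner      # domain that owns this object
        self.fields = fields

def read_field_unenforced(obj, index):
    # Unenforced mode: a plain load, which a JIT can compile
    # down to a single machine instruction.
    return obj.fields[index]

def read_field_enforced(obj, index):
    # Enforced mode: every access is routed through the owner's
    # intercession handler, i.e., an extra dynamic call.
    return obj.owner.read_field(obj, index)

o = Obj(Domain(), [10, 20, 30])
assert read_field_unenforced(o, 1) == 20
assert read_field_enforced(o, 1) == 20
```

The two calls return the same value; the difference lies entirely in the dynamic dispatch on the enforced path, which is exactly what defeats the JIT's field-access inlining.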

In the RoarVM+OMOP, the overhead is comparatively lower, because it only performs a number of additional checks and then executes the actual reflective code (cf. Sec. 7.2.1). Thus, the initial performance loss is significantly lower.

Speculating on how the CogVM could benefit from similar adaptations, there are indications that the JIT compiler could be adapted to deliver improved performance for code that uses the OMOP. However, because of the restricted generalizability of these results, it is not clear whether the CogVM could reach the same degree of efficiency as the RoarVM+OMOP implementation. This question is, however, beyond the scope of this dissertation and will be part of future work (cf. Sec. 9.5.2).

8.5.2. Inherent Overhead

Context and Rationale Sec. 7.2.1 outlined the implementation strategy for the RoarVM+OMOP. One performance-relevant aspect of it is the modification of bytecodes and primitives to perform additional checks. If a check finds that execution is performed in enforced mode, the VM triggers the intercession handlers of the OMOP. However, since these checks are on the performance-critical execution path, they prompted the question of how much they impact overall performance. This section assesses the inherent overhead on unenforced execution for both OMOP implementation strategies.
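The shape of such a check can be sketched as follows. This is a Python illustration of the general technique, not the RoarVM's C++ code; the flag name `execution_enforced` and the handler interface are hypothetical:

```python
class Domain:
    def read_field(self, obj, index):
        # Trivial intercession handler for the sketch.
        return obj.fields[index]

class Obj:
    def __init__(self, owner, fields):
        self.owner, self.fields = owner, fields

class Interpreter:
    def __init__(self):
        # Flag consulted on the critical path of every modified
        # bytecode and primitive (hypothetical name).
        self.execution_enforced = False

    def bytecode_push_field(self, stack, receiver, index):
        if self.execution_enforced:
            # Enforced mode: trigger the owner's intercession handler.
            stack.append(receiver.owner.read_field(receiver, index))
        else:
            # Unenforced mode: the original fast path, now preceded
            # by the branch above on every execution.
            stack.append(receiver.fields[index])

interp = Interpreter()
obj = Obj(Domain(), [7, 8])
stack = []
interp.bytecode_push_field(stack, obj, 1)  # unenforced: fast path
interp.execution_enforced = True
interp.bytecode_push_field(stack, obj, 0)  # enforced: via handler
assert stack == [8, 7]
```

Even when the branch is never taken, the extra test sits on the hot path of every affected bytecode, which is precisely the inherent overhead on unenforced execution that this section quantifies.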

AST-OMOP The AST-OMOP has an overhead only in terms of memory usage, but it does not have a direct impact on execution performance. The memory overhead comes from the need to keep methods for both execution modes, the enforced as well as the unenforced one, in the image (cf. Sec. 7.1.1). Furthermore, since in most classes the AST-OMOP represents the owner of an object as an extra object field, there is also a per-object memory overhead. The per-

