
5.5 Experimental Evaluation

For comparison, the unoptimized plan was executed k′ times and we measured the total execution time W(P2) · k′. This experiment was repeated 100 times. Both unoptimized and optimized execution show linear scalability with increasing batch size k′, with the difference that for optimized execution we observe a logical y-intercept that is higher than zero. As a result, the unoptimized execution shows a constant relative execution time, while the optimized execution shows a relative execution time that decreases with increasing batch size and tends towards a lower bound, which is given by the partial plan costs of operators that do not benefit from partitioning. It is important to note that (1) even for one-message partitions the overhead is negligible, and (2) even small numbers of messages within a batch significantly reduce the relative execution time.
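To make this cost behavior concrete, the following minimal sketch models the relative (per-message) execution time with a batch-constant and a per-message cost share. All constants are hypothetical and are not measured values from the experiment; only the split into batch-constant and per-message costs reflects the argument above.

```python
# Minimal sketch of the relative execution time behavior.
# All constants are hypothetical, not measured values.

C_BENEFIT = 30.0   # assumed cost of operators that benefit from partitioning
                   # (executed once per batch)                        [ms]
C_NONPART = 20.0   # assumed per-message cost of operators that do not
                   # benefit from partitioning (lower bound)          [ms]
OVERHEAD  = 1.0    # assumed partitioning overhead per batch          [ms]

W_P2 = C_BENEFIT + C_NONPART   # per-message cost of the unoptimized plan

def relative_unoptimized(k: int) -> float:
    """Total W(P2) * k' divided by k' -> constant per-message cost."""
    return (W_P2 * k) / k

def relative_optimized(k: int) -> float:
    """Batch-constant costs are amortized; tends towards C_NONPART."""
    return (OVERHEAD + C_BENEFIT + C_NONPART * k) / k

for k in (1, 2, 10, 100, 1000):
    print(f"k'={k:5d}  unopt={relative_unoptimized(k):6.2f} ms"
          f"  opt={relative_optimized(k):6.2f} ms")
```

For k′ = 1 the optimized per-message cost exceeds the unoptimized one only by the small overhead, while for growing k′ it approaches C_NONPART, which mirrors the lower bound described above.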

Figure 5.19: Varying R and sel

Second, we evaluated the batch size k′ for different (fixed-interval) message rates R and selectivities sel in order to validate our assumptions about the batch size estimation k′ = R · ∆tw under the influence of message queue partitioning. We processed |M| = 100 messages with plan instances of P2, where all sub-experiments were repeated 100 times with a fixed waiting time of ∆tw = 10 s. Figure 5.19 shows the influence of the message rate R on the average number of messages per batch. We observe (1) that the higher the message rate, the higher the number of messages per batch, and (2) that the selectivity determines the reachable upper bound. However, up to this upper bound, the batch size is independent of the selectivity because the waiting time until a partition is executed (∆tw · 1/sel) grows as sel decreases, which compensates for the lower per-partition message rate. Similarly, an increasing waiting time also showed the expected behavior of linearly increasing batch sizes until the upper bound is reached.
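As a rough illustration of this estimation, the sketch below combines k′ = R · ∆tw with a selectivity-dependent upper bound. The assumption that a partition can hold at most sel · |M| messages, as well as the function name expected_batch_size, are ours and not part of the original experiment setup.

```python
# Minimal sketch (assumed model, not the thesis implementation): expected
# number of messages per batch under message queue partitioning.

def expected_batch_size(R: float, dt_w: float, sel: float, num_messages: int) -> float:
    """Expected messages per partition batch.

    R            message arrival rate [msg/s]
    dt_w         waiting time [s]
    sel          partitioning selectivity (fraction of messages per partition)
    num_messages total number of messages |M| in the experiment
    """
    # A single partition receives messages at rate R * sel and waits
    # dt_w / sel until execution, so the estimate k' = R * dt_w is
    # independent of sel ...
    estimate = (R * sel) * (dt_w / sel)   # = R * dt_w
    # ... until the assumed upper bound of sel * |M| messages per partition.
    upper_bound = sel * num_messages
    return min(estimate, upper_bound)

# Example: |M| = 100 messages, dt_w = 10 s, varying sel and R.
for sel in (0.05, 0.1, 0.2):
    for R in (0.1, 0.5, 1.0, 5.0):
        k = expected_batch_size(R, 10.0, sel, 100)
        print(f"sel={sel:4.2f}  R={R:3.1f} msg/s  ->  k'={k:5.1f}")
```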

Latency Time

Furthermore, we evaluated the latency influence of MFO. While the total latency time of message sequences is directly related to the throughput and thus reduced anyway, the latency time of single messages needs further investigation. In this subsection, we analyze the given maximum latency guarantee and latency times in overload situations.

In detail, we executed |M| = 1,000 messages with plan P2 using a maximum latency constraint of lc = 10 s and measured the latency time T_L(m_i) of single messages m_i. There, we fixed a selectivity of sel = 0.1 and a message arrival rate of R = 5 msg/s, used different message arrival rate distributions (fixed, Poisson), and analyzed the influence of serialized external behavior. In order to discuss the worst-case consideration, we computed the waiting time ∆tw such that T_L(M′ = k′/sel) = lc (which is typically only used as a deescalation strategy). Here, ∆tw was computed as 981.26 ms because in the worst case there are 1/sel = 10 different partitions plus the execution time of the last partition.

Note that the selectivity has no influence on the variance of message latency times but on
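As a worked example of this worst-case computation, the following sketch derives the waiting time from the latency constraint. The helper function and the estimated execution time of the last partition (roughly 187 ms, implied by the reported numbers) are assumptions used for illustration only.

```python
# Minimal sketch (hypothetical helper, assumed cost term): deriving the
# deescalation waiting time dt_w from a maximum latency constraint lc.
# In the worst case, a message waits for 1/sel partition executions plus
# the execution time of the last partition.

def waiting_time_for_latency_constraint(lc: float, sel: float, w_last: float) -> float:
    """Largest dt_w such that (1/sel) * dt_w + w_last <= lc.

    lc      maximum latency constraint [ms]
    sel     partitioning selectivity (1/sel partitions in the worst case)
    w_last  estimated execution time of the last partition [ms] (assumed input)
    """
    num_partitions = 1.0 / sel
    return (lc - w_last) / num_partitions

# With lc = 10 s, sel = 0.1, and an estimated last-partition execution time of
# about 187.4 ms, this yields a waiting time close to the 981.26 ms reported above.
print(waiting_time_for_latency_constraint(lc=10_000.0, sel=0.1, w_last=187.4))
```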

