

For vertical and horizontal scaling, many of these values are either fixed or are automatically calculated based upon the scaling factor.

With this knowledge, the controller instantiates the software components of the SUT. It begins by starting an RMI server and connecting to a satellite process on each node machine identified as part of the test, in order to give it specific instructions. Wherever lists are given throughout the benchmark, work is distributed equally among the list entries using a round-robin algorithm.
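The round-robin distribution can be illustrated with a minimal sketch; the class and method names below are assumptions for illustration, not taken from the SPECjms2007 sources.

import java.util.ArrayList;
import java.util.List;

public class RoundRobinDistributor {

    /** Distributes work items across nodes in round-robin order. */
    public static <T> List<List<T>> distribute(List<T> workItems, int nodeCount) {
        List<List<T>> assignments = new ArrayList<>(nodeCount);
        for (int i = 0; i < nodeCount; i++) {
            assignments.add(new ArrayList<>());
        }
        // Item i goes to node (i mod nodeCount), so no node receives more
        // than one item beyond any other.
        for (int i = 0; i < workItems.size(); i++) {
            assignments.get(i % nodeCount).add(workItems.get(i));
        }
        return assignments;
    }
}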

Period 2: Starting Agents The satellite is a simple part of the framework that knows how to build the correct environment to start the required JVM processes. It takes the controller's configuration and starts the agents relevant to that node. There is an architectural lower bound of one Agent-JVM per class of location (SP, SM, DC, HQ), meaning a minimum of four Agent-JVMs in the SUT. Each agent connects back to the controller to signal its readiness.
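A satellite's launching of one Agent-JVM per location class might look like the following sketch; the classpath, main class, and flags are illustrative assumptions, not the benchmark's actual invocation.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Satellite {

    private static final String[] LOCATION_CLASSES = {"SP", "SM", "DC", "HQ"};

    /** Starts one agent JVM per location class assigned to this node. */
    public List<Process> startAgents(String configFile) throws IOException {
        List<Process> agents = new ArrayList<>();
        for (String locationClass : LOCATION_CLASSES) {
            ProcessBuilder pb = new ProcessBuilder(
                    "java", "-cp", "specjms.jar",   // assumed classpath
                    "org.spec.jms.Agent",           // assumed main class
                    "-config", configFile,
                    "-location", locationClass);
            pb.inheritIO();                         // share the satellite's console
            agents.add(pb.start());
        }
        return agents;
    }
}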

Period 3: Starting Event Handlers The controller signals all agents to initialise their event handler threads and connect to the JMS resources they will be using (this includes both incoming and outgoing Destinations). Each event handler is implemented as a Java thread.
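A minimal sketch of an event handler as a Java thread holding its own JMS session follows, assuming the standard javax.jms API; the class shape and queue name handling are assumptions for illustration.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class EventHandler extends Thread {

    private final ConnectionFactory factory;
    private final String queueName;
    private volatile boolean running = true;

    public EventHandler(ConnectionFactory factory, String queueName) {
        this.factory = factory;
        this.queueName = queueName;
    }

    @Override
    public void run() {
        Connection connection = null;
        try {
            connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue(queueName));
            while (running) {
                Message message = consumer.receive(1000);  // poll with a timeout
                if (message != null) {
                    // Process the incoming event; a real handler may also
                    // produce messages to its outgoing destinations here.
                }
            }
        } catch (JMSException e) {
            e.printStackTrace();
        } finally {
            if (connection != null) {
                try { connection.close(); } catch (JMSException ignored) { }
            }
        }
    }

    public void shutdown() {
        running = false;
    }
}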

Period 4: Warmup Period Load-generating threads (drivers) ramp up their throughput from zero to their configured rate over the Warmup Period. This helps ensure the SUT is not swamped by an initial rush when many of its constituent elements may not yet be fully prepared. The agents are the only parts of the SUT which perform JMS operations (i.e. talk directly to the JMS middleware).
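A linear ramp of this kind might look like the following sketch; the pacing model, the names, and the pause() hook (used later in the drain period) are illustrative assumptions.

public class Driver extends Thread {

    private final double targetRatePerSec;
    private final long warmupMillis;
    private volatile boolean paused = false;   // set during the drain period

    public Driver(double targetRatePerSec, long warmupMillis) {
        this.targetRatePerSec = targetRatePerSec;
        this.warmupMillis = warmupMillis;
    }

    @Override
    public void run() {
        long start = System.currentTimeMillis();
        try {
            while (!isInterrupted()) {
                long elapsed = System.currentTimeMillis() - start;
                // Linear ramp: scale the target rate by the warmup progress.
                double fraction = Math.min(1.0, (double) elapsed / warmupMillis);
                double rate = targetRatePerSec * fraction;
                if (!paused && rate > 0) {
                    sendMessage();
                    Thread.sleep((long) (1000.0 / rate));  // pace to the current rate
                } else {
                    Thread.sleep(100);  // idle while paused or before the ramp begins
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // exit cleanly on interrupt
        }
    }

    /** Called by the controller at the start of the drain period. */
    public void pause() {
        paused = true;
    }

    private void sendMessage() {
        // Placeholder: a real driver would produce a JMS message here.
    }
}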

Period 5: Measurement Period The measurement period is also known as the steady-state period. All agents are running at their configured workload and no changes are made. The controller periodically (every thirty seconds by default) checks that there are no errors and may also collect periodic real-time performance statistics.
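The controller's periodic check could be realised as a scheduled task, as in the sketch below; the 30-second interval matches the default mentioned above, while the two hook methods are assumed names.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MeasurementMonitor {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            checkAgentsForErrors();      // abort the run if any agent reports an error
            collectRealtimeStatistics(); // optional periodic performance samples
        }, 30, 30, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    private void checkAgentsForErrors() { /* query each agent, e.g. via RMI */ }

    private void collectRealtimeStatistics() { /* gather throughput samples */ }
}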

Period 6: Drain Period To make sure that all produced messages have an opportunity to be consumed, the controller signals agents to pause their load-generating threads (drivers). This period is not expected to be long in duration, as a noticeable message backlog would in any case violate the throughput audit requirements.

Period 7: Stopping Event Handlers Agents will terminate all event handlers but remain present themselves so that the controller can collect final statistics.

Period 8: Post-processing Results Having collected statistics from all parties, the controller begins post-processing, including auditing and preparation of the final results.

Reporter Framework

The benchmark includes a reporter framework that prepares detailed reports about the run. For this purpose, the reporter framework enriches the collected measurement data with other information, e.g., throughput predictions and configuration settings.

The reporter framework has two major components:

1. Final Reporter

The controller takes formal measurements at three points during the run (see Figure 5.17). The first two, taken at the beginning and end of the measurement period, are used to audit the messaging throughput. The final measurement, at the end of the drain period, is used to audit the final message counts.
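Conceptually, the throughput audit reduces to differencing two of these snapshots; the following sketch shows the arithmetic, with Snapshot and its fields being illustrative assumptions rather than the benchmark's actual data model.

public class ThroughputAuditor {

    /** Snapshot of the total message count at one of the three measurement points. */
    public static class Snapshot {
        final long timestampMillis;
        final long totalMessages;

        Snapshot(long timestampMillis, long totalMessages) {
            this.timestampMillis = timestampMillis;
            this.totalMessages = totalMessages;
        }
    }

    /** Messaging throughput (messages/second) between two snapshots. */
    public static double throughput(Snapshot begin, Snapshot end) {
        long messages = end.totalMessages - begin.totalMessages;
        double seconds = (end.timestampMillis - begin.timestampMillis) / 1000.0;
        return messages / seconds;
    }
}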
