
flexibility they provide to model systems with multithreaded executions. In comparison, analytical approaches such as Markov models have difficulty expressing multithreaded systems, not least because of the state-space explosion problem: each Petri net marking becomes a single state in the Markov model. Since a Markov model can be seen as a finite state machine, it is better suited to single-threaded systems.

The paper is organized as follows. Section II presents<br />

a) the description of experiments, b) an analytical model,<br />

and c) the Petri net model for HTTP request rejection rate<br />

in JBoss. Section III focuses on experimental and<br />

analytical results. Section IV offers some possible<br />

directions to the future research and finally section V<br />

contains some concluding remarks.<br />

II. DESCRIPTION OF EXPERIMENT

A. Experimental Environment

The experimental environment consists of two hosts (client and server) located remotely from each other in a LAN environment. The server host on which JBoss is running is excluded from running other tasks to ensure the consistency of the data sets collected. The Duke's Bank application [18] is transformed into a web service and deployed on JBoss. JBoss uses Apache Tomcat as its default web server. The client host generates service requests to JBoss. The bandwidth of the LAN is shared with other users not relevant to this experiment. Tools such as Wireshark [19] and Ping are used to measure the round trip delay (RTD), excluding the time spent in the hosts. Compared to the time spent in the server, the RTD is observed to be negligible and is therefore ignored in the experimental analyses. The system structure is illustrated in Fig. 1.

Figure 1. The experimental setup: the WS client (SoapUI) on the client host exchanges SOAP/HTTP requests and responses with the Banking web service deployed on JBoss AS on the server host.

            Client            Server
OS          Windows 7         Windows Vista
Processor   1.67 GHz          2.66 GHz
RAM         1.00 GB           2.00 GB
Software    SoapUI (3.6.1)    JBoss AS (4.2.2)

SoapUI [20] is used to generate controlled service request loads to the server. SoapUI is an open-source testing tool for service-oriented architectures. Through its graphical user interface, SoapUI is capable of generating and mocking service requests in SOAP format. There are two main parameters in the SoapUI load testing tool that can be set to control the workload of the application server: the number of threads, representing the virtual users, and the number of requests (runs) generated per thread. For example, if the number of threads is set to 20 and the number of runs per thread is set to 10, then there are 20 clients, each sending 10 SOAP requests, for a total of 200 requests. SoapUI measures several performance parameters, such as the average response time, avg, the transactions per second, tps, and the number of transaction requests that failed in the process, denoted err. Avg is the average time difference between the time a request is sent out and the time the response is received. Tps, also called the arrival rate, is the average number of requests generated by the clients per second.
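To make the workload arithmetic concrete, the following Python sketch models one load configuration and the metrics SoapUI reports for it. The names LoadConfig and LoadResult are purely illustrative and are not part of SoapUI's API.

```python
from dataclasses import dataclass

@dataclass
class LoadConfig:
    threads: int  # virtual users
    runs: int     # requests generated per thread

    @property
    def cnt(self) -> int:
        # total number of requests issued in one load test
        return self.threads * self.runs

@dataclass
class LoadResult:
    avg: float  # average response time per request, in seconds
    tps: float  # arrival rate: requests generated per second
    err: int    # number of failed (rejected) requests

config = LoadConfig(threads=20, runs=10)
print(config.cnt)  # 200, matching the 20 x 10 example in the text
```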

The experiments have been conducted with sixteen different load configurations. For each load test, SoapUI returns the values for avg, tps, and err. Each test is repeated 5 times and the average of the returned values is calculated. The total time T for each load test configuration is computed as follows:

T = cnt / tps    (1)

where cnt is the total number of requests for each test. Consequently, the error and success rates for each load are computed by:

rate_error = err / T    (2)

rate_success = (cnt - err) / T

The error reports generated by SoapUI consist of all types of potential errors that are generated by the network, the application server, and the web service itself. However, this study considers only the errors that are generated by JBoss when the HTTP requests are rejected. Therefore, rate_error is treated as rate_rejected, the rate of rejected requests.
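A minimal sketch of (1) and (2), assuming T = cnt/tps as reconstructed above and using illustrative (not measured) input values:

```python
def observed_rates(threads: int, runs: int, tps: float, err: int):
    """Total test time and observed error/success rates, per (1) and (2)."""
    cnt = threads * runs            # total number of requests in the test
    T = cnt / tps                   # (1) total time of the load test, in seconds
    rate_error = err / T            # (2) rejected requests per second
    rate_success = (cnt - err) / T  # successful requests per second
    return T, rate_error, rate_success

# Illustrative values only: 20 threads x 10 runs, tps = 35 req/s, err = 60
T, rate_error, rate_success = observed_rates(threads=20, runs=10, tps=35.0, err=60)
```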

B. The Analytical Model

The actual rate of requests rejected can be obtained from (2). This rate can also be estimated in a different way. Recall that avg is the average response time for one request; therefore, the service rate of a single thread is 1/avg. In order for JBoss not to reject any incoming request, it should be able to use enough threads to keep up with the arrival rate. Thus, JBoss will reject no requests if the following holds:

threads · (1/avg) ≥ tps,   or equivalently,   threads/avg - tps ≥ 0

Otherwise, some requests will be rejected. Thus, the number of requests rejected per unit of time, i.e. rate_rejected, will be:

rate_rejected = tps - threads/avg    (3)

Consequently, the total number of requests rejected is:

total_rejected = rate_rejected · T = (tps - threads/avg) · T    (4)
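As a sketch of how (3) and (4) could be evaluated against the measured rejection rate from (2), with the same illustrative (not measured) numbers as above:

```python
def analytical_rejections(threads: int, runs: int, avg: float, tps: float):
    """Analytical estimate of the rejection rate and total rejected requests, per (3) and (4)."""
    service_rate = threads / avg                  # threads/avg: requests JBoss can serve per second
    rate_rejected = max(0.0, tps - service_rate)  # (3); zero whenever threads/avg >= tps
    T = (threads * runs) / tps                    # (1) total time of the load test
    total_rejected = rate_rejected * T            # (4)
    return rate_rejected, total_rejected

# Illustrative values: capacity 20/0.8 = 25 req/s < tps = 35 req/s, so rejections are predicted
rate_rejected, total_rejected = analytical_rejections(threads=20, runs=10, avg=0.8, tps=35.0)
measured_rate = 60 / (200 / 35.0)  # rate from (2) with the illustrative err = 60, for comparison
```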

