2nd USENIX Conference on Web Application Development

[Figure 7: Provisioning Ref CPU under increasing workload. (a) Using Amazon's Elastic Load Balancer; (b) Using our system. Both panels plot response time (ms) against request rate (req/s, 0-30), with the SLO threshold and the 1st, 2nd, and 3rd adaptation points marked; annotated workload = 20.7 req/s in (a) and 22.7 req/s in (b).]

our system, the third instance (a very fast one) is given a higher workload than the others, so the system requires a fourth instance only above 22.7 req/s.

Figure 8 shows similar results for the Ref I/O application. Here as well, our system balances traffic between instances such that they exhibit identical performance, whereas ELB creates significant performance differences between the instances. Our system can sustain up to 9 req/s when using three instances, while ELB can sustain only 7 req/s.

These results show that one should employ adaptive load balancing to correctly assign weights to forwarding instances when distributing traffic in the Cloud. By doing so, one can achieve homogeneous performance from heterogeneous instances and make more efficient use of these instances.
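The weight-assignment idea can be sketched as follows. This is a minimal illustration of adaptive weighting, not the paper's actual algorithm: each backend's forwarding weight is set inversely proportional to its measured response time, so faster instances receive proportionally more traffic and all instances converge toward similar latencies.

```python
def assign_weights(response_times_ms):
    """Assign forwarding weights inversely proportional to each
    instance's measured response time, normalized to sum to 1.
    Faster (lower-latency) instances receive more traffic."""
    inverse = [1.0 / rt for rt in response_times_ms]
    total = sum(inverse)
    return [w / total for w in inverse]

# Three heterogeneous instances: the third is twice as fast as the
# others, so it receives half of the total traffic.
weights = assign_weights([300.0, 300.0, 150.0])
```

In practice the measured response times would be refreshed periodically and the weights pushed to the load balancer, but the proportional-inverse rule above captures the core idea.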

5.3 Effectiveness of Performance Prediction and Resource Provisioning

We now demonstrate the effectiveness of our system in provisioning multi-tier Web applications. In this scenario, in addition to using our load balancer, we also need to predict the performance that each new machine would have if it were added to the application server or database server tier, in order to decide which tier a new instance should be assigned to. The Amazon cloud does not have standard automatic mechanisms for driving such choices, so we do not compare our approach with Amazon's resource provisioning service.

[Figure 8: Provisioning Ref I/O under increasing workload. (a) Using Amazon's Elastic Load Balancer; (b) Using our system. Both panels plot response time (ms) against request rate (req/s, 0-12), with the SLO threshold and the 1st and 2nd adaptation points marked.]

We use our system to provision the TPC-W e-commerce benchmark using the "shopping mix" workload. This standard workload generates 80% read-only interactions and 20% read-write interactions. We set the SLO for the response time of TPC-W to 500 ms. We increase the workload by creating corresponding numbers of Emulated Browsers (EBs). Each EB simulates a single user who browses the application. Whenever an EB leaves the application, a new EB is automatically created to maintain a constant load.
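The closed-loop workload model can be sketched as follows. This is a simplified simulation (the departure probability and step structure are assumptions for illustration, not TPC-W's actual session model): each step, some EBs finish their sessions and each departure is replaced immediately, so the concurrency level never changes.

```python
import random

def simulate_constant_load(num_ebs, steps, leave_prob=0.1, seed=42):
    """Simulate a closed-loop load generator: `num_ebs` emulated
    browsers run concurrently; at each step every EB may leave with
    probability `leave_prob`, and each departure is immediately
    replaced, keeping the offered load constant."""
    rng = random.Random(seed)
    active = num_ebs
    replacements = 0
    for _ in range(steps):
        departures = sum(1 for _ in range(active) if rng.random() < leave_prob)
        active -= departures
        # Replace every departed EB right away: load stays constant.
        active += departures
        replacements += departures
    return active, replacements
```

Because every departure is paired with a replacement, the active count equals `num_ebs` throughout, which is what makes the request rate a controlled variable in the experiment.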

When the overall response time of the application violates the SLO, we request a new instance from the Cloud and profile it using the reference application. Thanks to the performance correlations between the tiers of TPC-W and the reference application, we use the performance profile of the new instance to predict the performance of any tier of TPC-W if it were using the new instance.
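The tier-assignment decision can be illustrated as follows. This is a hypothetical sketch: the function names, the linear form of the correlation, and all numeric parameters are assumptions, not the paper's actual model. The idea is that the new instance's reference-application profile is mapped through a per-tier correlation to a predicted tier response time, and the instance is assigned to the tier with the largest predicted improvement.

```python
def predict_tier_time(ref_profile_ms, slope, intercept=0.0):
    """Predict a TPC-W tier's response time on a new instance from the
    instance's reference-application profile, via an assumed linear
    per-tier correlation (slope/intercept are illustrative)."""
    return slope * ref_profile_ms + intercept

def choose_tier(ref_profile_ms, tiers):
    """Pick the tier where the new instance yields the largest
    predicted reduction in response time.
    `tiers` maps tier name -> (current_time_ms, slope, intercept)."""
    best_tier, best_gain = None, float("-inf")
    for name, (current_ms, slope, intercept) in tiers.items():
        predicted = predict_tier_time(ref_profile_ms, slope, intercept)
        gain = current_ms - predicted
        if gain > best_gain:
            best_tier, best_gain = name, gain
    return best_tier

# Hypothetical correlation parameters for the two tiers.
tiers = {
    "app": (400.0, 2.0, 50.0),
    "db": (450.0, 1.5, 30.0),
}
```

With a reference profile of 100 ms, the predicted times are 250 ms (app) and 180 ms (db), so the database tier shows the larger gain and receives the instance.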
