The Design and Implementation of the Anykernel and Rump Kernels

[Figure 4.13: Time required to start, configure and send an initial packet. The plot shows cluster startup time in seconds against the number of routers in the cluster (50 to 250), with one curve for standard components and one for the self-contained configuration.]

4.6.3 System Call Speed

We compared rump system call performance against other technologies: Xen, QEMU (unaccelerated) and User-Mode Linux. We did this comparison by executing the setrlimit() system call 5 million times per thread in two simultaneously running host threads. We ran the UML and Xen tests on a Linux host. For calibration, we provide both the NetBSD and Linux native cases. We were unable to get UML or QEMU to use more than one host CPU. For a NetBSD host we present native system calls, a rump kernel guest, and a QEMU NetBSD guest. For Linux, we have native performance, a Linux Xen guest and a UML guest. The results are presented in Figure 4.14. The NetBSD native call is 16% faster than the Linux native call. We use this ratio to normalize the results when comparing rump kernels against Linux. We did not investigate the reason for the difference between NetBSD and Linux.
