The Design and Implementation of the Anykernel and Rump Kernels


After having received a rump kernel CPU and thread context, thread B wants to lock mutex M. M is held, and thread B will have to block and await M's release. Thread B will release the rump kernel CPU and sleep until A unlocks the mutex. After the mutex is unlocked, the host marks thread B as runnable; once B wakes up, it will attempt to schedule a rump kernel CPU and, after that, attempt to lock mutex M and continue execution. When B is done with the rump kernel call, it will return to the application. Before doing so, the CPU will be released and the implicit thread context will be freed.
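The following sketch illustrates this blocking path. It is a minimal illustration, not the actual implementation: host pthread primitives stand in for the host's thread facilities, and rump_cpu_schedule() and rump_cpu_release() are hypothetical names for the rump kernel's internal virtual CPU scheduler, stubbed out here.

/*
 * Sketch of the blocking path described above.  rump_cpu_schedule()
 * and rump_cpu_release() are hypothetical stand-ins for the rump
 * kernel's internal virtual CPU scheduler.
 */
#include <pthread.h>

static void rump_cpu_schedule(void) { /* acquire a virtual CPU (stub) */ }
static void rump_cpu_release(void) { /* release the virtual CPU (stub) */ }

struct rumpmutex {
	pthread_mutex_t hostmtx;	/* host lock protecting "held" */
	pthread_cond_t hostcv;		/* host condvar sleepers wait on */
	int held;
};

void
rumpmutex_enter(struct rumpmutex *mtx)
{

	pthread_mutex_lock(&mtx->hostmtx);
	while (mtx->held) {
		/* M is held: release the virtual CPU and sleep on the host */
		rump_cpu_release();
		pthread_cond_wait(&mtx->hostcv, &mtx->hostmtx);
		/* woken: reacquire a virtual CPU, then retry locking M */
		pthread_mutex_unlock(&mtx->hostmtx);
		rump_cpu_schedule();
		pthread_mutex_lock(&mtx->hostmtx);
	}
	mtx->held = 1;
	pthread_mutex_unlock(&mtx->hostmtx);
}

void
rumpmutex_exit(struct rumpmutex *mtx)
{

	pthread_mutex_lock(&mtx->hostmtx);
	mtx->held = 0;
	pthread_cond_signal(&mtx->hostcv);	/* make a sleeper runnable */
	pthread_mutex_unlock(&mtx->hostmtx);
}

The essential point of the arrangement is that the host provides the sleep and wakeup mechanism, while the virtual CPU is handed back before sleeping so that other threads may run inside the rump kernel in the meantime.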

Note that for thread A and thread B to run in parallel, both the host and the rump kernel must have multiprocessor capability. If the host is uniprocessor but the rump kernel is configured with multiple virtual CPUs, the threads can execute inside the rump kernel concurrently. In case the rump kernel is configured with only one CPU, the threads will execute within the rump kernel sequentially, irrespective of whether the host has one or more CPUs available for the rump kernel.
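For concreteness, the number of virtual CPUs can be chosen by the client at bootstrap time. The minimal example below assumes the RUMP_NCPU environment variable, which the rump kernel consults during rump_init(); the program is linked against the rump kernel base library (e.g. -lrump).

/*
 * Bootstrap a rump kernel with an explicit number of virtual CPUs.
 * Assumes the RUMP_NCPU environment variable consulted by rump_init().
 */
#include <rump/rump.h>

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{

	/* request two virtual CPUs, regardless of the host CPU count */
	setenv("RUMP_NCPU", "2", 1);

	if (rump_init() != 0) {
		fprintf(stderr, "rump kernel bootstrap failed\n");
		return 1;
	}
	printf("rump kernel booted\n");
	return 0;
}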

2.4 Virtual Memory

Virtual memory address space management in a rump kernel is relegated to the host because support in a rump kernel would not add value in terms of the intended use cases. The only case where full virtual memory support would be helpful is testing the virtual memory subsystem itself. Emulating page faults and memory protection in a usermode OS exhibits an over tenfold performance penalty, and the penalty can be significant in other, though not all, hypervisors [11]. Therefore, supporting a corner use case was not seen as worth the performance penalty in the other use cases.
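The division of labor can be made concrete with a sketch: page-level allocations in a rump kernel bounce directly to the host instead of going through page tables. The function name rumpkern_page_alloc() below is hypothetical, and host malloc() stands in for the rump kernel's actual memory allocation hypercall.

/*
 * Hypothetical sketch: address space management is the host's problem.
 * rumpkern_page_alloc() is an invented name, and host malloc() stands
 * in for the real memory allocation hypercall.  No page tables,
 * protection bits, or fault handlers are involved.
 */
#include <stdlib.h>

#define RUMP_PAGE_SIZE 4096

void *
rumpkern_page_alloc(size_t npages)
{

	return malloc(npages * RUMP_PAGE_SIZE);
}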

The implication of a rump kernel not implementing full memory protection is that it does not support accessing resources via page faults. There is no support in a rump kernel for memory mapping a file to a client. Supporting page faults inside a
