The Design and Implementation of the Anykernel and Rump Kernels

interrupts from hardware clock interrupts to run periodic tasks (used e.g. by TCP timers).

The users of the kernel softint facility expect them to operate exactly according to the principles we listed. Initially, for simplicity, softints were implemented as regular threads. The use of regular threads resulted in a number of problems. For example, when the Ethernet code schedules a soft interrupt to do IP level processing for a received frame, the code first schedules the softint and only later adds the frame to the processing queue. When softints were implemented as regular threads, the host could run the softint thread before the Ethernet interrupt handler had put the frame on the processing queue. If the softint ran before the packet was queued, the packet would not be delivered until the next incoming packet was handled.
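The ordering constraint can be illustrated with a sketch modeled on the description above. The function and variable names (example_attach(), example_rxintr(), ipintr_sih, example_ipintrq) are illustrative rather than taken from the NetBSD source, and locking is omitted for brevity; softint_establish(), softint_schedule() and IF_ENQUEUE() are the standard kernel interfaces.

#include <sys/intr.h>	/* softint_establish(), softint_schedule() */
#include <sys/mbuf.h>
#include <net/if.h>	/* struct ifqueue, IF_ENQUEUE() */

static void *ipintr_sih;		/* softint handle, illustrative name */
static struct ifqueue example_ipintrq;	/* IP input queue, illustrative */

static void
ipintr_handler(void *arg)
{
	/* Dequeue frames from example_ipintrq and run IP input (omitted). */
}

/* Driver attach time: establish the network-level softint. */
void
example_attach(void)
{
	ipintr_sih = softint_establish(SOFTINT_NET, ipintr_handler, NULL);
}

/* Hardware interrupt context: a frame has arrived. */
void
example_rxintr(struct mbuf *m)
{
	/* First request IP-level processing ... */
	softint_schedule(ipintr_sih);

	/*
	 * ... and only then queue the frame.  If the softint could run
	 * here, before IF_ENQUEUE() completes (as happened with the
	 * regular-thread implementation), it would find the queue empty
	 * and the frame would wait for the next incoming packet.
	 */
	IF_ENQUEUE(&example_ipintrq, m);
}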

Soft interrupts are implemented in sys/rump/librump/rumpkern/intr.c according to the principles we listed earlier. The standard NetBSD implementation was not usable in a rump kernel since that implementation is based on direct interaction with the NetBSD scheduler.
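One way to satisfy the ordering requirement on top of host threads is sketched below. This is a simplified illustration of the principle, not the code in intr.c: a scheduled softint is only marked pending, and the dedicated softint thread runs it only after the scheduling context has signaled that it is done. The names (example_softint_schedule(), example_intr_exit(), example_softint_thread()) are illustrative, and a single softint handler is assumed to have been established elsewhere.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t si_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t si_cv = PTHREAD_COND_INITIALIZER;
static bool si_pending;

struct softint {
	void (*si_func)(void *);
	void *si_arg;
};
static struct softint si_net;	/* established elsewhere (not shown) */

/* Called from the context acting as the hardware interrupt handler. */
void
example_softint_schedule(void)
{
	pthread_mutex_lock(&si_mtx);
	si_pending = true;
	pthread_mutex_unlock(&si_mtx);
	/* No wakeup here: the softint must not run yet. */
}

/* Called when the scheduling context finishes its interrupt work. */
void
example_intr_exit(void)
{
	pthread_mutex_lock(&si_mtx);
	if (si_pending)
		pthread_cond_signal(&si_cv);
	pthread_mutex_unlock(&si_mtx);
}

/* Dedicated host thread which runs the softint handlers. */
void *
example_softint_thread(void *arg)
{
	for (;;) {
		pthread_mutex_lock(&si_mtx);
		while (!si_pending)
			pthread_cond_wait(&si_cv, &si_mtx);
		si_pending = false;
		pthread_mutex_unlock(&si_mtx);
		si_net.si_func(si_net.si_arg);
	}
	return NULL;
}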

3.4 Virtual Memory Subsystem

The main purpose of the NetBSD virtual memory subsystem is to manage memory address spaces and the mappings to the backing content [20]. While the memory address spaces of a rump kernel and its clients are managed by their respective hosts, the virtual memory subsystem is conceptually exposed throughout the kernel. For example, file systems are tightly built around being able to use virtual memory subsystem data structures to cache file data. To illustrate, the standard way the kernel reads data from a file system is to memory map the file, access the mapped range, and possibly fault in missing data [101].
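The pattern is analogous to how a userspace program can read a file through a memory mapping. The following standalone sketch uses POSIX mmap() as an analogy; it is not the kernel's own mechanism, and it assumes that off is page-aligned and that the requested range lies within the file.

#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

ssize_t
read_via_mapping(const char *path, void *buf, size_t len, off_t off)
{
	int fd = open(path, O_RDONLY);
	if (fd == -1)
		return -1;

	/* Map the requested range; no data is read at this point. */
	void *win = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);
	close(fd);
	if (win == MAP_FAILED)
		return -1;

	/* Accessing the mapping faults in the pages not yet in memory. */
	memcpy(buf, win, len);

	munmap(win, len);
	return (ssize_t)len;
}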
