The Design and Implementation of the Anykernel and Rump Kernels


Our implementation is different from the above. Instead of doing transport and network layer processing in the rump kernel, we observe that regardless of what the guest does, processing will be done by the host. At best, we would need to undo what the guest did so that we can feed the payload data to the host's sockets interface. Instead of using the TCP/IP protocol suite in the rump kernel, we redefine the inet domain and attach our implementation at the protocol switch layer [114]. We call this new implementation sockin to reflect it being socket inet. The attachment to the kernel is illustrated in Figure 3.23. Attaching at the domain level means communication from the kernel's socket layer is done with usrreq's, which in turn map to the host socket API in a very straightforward manner. For example, for PRU_ATTACH we call socket(), for PRU_BIND we call bind(), for PRU_CONNECT we call connect(), and so forth. The whole implementation is 500 lines of code (including whitespace and comments), making it 1/20th of the size of Slirp.
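
The shape of the mapping can be sketched as follows. This is an illustrative sketch, not the actual sockin source: host_socket(), host_bind() and host_connect() stand in for whatever hypercalls the hosting layer provides for reaching the host's sockets, and SO2FD() is a hypothetical accessor for the host file descriptor stored in the socket's protocol control block.

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/mbuf.h>
    #include <sys/protosw.h>
    #include <sys/socket.h>
    #include <sys/socketvar.h>

    /*
     * Sketch of the sockin request router: each usrreq from the
     * kernel's socket layer is forwarded to the corresponding
     * host socket call via a (hypothetical) hypercall.
     */
    static int
    sockin_usrreq(struct socket *so, int req, struct mbuf *m,
        struct mbuf *nam, struct mbuf *control)
    {
            int error;

            switch (req) {
            case PRU_ATTACH:
                    /* create a backing socket on the host */
                    error = host_socket(AF_INET, so->so_type, 0,
                        &SO2FD(so));
                    break;
            case PRU_BIND:
                    /* bind the host socket to the guest-supplied address */
                    error = host_bind(SO2FD(so),
                        mtod(nam, struct sockaddr *));
                    break;
            case PRU_CONNECT:
                    /* connect the host socket to the guest-supplied address */
                    error = host_connect(SO2FD(so),
                        mtod(nam, struct sockaddr *));
                    break;
            /* ... the remaining PRU_* requests map analogously ... */
            default:
                    error = EOPNOTSUPP;
                    break;
            }

            return error;
    }

Since each request is a thin wrapper around one host call, the small code size quoted above follows naturally from the design.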

Since sockin attaches as the Internet domain, it is mutually exclusive with the regular TCP/IP protocol suite. Furthermore, since the interface layer is excluded, the sockin approach is not suitable for scenarios which require full TCP/IP processing within the virtual kernel, e.g. debugging the TCP/IP stack. In such cases one of the other two networking models should be used. This choice may be made individually for each rump kernel instance.

3.9.2 Disk Driver

A disk block device driver provides storage medium access and is instrumental to the operation of disk-based file systems. The main interface is simple: a request instructs the driver to read or write a given number of sectors at a given offset. The disk driver queues the request and returns. The request is handled in an order according to a set policy, e.g. the disk head elevator. The request must be handled in a timely manner, since during the period that the disk driver is handling the request
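
To illustrate the read/write interface just described, the following is a minimal sketch of a strategy routine servicing block I/O requests against a host file. host_pread() and host_pwrite() are hypothetical hypercalls, and backing_fd is assumed to have been opened at attach time; note that a real driver queues the request and completes it asynchronously, as described above, rather than handling it inline.

    #include <sys/param.h>
    #include <sys/types.h>
    #include <sys/buf.h>

    /* host fd of the file backing the virtual disk (hypothetical) */
    static int backing_fd;

    /*
     * Service one block I/O request synchronously against the
     * backing host file.  The sector offset in the buf is converted
     * to a byte offset assuming DEV_BSIZE-sized sectors.
     */
    static void
    rumpblk_strategy(struct buf *bp)
    {
            off_t off = (off_t)bp->b_blkno << DEV_BSHIFT;
            ssize_t n;

            if (BUF_ISREAD(bp))
                    n = host_pread(backing_fd, bp->b_data,
                        bp->b_bcount, off);
            else
                    n = host_pwrite(backing_fd, bp->b_data,
                        bp->b_bcount, off);

            if (n == -1) {
                    bp->b_error = EIO;
                    bp->b_resid = bp->b_bcount;
            } else {
                    bp->b_resid = bp->b_bcount - n;
            }

            /* signal completion to the waiting file system */
            biodone(bp);
    }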
