The x-Kernel: An architecture for implementing network protocols
[Figure 5: Message Manager — send cost (msec, 0–11) versus message size (bytes, 0–1400) for Unix and the x-kernel.]
cost in Unix varies significantly. In particular, the Unix curve can be divided into four distinct parts: (1) the incremental cost of going from 200 bytes to 500 bytes is 0.57 msec per 100 bytes; (2) the cost of sending 600 bytes is over 1 msec less than the cost of sending 500 bytes; (3) the incremental cost of sending between 600 and 1000 bytes is 0.25 msec per 100 bytes (the same as the x-kernel); and (4) the incremental cost of sending between 1100 and 1400 bytes is again 0.57 msec per 100 bytes.
The reason for this wide difference in behavior is that Unix does not provide a uniform message/buffer management system. Instead, it is the responsibility of each protocol to represent a message as a linked list of two different storage units: mbufs, which hold up to 118 bytes of data, and pages, which hold up to 1024 bytes of data [18]. Thus, the difference between the four parts of the Unix curve can be explained as follows: (1) a new mbuf is allocated for each 118 bytes (i.e., 0.57 msec / 2 = 0.28 msec is the cost of using an mbuf); (2) a page is allocated when the message size reaches 512 bytes (half a page); (3) the rest of the page is filled without the need to allocate additional memory; and (4) additional mbufs are used. Thus, the difference in the cost of sending 500 bytes and 600 bytes in Unix concretely demonstrates the performance penalty involved in using the "wrong" buffer management strategy; in this case, the penalty is 14%. Perhaps just as important as this quantitative impact is the "qualitative" difference between the buffer management schemes offered by the two systems: someone had to think about and write the data buffering code that results in the Unix performance curve.
Second, Figure 6 gives the performance of the x-kernel and Unix as a function of the number of open