Theoretical and Experimental DNA Computation (Natural ...
4 Complexity Issues
memory access facilities (load and store), basic arithmetic operations, and conditional branching instructions. The compilation into DNA involves three basic stages: compiling the P-RAM program down to a low-level sequence of instructions; translating this program into a combinational logic network; and, finally, encoding the actions of the resulting Boolean networks into DNA.

The actions performed in the second stage lie at the core of the translation. For each processor invoked at the kth parallel step, there will be identical (except for memory references) sequences of low-level instructions; the combinational logic block that simulates this (parallel) instruction takes as its input all of the bits corresponding to the current state of the memory, and produces as output the same number of bits representing the memory contents after execution has completed. Effecting the necessary changes involves no more than employing the required combinational logic network to simulate the action of any low-level instruction. The main overhead in the simulation comes from accessing a specific memory location, leading to the log S(n) slow-down.
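To make the logarithmic memory-access overhead concrete, the following sketch (illustrative only, not the construction of [9]) reads one bit out of an S-cell memory using a binary tree of 2-way selectors, each built solely from NAND gates. The tree has one level per address bit, i.e. ⌈log2 S⌉ levels, which is where the log S(n) factor comes from.

```python
def mux2(sel, a, b):
    """2-way selector built only from NAND gates:
    returns b if sel == 1, otherwise a."""
    nand = lambda x, y: 1 - (x & y)
    not_sel = nand(sel, sel)
    return nand(nand(a, not_sel), nand(b, sel))

def read_memory(memory, address_bits):
    """Select one cell of `memory` with a binary tree of mux2
    gates, one tree level per address bit (most significant
    bit first). Returns (selected value, tree depth)."""
    level = list(memory)
    for bit in address_bits:
        half = len(level) // 2
        # bit == 0 keeps the lower half, bit == 1 the upper half
        level = [mux2(bit, level[i], level[i + half])
                 for i in range(half)]
    return level[0], len(address_bits)

memory = [0] * 8
memory[3] = 1
value, depth = read_memory(memory, [0, 1, 1])  # address 3 = 011
# value == 1, depth == 3 = log2(8)
```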
All other operations can be simulated by well-established combinational logic methods without any loss in run time. It is well known that the NAND function by itself provides a complete basis for computation; we therefore restrict our model to the simulation of such gates. In fact, realizing this basis in DNA provides a far less complicated simulation than using other complete bases.
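Completeness of the NAND basis is easy to verify mechanically. The following sketch (ours, for illustration) derives NOT, AND, and OR from NAND alone and checks all of them exhaustively:

```python
def NAND(x, y):
    """The single gate type used in the simulation."""
    return 1 - (x & y)

# NOT, AND, and OR expressed purely in terms of NAND,
# demonstrating that NAND is a complete basis by itself.
def NOT(x):
    return NAND(x, x)

def AND(x, y):
    return NAND(NAND(x, y), NAND(x, y))

def OR(x, y):
    return NAND(NAND(x, x), NAND(y, y))

# Exhaustive check over all Boolean inputs.
for x in (0, 1):
    assert NOT(x) == 1 - x
    for y in (0, 1):
        assert AND(x, y) == (x & y)
        assert OR(x, y) == (x | y)
```

Any Boolean network over {NOT, AND, OR} can therefore be rewritten gate-for-gate over NAND alone, with only a constant-factor increase in size.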
We now describe the simulation in detail. This detailed work is due mainly to the second author of [9], and is used with permission. We provide a full, formal definition of the CREW P-RAM model that we will use. This description will consider processors, memory, the control regime, memory access rules, and the processor instruction set. Finally, we describe the complexity measures that are of interest within this model.
The p processors, labelled P1, P2, ..., Pp, are identical, and can execute instructions from the set described below. Each processor has a unique identifier: that of Pi is i. The global common memory, M, consists of t locations M1, M2, ..., Mt, each of which is exactly r bits long. The initial input data consist of n items, which are assumed to be stored in locations M[1], ..., M[n] of the global memory. The main control program, C, sequentially executes instructions of the form
    k; for x ∈ Lk par do instk(x)
Here Lk is a set of processor identifiers. For each processor Px in this set, the instruction specified by instk(x) is executed. Each processor executes the same sequence of instructions, but on different memory locations. The control program language augments the basic parallel operation command with the additional control flow statements
    for counter-name ← start-value step step-value to end-value do
        instruction-sequence
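As a reading aid only (the function and parameter names below, such as `pram_step`, are invented for this sketch and are not part of the formal model), the semantics of a single `par do` step can be simulated sequentially: every active processor reads the same pre-step memory, and all writes are applied together afterwards, which also makes the "memory bits in, memory bits out" view of a parallel step explicit.

```python
def pram_step(memory, Lk, inst_k):
    """Simulate one parallel step `k; for x in Lk par do inst_k(x)`.

    memory -- list modelling the global memory M
    Lk     -- set of processor identifiers active at step k
    inst_k -- inst_k(x, snapshot) returns a dict {location: value}
              of the writes processor Px wants to perform

    All processors see the same pre-step memory (concurrent reads
    are allowed), and writes are applied together afterwards.
    """
    snapshot = list(memory)          # every Px reads the old state
    writes = {}
    for x in sorted(Lk):
        for loc, val in inst_k(x, snapshot).items():
            # CREW: no two processors may write the same location
            assert loc not in writes, f"write conflict at M[{loc}]"
            writes[loc] = val
    for loc, val in writes.items():  # apply all writes at once
        memory[loc] = val
    return memory

# Example: processors 1..4 each double one input item in place.
M = [0, 3, 1, 4, 1, 0, 0, 0]         # items stored in M[1]..M[4]
pram_step(M, {1, 2, 3, 4}, lambda x, m: {x: 2 * m[x]})
# M[1]..M[4] are now 6, 2, 8, 2
```

The additional `for ... step ... to ... do` construct of the control program corresponds, in this sequential sketch, to an ordinary loop that issues one such parallel step per iteration.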