DARPA ULTRALOG Final Report - Industrial and Manufacturing ...
S_i^{UB} = P_i \sum_{p \in K_{n(i)}} LI_p / LI_i . (16)
So, a component i can complete by T^{LB} and generate tasks at a constant interval of T^{LB}/L_i from t = S_i^{UB} when it receives tasks at a constant interval of T^{LB}/L_i from t = 0. Now, consider component i's successor s, which has only one predecessor. As the successor receives tasks at a constant interval of T^{LB}/L_s from t = S_i^{UB} or earlier, it can complete by S_i^{UB} + T^{LB}. So, a component e ∈ E (with no successor) can receive tasks at a constant interval of T^{LB}/L_e starting from the maximal task traveling time to the component:
\max_{j \in S_e} \sum_{i \in j} S_i^{UB} (17)
(note that a path j does not include component e itself), or earlier, so that its completion time T_e is bounded as:
T_e \le T^{LB} + \max_{j \in S_e} \sum_{i \in j} S_i^{UB} . (18)
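As a concrete illustration of how (16) and (18) combine, the following sketch computes S_i^{UB} and the completion-time bound T_e. All identifiers here (the dictionaries describing nodes, load indices, and paths) are illustrative assumptions, not part of the report's implementation:

```python
# Illustrative sketch of the bounds in (16) and (18).
# The network description (node_of, components_in, paths_to) is an
# assumed in-memory representation, not from the report.

def start_bound(i, P, LI, node_of, components_in):
    """S_i^UB per (16): P_i times the sum of load indices on i's node,
    divided by i's own load index LI_i."""
    n = node_of[i]
    return P[i] * sum(LI[p] for p in components_in[n]) / LI[i]

def completion_bound(e, paths_to, P, LI, node_of, components_in, T_lb):
    """T_e bound per (18): T_LB plus the maximal path sum of S_i^UB
    over all paths j leading to end component e (e itself excluded)."""
    worst_path = max(
        sum(start_bound(i, P, LI, node_of, components_in) for i in j)
        for j in paths_to[e]
    )
    return T_lb + worst_path
```

For example, with two components sharing one node (equal load indices), the first component's start bound is simply twice its per-task CPU time, and the bound of (18) adds that delay on top of T^{LB}.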
The upper bound T^{UB} is then the maximum of these bounds over all components e ∈ E. Though we formulated the upper-bound performance without considering stress environments, one can easily modify it to reflect them (if each ω_n^s is identifiable or can be assumed). The adequacy criterion is defined as the ratio between T^{LB} and T^{UB}, as in (19). When the criterion is close to one, a network can achieve the lower-bound performance using the proportional allocation policy. Typically, the criterion converges to one as each L_i increases. Conversely, as the criterion approaches zero, the policy becomes more and more inadequate. The example network in Fig. 1 is quite adequate because its adequacy is 0.99 (300/303).
\text{Adequacy} = T^{LB} / T^{UB} (19)
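A minimal numeric check of (19), using the example network's values T^{LB} = 300 and T^{UB} = 303 quoted above:

```python
# Adequacy criterion (19) for the example network: T_LB = 300, T_UB = 303.
T_lb, T_ub = 300.0, 303.0
adequacy = T_lb / T_ub
print(round(adequacy, 2))  # 0.99 -- close to one, so proportional allocation is adequate
```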
So far, we have assumed a hypothetical weighted round-robin server, which is difficult to realize in practice. Our arguments nevertheless hold, because they are based on worst-case analysis and, in reality, the quantum size is infinitesimal compared to the working horizon.

D. Resource control mechanism
Once a network has an adequacy above a certain level (depending on the nature of the network), the proportional allocation is deployed periodically under the MPC framework. Let the current time be t. To update the load index as the system moves on, we slightly modify it to represent the total CPU time for the remaining tasks:
LI_i(t) = R_i(t) + L_i(t) P_i , (20)
in which R_i(t) denotes the remaining CPU time of the task in process and L_i(t) the number of remaining tasks excluding the task in process. After identifying the initial number of tasks L_i(0) = L_i, each component updates the count by counting down as it processes tasks.
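The update rule in (20) can be sketched as follows; the `Component` class and its method names are assumptions for illustration, since the report specifies only the update rule itself:

```python
# Sketch of the load-index update in (20). Class and method names
# are illustrative assumptions, not the report's implementation.

class Component:
    def __init__(self, L0, P):
        self.P = P    # CPU time per task, P_i
        self.L = L0   # remaining tasks excluding the one in process, L_i(t)
        self.R = 0.0  # remaining CPU time of the task in process, R_i(t)

    def start_task(self):
        """Take the next task into process: count L down, reset R to P_i."""
        self.L -= 1
        self.R = self.P

    def run(self, cpu_time):
        """Consume allocated CPU time on the task in process."""
        self.R = max(0.0, self.R - cpu_time)

    def load_index(self):
        """LI_i(t) = R_i(t) + L_i(t) * P_i, per (20)."""
        return self.R + self.L * self.P
```

Note that moving a task into process leaves the load index unchanged (P_i shifts from the L_i(t) term into R_i(t)); only consumed CPU time decreases it.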
Periodically, the resource manager of each node collects the current LI_i(t) values from its resident components and allocates resources in proportion to the indices, as in (21). As the resource allocation policy is purely localized, there is no need for synchronization between nodes. The designed resource control mechanism is scalable, as each node can make decisions independently of the others while requiring almost no computation.
w_i(t) = \omega_{n(i)} \, LI_i(t) / \sum_{p \in K_{n(i)}} LI_p(t) (21)
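A minimal sketch of the per-node allocation in (21); because each node's resource manager needs only its own components' load indices, the function takes nothing but a local dictionary (the function name is an illustrative assumption):

```python
# Sketch of the per-node proportional allocation in (21).
# Each node computes its weights from local load indices only,
# so no cross-node synchronization is required.

def allocate_weights(load_indices, omega_n=1.0):
    """w_i(t) = omega_n * LI_i(t) / sum over p in K_n(i) of LI_p(t),
    for the components residing on a single node."""
    total = sum(load_indices.values())
    return {i: omega_n * li / total for i, li in load_indices.items()}
```

For instance, two components with load indices 30 and 10 on a node with ω_n = 1 receive weights 0.75 and 0.25.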
VI. EMPIRICAL RESULTS

We ran several experiments using discrete-event simulation to validate the designed resource control mechanism.
A. Experimental design

The experimental network is composed of eight components in four nodes, as in Fig. 3. Two components share a resource in N_3 and four components in N_4. Also, ω_n is 1 for all n ∈ N, and CPU is allocated using weighted round-robin scheduling in which the CPU time received by each component in a round equals its assigned weight.
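One round of this weighted round-robin scheme can be sketched as below; the simulator's actual interface is not given in the report, so all names here are illustrative assumptions:

```python
# Sketch of one weighted round-robin round: each component receives
# CPU time equal to its assigned weight, capped by its remaining work.
# Purely illustrative; not the report's simulator code.

def run_round(weights, remaining_work):
    """Advance one round; mutate remaining_work and return the CPU
    time actually consumed by each component in this round."""
    used = {}
    for i, w in weights.items():
        quantum = min(w, remaining_work.get(i, 0.0))
        remaining_work[i] = remaining_work.get(i, 0.0) - quantum
        used[i] = quantum
    return used
```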
Fig. 3. Experimental network configuration. The network is composed of eight components (A_1 through A_8) in four nodes, and the performance can depend on the resource allocation of nodes N_3 and N_4.
We set up ten different experimental conditions, as shown in Table I. We vary the number of root tasks rt_i and the CPU time per task P_i; the distribution of P_i is either deterministic or exponential. When using a stochastic distribution, we repeat each experiment 5 times.
We use three different resource control policies for each experimental condition; Table II shows these control policies. In the round-robin allocation policy (RR), the components in each node are assigned equal weights over time. PA-O and PA-C use the proportional allocation policy in open-loop and closed-loop