Synergy User Manual and Tutorial. - THE CORE MEMORY
FDD is the Fault Detection Daemon. It is activated by an option in the prun command and detects worker process failures at runtime.
Synergy V3.0 requires no root-privileged processes. All parallel processes assume the respective user's security and resource restrictions defined at account creation. Parallel use of multiple computers therefore imposes no additional security threat to the existing systems.
Theoretically, there should be one object daemon for each supported object type. Of the three supported types (tuple space, pipe, and file), the pipe daemon was eliminated by implementing pipes directly in LIL. Thus, Synergy V3.0 has only two object daemons: the Tuple Space Handler (tsh) and the File Access Handler (fah). When activated, the object daemons talk to parallel programs via the LIL operators under the user-defined identity (via CSL). They are potentially resource hungry, but they "live" only on the computers where they are needed and permitted.
Optimal processor assignment is theoretically complex. Synergy's automatic processor-binding algorithm is extremely simple: unless specifically designated, it binds all tuple space objects, one master, and one worker to a single processor. The other processors run the worker-type processes (those with repeatable logic). Since the network is the bottleneck, this binding algorithm minimizes network traffic and thus promises good performance for most applications using the current tuple matching engine. The full implementation of SLM will have a distributed tuple matching engine that promises to meet a wider range of performance requirements.
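The default binding policy described above can be sketched in a few lines. This is an illustrative model only, not Synergy's actual implementation; the function name and the role labels are invented for the example.

```python
def bind(processors):
    """Sketch of the default binding policy: the first processor hosts
    the tuple-space objects, the master, and one worker; every remaining
    processor hosts one worker-type process."""
    binding = {processors[0]: ["tuple-space objects", "master", "worker"]}
    for p in processors[1:]:
        binding[p] = ["worker"]
    return binding
```

Because all tuple-space traffic from the co-located master and worker stays on one host, only the remote workers generate network traffic, which is the intent of the policy.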
Fault tolerance is a natural benefit of the SPP design. Processor failures discovered before a run are automatically isolated. Worker processor failures during a parallel execution are handled in V3.0 by a "tuple shadowing" technique: Synergy V3.0 can automatically recover the lost data from a failed worker with little overhead. This feature brings the availability of a multiprocessor application up to that of a single processor and is completely transparent to application programs.
Synergy provides the basis for automatic load balancing. Optimal load balancing, however, requires adjusting tuple sizes. Tuple size adjustments can follow guided self-scheduling [1], factoring [2], or fixed chunking using the theory of optimal granule size for load balancing [3].
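The first two tuple-sizing strategies can be sketched as chunk-size schedules. These are textbook formulations of guided self-scheduling and factoring (not code from Synergy): guided self-scheduling dispatches ceil(remaining/P) iterations at a time, while factoring dispatches rounds of P equal chunks, each round covering half the remaining work.

```python
def guided_chunks(total, p):
    """Guided self-scheduling: each dispatch takes ceil(remaining / p)
    iterations, so chunks shrink as the work drains."""
    chunks, remaining = [], total
    while remaining > 0:
        c = max(1, -(-remaining // p))   # ceil division
        chunks.append(c)
        remaining -= c
    return chunks

def factoring_chunks(total, p):
    """Factoring: dispatch rounds of p equal chunks, each round
    taking roughly half of the remaining work."""
    chunks, remaining = [], total
    while remaining > 0:
        c = max(1, -(-remaining // (2 * p)))
        for _ in range(p):
            if remaining <= 0:
                break
            take = min(c, remaining)
            chunks.append(take)
            remaining -= take
    return chunks
```

For 100 work items on 4 processors, guided self-scheduling produces a strictly shrinking sequence of tuple sizes, while factoring produces equal-sized batches that halve between rounds; both leave small final chunks to smooth out load imbalance.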
Synergy V3.0 runs on clusters of workstations. This evaluation copy allows an unlimited number of processors across multiple file systems (one binary installation is required per file system).