Synergy User Manual and Tutorial. - THE CORE MEMORY
Synergy Philosophy
Facilitating the best use of computing and networking resources for each application is the key philosophy in Synergy. We advocate competitive resource sharing as opposed to ``cycle stealing.'' The tactic is to reduce processing time for each application, so that multiple running applications fully exploit system resources. Realizing these objectives, however, requires both quantitative analysis and highly efficient tools.
It is inevitable that parallel programming and debugging will be more time consuming than single-thread processing, regardless of how well the application programming interface (API) is designed. Elusive parallel processing results taught us that we must have quantitatively convincing reasons to process an application in parallel before committing to the potential expenses (programming, debugging, and future maintenance).
We use Timing Models to evaluate the potential speedups of a parallel program using different processors and networking devices [13]. Timing models capture the orders of timing costs for computing, communication, disk I/O, and synchronization requirements. We can quantitatively examine an application's speedup potential under various processor and networking assumptions. The analysis results delineate the upper bound of realistic expectations. When applied in practice, timing models provide guidelines for processing grain selection and experiment design.
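As an illustration only (the function and parameter names below are assumptions, not the manual's actual timing models), a coarse timing model can estimate parallel time as sequential work divided among processors plus communication and synchronization overheads:

```python
# A minimal sketch of a coarse timing model. The model form and all
# parameters here are illustrative assumptions, not Synergy's own formulas.

def estimated_speedup(t_comp, t_comm_per_proc, t_sync, p):
    """Estimate speedup on p processors.

    t_comp:          total sequential compute time (seconds)
    t_comm_per_proc: communication cost that grows with each added processor
    t_sync:          fixed synchronization overhead
    """
    t_parallel = t_comp / p + t_comm_per_proc * p + t_sync
    return t_comp / t_parallel

# Speedup rises, peaks, then falls as communication costs dominate:
for p in (2, 4, 8, 16, 32):
    print(p, round(estimated_speedup(100.0, 0.5, 1.0, p), 2))
```

Even this crude model exhibits the behavior the analysis predicts: adding processors helps only until the communication term overtakes the shrinking compute term.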
Efficiency analysis showed that effective parallel processing should follow an incremental coarse-to-fine grain refinement method. Processors should be added only if there is unexplored parallelism, processors are available, and the network is capable of carrying the anticipated load. Hard-wiring programs to processors is efficient only for a few special applications with restricted input, and at the expense of programming difficulties.
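The incremental refinement rule above can be sketched as a simple loop: keep adding processors only while the predicted speedup improves. This is a hypothetical illustration (the `predicted_speedup` callback and its model are assumptions, not part of Synergy):

```python
# Hypothetical sketch of incremental coarse-to-fine processor allocation:
# add processors only while the timing model predicts further speedup.

def choose_processor_count(predicted_speedup, max_available):
    """Return the smallest processor count at which predicted speedup peaks."""
    best_p, best_s = 1, predicted_speedup(1)
    for p in range(2, max_available + 1):
        s = predicted_speedup(p)
        if s <= best_s:   # no further exploitable parallelism; stop adding
            break
        best_p, best_s = p, s
    return best_p

# Example with an illustrative model: 100 s of work, growing comm cost.
model = lambda p: 100.0 / (100.0 / p + 0.5 * p + 1.0)
print(choose_processor_count(model, 32))
```

The stopping condition embodies the text's constraint: processors join the computation only while there is parallelism left to exploit and the overheads do not yet outweigh the gain.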
To improve performance, we took an application-oriented approach in the tool design. Unlike conventional compiler and operating system projects, we build tools to customize a given processing environment for a given application. This customization defines a new infrastructure among the pertinent compilers, operating systems, and the application for effective resource exploitation. Simultaneous execution of multiple parallel applications permits exploiting available resources for all users. This makes the networked processors a credible ``virtual supercomputer.''
An important advantage of the Synergy compiler-operating system-application infrastructure is greater portability than existing systems. It allows parallel programs, once written, to adapt to new programming, processor, and networking technologies without compromising performance.