Presburger Arithmetic and Its Use in Verification


2.1. MULTICORE PARALLELISM: A BRIEF OVERVIEW

ture of symmetric multiprocessors, thus working well on shared-memory machines. Although OpenMP is easier to adapt to multicore hardware than MPI, it was not designed specifically for the multicore platform. Other new parallel frameworks, including the Parallel Extensions (PFX) of the .NET Framework, are expected to exploit multicore computing power better.

2.1.4 Programming Models

Categorized by programming model, parallel processing falls into two groups: explicit parallelism and implicit parallelism [12]. In explicit parallelism, concurrency is expressed by means of special-purpose directives or function calls. These primitives handle synchronization, communication, and task creation, which introduce parallel overheads. Explicit parallelism is criticized for being too complicated and for obscuring the meaning of parallel algorithms; however, it provides programmers with full control over parallel execution, which can lead to optimal performance. The MPI framework and Java's parallelism constructs are well-known examples of explicit parallelism.
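To make the idea concrete, the following sketch (in Python rather than the thesis's F#, and not taken from the thesis) shows the hallmarks of explicit parallelism: the programmer spells out task creation, start-up, synchronization, and completion by hand.

```python
# Illustration of explicit parallelism: the programmer explicitly creates
# tasks, starts them, synchronizes shared state, and waits for completion.
import threading

counter = 0
lock = threading.Lock()  # explicit synchronization primitive

def work(chunk):
    """Sum one chunk and add the result to the shared counter."""
    global counter
    local = sum(chunk)       # each task computes independently
    with lock:               # explicit synchronization point
        counter += local

data = list(range(100))
# Explicit task creation: partition the data over four threads.
threads = [threading.Thread(target=work, args=(data[i::4],)) for i in range(4)]
for t in threads:
    t.start()                # explicit start-up
for t in threads:
    t.join()                 # explicit wait for completion

print(counter)  # 4950, the same as the sequential sum(range(100))
```

The lock, the thread objects, and the joins are exactly the kind of bookkeeping that the text describes as a source of both overhead and fine-grained control.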

Implicit parallelism relies on compiler support to exploit parallelism without additional constructs. Normally, implicit parallel languages do not require special directives or routines to enable parallel execution. Examples of implicit parallel languages include HPF and NESL [12]; these languages hide the complexity of task management and communication, so developers can focus on designing parallel programs. However, implicit parallelism does not give programmers much room for tuning their programs, which can lead to suboptimal parallel efficiency.

In this work, we use F# on the .NET Framework for parallel processing. F#'s parallelism constructs are clearly explicit; in Section 2.2 we show that these constructs are very close to implicit parallelism while still allowing users to control the degree of parallelism.
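Controlling the degree of parallelism while keeping the code close to its sequential form can be sketched as follows (a Python analogy, not the thesis's F# code; the worker function and the bound of 2 are illustrative assumptions):

```python
# Sketch: a pool-based map stays close to sequential code, yet the
# degree of parallelism is bounded explicitly via max_workers.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# max_workers caps the degree of parallelism, analogous to an explicit
# degree-of-parallelism setting in a parallel framework.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The call site reads almost like a sequential `map`, which is the sense in which such constructs sit between explicit and implicit parallelism.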

2.1.5 Important Concepts

Speedup

Speedup is defined as follows [21]:

S_N = T_1 / T_N

where:

• N is the number of processors.

• T_1 is the execution time of the sequential algorithm.

• T_N is the execution time of the parallel algorithm using N processors.

In the ideal case when S_N = N, a linear speedup is obtained. However, superlinear speedup higher than N with N processors might happen. One reason might be cache
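A small worked example of the speedup formula, using the symbols S_N, T_1, and T_N from the definition above (the timing numbers themselves are hypothetical, chosen only for illustration):

```python
# Worked example of the speedup formula S_N = T_1 / T_N.
def speedup(t1, tn):
    """Speedup of a parallel run (time tn) over the sequential run (time t1)."""
    return t1 / tn

T1 = 12.0   # hypothetical sequential execution time, in seconds
T4 = 3.5    # hypothetical parallel execution time on N = 4 processors

S4 = speedup(T1, T4)
# S4 is about 3.43: sublinear, below the ideal linear speedup S_N = N = 4.
print(S4)
```

With these numbers the run is sublinear; only S_4 > 4 would count as the superlinear case discussed above.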
