Encyclopedia of Computer Science and Technology

concurrent programming

Concurrent programming is the organization of programs so that two or more tasks can be executed at the same time. Each task is called a thread. Each thread is itself a traditional sequentially ordered program. One advantage of concurrent programming is that the processor can be used more efficiently. For example, instead of waiting for the user to enter some data, then performing calculations, then waiting for more data, a concurrent program can have a data-gathering thread and a data-processing thread. The data-processing thread can work on previously gathered data while the data-gathering thread waits for the user to enter more data. The same principle is used in multitasking operating systems such as UNIX or Microsoft Windows. If the system has only a single processor, the programs are allocated "slices" of processor time according to some scheme of priorities. The result is that while the processor can be executing only one task (program) at a time, for practical purposes it appears that all the programs are running simultaneously (see multitasking).

Multiprocessing involves the use of more than one processor or processor "core." In such a system each task (or even each thread within a task) might be assigned its own processor. Multiprocessing is particularly useful for programs that involve intensive calculations, such as image processing or pattern recognition systems (see multiprocessing).

Programming Issues

Regular programs written for operating systems such as Microsoft Windows generally require no special code to deal with the multitasking environment, because the operating system itself handles the scheduling. (This is true with preemptive multitasking, which has generally supplanted an earlier scheme where programs were responsible for yielding control so the operating system could give another program a turn.)

Managing threads within a program, however, requires a programming language with special statements. Depending on the language, a thread might be started by a fork statement, or it might be coded in a way similar to a traditional subroutine or procedure. (The difference is that the main program continues to run while the procedure runs, rather than waiting for the procedure to return with the results of its processing.)

The coordination of threads is a key issue in concurrent programming. Most problems arise when two or more threads must use the same resource, such as a processor register (at the machine language level) or the contents of the same variable. Let's say two threads, A and B, each have a statement such as Counter = Counter + 1. Thread A gets the value of Counter (let's say it's 10) and adds one to it. Meanwhile, thread B has also fetched the value 10 from Counter. Thread A now stores 11 back in Counter. Thread B now adds 1 and also stores 11 back in Counter. The result is that Counter, which should be 12 after both threads have processed it, contains only 11. A situation where the result depends on which thread gets to execute first is called a race condition.

One way to prevent race conditions is to let code that deals with a shared resource "lock" the resource until it is finished with it. If thread A can lock the value of Counter, thread B cannot begin to work with it until thread A is finished and releases it. In hardware terms, this can be done on a single-processor system by disabling interrupts, which prevents any other thread from gaining access to the processor. In multiprocessor systems, an interlock mechanism allows one thread to lock a memory location so that it cannot be accessed by any other thread. This coordination can be achieved in software through the use of a semaphore, a variable that can be used by two threads to signal each other when it is safe to resume processing. In this scheme, of course, it is important that a thread not "forget" to release the semaphore, or execution of the blocked thread will halt indefinitely.
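To make the lost update concrete, here is a brief sketch in Java (the language, the class name RaceConditionDemo, and the iteration counts are illustrative choices, not part of the original entry). Two threads each add 1 to a shared counter 100,000 times; the plain increment usually loses updates to the race condition described above, while the same increment performed inside a synchronized block, Java's built-in lock, does not.

// Illustrative sketch: two threads increment two shared counters.
// The unlocked increment can lose updates (a race condition); the
// increment done while holding a lock cannot.
public class RaceConditionDemo {
    static int unsafeCounter = 0;      // plain shared variable, like Counter in the text
    static int lockedCounter = 0;      // same update, but protected by a lock
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter = unsafeCounter + 1;   // read, add 1, write back: not atomic
                synchronized (lock) {                // only one thread may hold the lock at a time
                    lockedCounter = lockedCounter + 1;
                }
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("unsafe counter: " + unsafeCounter);   // usually less than 200000
        System.out.println("locked counter: " + lockedCounter);   // always 200000
    }
}

Note that the lock helps only if every piece of code that touches the shared variable uses it; a single unguarded update reintroduces the race, much as a forgotten semaphore release blocks other threads indefinitely.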
A more sophisticated method involves the use of message passing, where processes or threads can send a variety of messages to one another. A message can be used to pass data (when the two threads don't have access to a shared memory location). It can also be used to relinquish access to a resource that can only be used by one process at a time. Message passing can also be used to coordinate programs or threads running on a distributed system, where different threads may not only be using different processors but running on separate machines (a cluster computing facility).

Programming language support for concurrent programming originally came through devising new dialects of existing languages (such as Concurrent Pascal), building facilities into new languages (such as Modula-2), or creating program libraries for languages such as C and C++. However, in recent years concurrent programming languages and techniques have been unable to keep up with the growth in multiprocessor computers and distributed computing (such as "clusters" of coordinated machines). With most new desktop PCs having two or more processing cores, there is a pressing need to develop new programs that can carry out tasks (such as image processing) using multiple streams of execution. Meanwhile, for very high-performance machines (see supercomputer), the Defense Advanced Research Projects Agency (DARPA) has been working with manufacturers to develop languages better suited to computers that may have hundreds of processors, as well as to distributed systems and clusters. Such languages include Sun's Fortress, intended as a modern replacement for Fortran for scientific applications.

The new generation of concurrent languages tries to automate much of the allocation of processing, allowing programmers to focus on their algorithms rather than on implementation issues. For example, program structures such as loops can be automatically "parallelized," such as by assigning them to separate cores.
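The message-passing style described earlier can be sketched in the same way. In this illustrative Java fragment (the class name, the queue size, and the use of java.util.concurrent.BlockingQueue are assumptions made for the example, not taken from the entry), a data-gathering thread like the one described at the beginning of this entry hands items to a data-processing thread through a queue; the two threads never share a variable directly, and each simply blocks when it has nothing to do.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: a data-gathering thread passes messages to a
// data-processing thread through a blocking queue instead of a shared variable.
public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread gatherer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put("data item " + i);  // blocks if the queue is full
                }
                queue.put("DONE");                // sentinel message: no more data
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread processor = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take();   // blocks until a message arrives
                    if (item.equals("DONE")) break;
                    System.out.println("processed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        gatherer.start();
        processor.start();
        gatherer.join();
        processor.join();
    }
}

The "DONE" sentinel is one simple way for the sender to signal that no more messages are coming; the same channel that carries data also carries the coordination signal, which is the essence of the message-passing approach.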

Further Reading
Anthes, Gary. "Languages for Supercomputing Get 'Suped' Up." Computerworld.com, March 12, 2007. Available online. URL: http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=283477&intsrc=hm_list. Accessed June 24, 2007.
Ben-Ari, M. Principles of Concurrent & Distributed Programming. 2nd ed. Reading, Mass.: Addison-Wesley, 2006.