The GPU Computing Revolution - London Mathematical Society

A KNOWLEDGE TRANSFER REPORT FROM THE LMS AND THE KTN FOR INDUSTRIAL MATHEMATICS

increasing parallelism. Software will therefore need to change radically in order to exploit these new parallel architectures. This shift to parallelism represents one of the largest challenges that software developers have faced. Modifying existing software to make it parallel is often (but not always) difficult. Now might be a good time to re-engineer many older codes from scratch.

Since hardware designers gave up on trying to maintain the free lunch, incredible advances in computer hardware design have been made. While mainstream CPUs have embraced parallelism relatively slowly, because the majority of software takes time to adapt to this new model, other kinds of processors have exploited parallelism much more rapidly, becoming massively parallel and delivering large performance improvements as a result.

The most significant processors to have fully embraced massive parallelism are graphics processors, often called Graphics Processing Units or GPUs. Contemporary GPUs first appeared in the 1990s and were originally designed to run the 2D and 3D graphical operations used by the burgeoning computer games market. GPUs started as fixed-function devices, designed to excel at graphics-specific functions. GPUs reached an inflection point in 2002, when the main graphics standards, OpenGL [68] and DirectX [82], started to require general-purpose programmability in order to deliver advanced special effects for the latest computer games.
The mass-market forces driving the development of GPUs (over 525 million graphics processors were sold in 2010 [63]) drove rapid increases in GPU performance and programmability. GPUs quickly evolved from their fixed-function origins into fully programmable, massively parallel processors (see Figure 2). Soon many developers were trying to exploit these low-cost yet incredibly high-performance GPUs to solve non-graphics problems. The term 'General-Purpose computation on Graphics Processing Units', or GPGPU, was coined by Mark Harris in 2002 [51], and since then GPUs have continued to become more and more programmable.

Today, GPUs represent the pinnacle of the many-core hardware design philosophy. A modern GPU will consist of hundreds or even thousands of fairly simple processor cores. This degree of parallelism on a single processor is usually called 'many-core' in preference to 'multi-core'; the latter term is typically applied to processors with at most a few dozen cores.

It is not just GPUs that are pushing the development of parallel processing. Led by Moore's Law, mainstream processors are also on an inexorable advance towards many-core designs. AMD started shipping a 12-core mainstream x86

Figure 2: The evolution of GPUs from fixed-function pipelines on the left, to fully programmable arrays of simple processor cores on the right [69].
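The many-core programming style described above can be sketched concretely. The following is a minimal Python illustration (not code from this report, and not real GPU code): it mimics the data-parallel "one logical thread per output element" pattern that GPU kernels use, with the thread loop standing in for the hardware's parallel cores. The names `saxpy_kernel` and `launch` are invented for this sketch.

```python
def saxpy_kernel(thread_id, a, x, y, out):
    """One logical GPU 'thread': computes a single element out[i] = a*x[i] + y[i].

    On a real GPU, thousands of these would execute simultaneously,
    one per processor core, rather than in a serial loop.
    """
    out[thread_id] = a * x[thread_id] + y[thread_id]


def launch(kernel, n_threads, *args):
    """Stand-in for a GPU kernel launch: runs the 'threads' serially here."""
    for tid in range(n_threads):
        kernel(tid, *args)


n = 8
x = [float(i) for i in range(n)]   # [0.0, 1.0, ..., 7.0]
y = [1.0] * n
out = [0.0] * n

launch(saxpy_kernel, n, 2.0, x, y, out)
print(out)  # each element depends only on its own index, so all are independent
```

Because each output element is computed independently of the others, this workload maps directly onto hundreds or thousands of simple cores, which is precisely why GPUs excel at it.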
