2018 Annual Report


Narrowing the Data/Compute Gap with Specialized Hardware

Datacenters face a challenge: the quantity of information that needs to be stored and processed is growing faster than the performance of general-purpose processors. For decades, processor performance increased in line with Moore's Law, but today it is unclear whether that trend can be maintained. The shrinking of transistor sizes has slowed significantly, and even though it is still possible to add more transistors to a central processing unit (CPU), using them to create additional cores is unlikely to benefit applications unless those applications are trivially parallelizable.
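The limit on what extra cores can deliver is commonly quantified by Amdahl's Law (not stated in the report, but the standard way to make this argument precise): if only a fraction p of a program's work is parallelizable, the speedup on n cores is bounded by 1 / ((1 - p) + p/n). A minimal sketch:

```python
# Amdahl's Law: upper bound on speedup from n cores when only a
# fraction p of the work is parallelizable.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 64 cores, a program that is 90% parallelizable speeds
# up by less than 9x; the serial 10% dominates the runtime.
print(amdahl_speedup(0.90, 64))   # ~8.8
print(amdahl_speedup(0.99, 64))   # ~39.3
```

Only a program that is almost entirely parallel comes close to the ideal n-fold speedup, which is why adding cores alone does not close the gap for typical applications.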

To change the status quo, we need to investigate how software interacts with the underlying hardware and explore ways in which we could tailor the latter to the application's needs. As an alternative to adding conventional cores, we could use part of the chip for specialized processing elements. When tailored to widely used application domains in datacenters, these elements increase overall processing efficiency and could be used to narrow the gap between data growth and compute capacity.
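The gap compounds over time: if data volume grows at one annual rate and general-purpose compute at a lower one, the ratio between them widens multiplicatively every year. A small illustration (the growth rates below are hypothetical, chosen only to show the shape of the effect, not figures from the report):

```python
# Illustrative model with hypothetical growth rates: the ratio of
# data volume to compute capacity compounds year over year.
def gap_factor(data_growth: float, compute_growth: float, years: int) -> float:
    """Ratio of data volume to compute capacity after `years`,
    with both quantities normalized to 1.0 today."""
    return ((1 + data_growth) / (1 + compute_growth)) ** years

# E.g., with data growing 30%/year but compute only 5%/year,
# the relative gap nearly triples within five years:
print(gap_factor(0.30, 0.05, 5))   # ~2.9
```

Specialized processing elements attack the denominator of this ratio: by raising effective compute capacity per chip for common datacenter workloads, they slow the widening of the gap.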

There are already several types of programmable hardware devices appearing in datacenters and consumer clouds, which makes this an exciting time to be working in the field of systems. Emerging low-latency networks with programmable network interface cards, for instance, allow distributed applications to change their communication model, and various types of programmable hardware accelerators allow compute-intensive tasks to be carried out faster or in a more energy-efficient manner. The shift away from a "CPU-only" view, however, requires us to devise better methods for software to take advantage of, or even to directly drive the design of, novel hardware features.

[Figure: Stagnating CPU performance (approximated here by single-core frequency) is limiting our ability to process the increasing amounts of data we produce. Using more specialized hardware is one promising direction to close the gap between data and computation.]

At our institute, we explore research questions related to integrating programmable hardware accelerators in data management systems that suffer from various forms of data movement bottlenecks (e.g., large-scale distributed databases, blockchains, etc.) and in emerging distributed data-intensive applications that are often bound by the processing power of CPUs (e.g., business analytics, machine learning, etc.). The most important challenges of this research direction revolve around ensuring that, while we benefit from novel hardware, the flexibility, reliability, and security guarantees of applications are not impacted. In our exploration, we employ rigorous analysis methods, build proof-of-concept software systems, and even prototype specialized functionality in hardware, using Field Programmable Gate Arrays (FPGAs).
