
HTM - Hierarchical Temporal Memory on Many-cores

T. Nordström, D. Hammerstrom, Zain-ul-Abdin, J. Duracz, and B. Svensson

Centre for Research on Embedded Systems

Introduction

An increasingly important aspect of embedded computing is the processing and understanding of noisy real-world data, and then making decisions and taking timely actions based on those data. Consequently, various kinds of intelligent computing structures are being investigated as important building blocks in embedded system design.

One very promising algorithm is Hierarchical Temporal Memory (HTM), which is being developed by Numenta, Inc. and is already being used in a number of real applications. Being based on more biological kinds of models, HTM is massively parallel, but it is also computationally intensive, and as it is integrated into real applications it is starting to run into performance limitations. The goal of this CERES project is to explore suitable hardware support for the acceleration of Hierarchical Temporal Memory.

HTM Learning

The Cortical Learning Algorithm (CLA) is a memory system that learns sequences of patterns and makes predictions. When an HTM model is exposed to a stream of data, the CLA predicts what is likely to happen next, similar to how you predict the next note in a familiar song or the next word someone is likely to say in a common phrase. In addition, the CLA modifies its memory with each new record, so HTM models continually adapt to reflect the most recent patterns.
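
As a toy illustration of this predict-then-learn loop (not Numenta's CLA itself, whose sequence memory is far richer), the C sketch below keeps a first-order transition memory, predicts the likeliest successor of each symbol, and updates its counts with every new record; the alphabet and input stream are invented for the example.

    /* Toy predict-then-learn loop in the spirit of the CLA: predict
     * the next element of a stream, then update memory with the
     * observed record. A first-order stand-in, not the CLA. */
    #include <stdio.h>

    #define SYMBOLS 4  /* size of the (invented) input alphabet */

    static unsigned counts[SYMBOLS][SYMBOLS];  /* transition memory */

    /* Most likely successor of symbol s, given what was seen so far. */
    static int predict(int s)
    {
        int best = 0;
        for (int t = 1; t < SYMBOLS; t++)
            if (counts[s][t] > counts[s][best])
                best = t;
        return best;
    }

    int main(void)
    {
        int stream[] = {0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3};
        int n = sizeof stream / sizeof stream[0];

        for (int i = 0; i + 1 < n; i++) {
            int cur = stream[i], next = stream[i + 1];
            printf("saw %d, predicted %d, actual %d\n",
                   cur, predict(cur), next);
            counts[cur][next]++;  /* adapt with each new record */
        }
        return 0;
    }

After one pass through the repeating stream, the memory has seen every transition once and the predictions become correct, mirroring how an HTM model tracks the most recent patterns.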

Next Steps

Since memory is critical in HTM, as in most artificial neural network models, we will focus our effort on many-core mapping strategies aimed at optimizing memory management. In our cooperation with Portland State University, PSU has been focusing on FPGA and GPU implementations, while HH has focused on multi-core and Ambric/Adapteva “many-core” style parallelism. As a next step we hope to compare these different implementations.

Project Key Data

Partners: Halmstad University, Portland State University, Numenta, Inc., Nethra Imaging, Inc., and Adapteva, Inc.

Duration: Sep. 2011 – Dec. 2013

Funding: CERES+ project, volume: 820 kSEK

Contact: Tomas Nordström, HH

HTM Structure

The HTM-CLA is a highly detailed model of a layer of cells in the neocortex. In a typical CLA implementation there are 2000 columns of simulated neurons (one per output bit of the spatial memory structure) and twenty simulated neurons per column, giving each CLA some 40,000 neurons. Each neuron has dozens of non-linear dendrite segments and potentially thousands of synapses.
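
A rough C sketch of the data layout this implies is given below; the column and cell counts come from the text, while MAX_SEGMENTS and MAX_SYNAPSES are assumed placeholders standing in for "dozens" and "potentially thousands". Running it prints the resulting memory footprint, which illustrates why memory management dominates the mapping problem.

    /* Sketch of the per-CLA data layout described above. Column and
     * cell counts are taken from the text; MAX_SEGMENTS and
     * MAX_SYNAPSES are illustrative stand-ins for "dozens" and
     * "potentially thousands". */
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_COLUMNS      2000  /* one per spatial-memory output bit */
    #define CELLS_PER_COLUMN 20
    #define MAX_SEGMENTS     32    /* "dozens" of dendrite segments */
    #define MAX_SYNAPSES     128   /* per segment; assumed split of the
                                      thousands of synapses per cell */

    typedef struct {
        uint16_t presynaptic_cell; /* index of the source cell */
        uint8_t  permanence;       /* connection strength, 0..255 */
    } Synapse;

    typedef struct {
        Synapse  synapses[MAX_SYNAPSES];
        uint16_t num_synapses;
    } Segment;

    typedef struct {
        Segment segments[MAX_SEGMENTS];
        uint8_t num_segments;
        uint8_t active, predictive;  /* per-timestep state flags */
    } Cell;

    typedef struct {
        Cell cells[CELLS_PER_COLUMN];
    } Column;

    int main(void)
    {
        printf("cells per CLA : %d\n", NUM_COLUMNS * CELLS_PER_COLUMN);
        printf("bytes/column  : %zu\n", sizeof(Column));
        printf("memory per CLA: %zu MB\n",
               NUM_COLUMNS * sizeof(Column) >> 20);
        return 0;
    }

Even with these modest per-segment limits, the sketch yields hundreds of megabytes per CLA, far beyond what fits in the small local memories of typical many-core processors.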

HTM on Adapteva

Master's students Zhou Xi and Luo Yaoyao have implemented the CLA on our Adapteva development board. They have investigated how to map the HTM-CLA onto Adapteva's Epiphany many-core architecture and have run experiments to measure the speedup and efficiency of this mapping. Preliminary results show almost perfect scalability when mapping HTM onto Adapteva.
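
The text does not describe the students' mapping in detail, but a natural starting point is to distribute the columns block-wise over the cores. The sketch below shows such a partitioning in plain C; the 16-core count matches Adapteva's Epiphany-16 and is an assumption here, and no Epiphany SDK calls are used.

    /* Hypothetical block-wise partitioning of CLA columns over
     * Epiphany cores; the core count is assumed and the actual
     * student implementation may differ. */
    #include <stdio.h>

    #define NUM_COLUMNS 2000
    #define NUM_CORES   16

    int main(void)
    {
        /* Spread columns as evenly as possible: the first `rem`
           cores each take one extra column. */
        int base = NUM_COLUMNS / NUM_CORES;
        int rem  = NUM_COLUMNS % NUM_CORES;

        for (int core = 0, first = 0; core < NUM_CORES; core++) {
            int count = base + (core < rem ? 1 : 0);
            printf("core %2d: columns %4d..%4d\n",
                   core, first, first + count - 1);
            first += count;
        }
        return 0;
    }

Because each column's cells, segments, and synapses travel with it, such a partition keeps each core's working set local, which matters given the Epiphany's small per-core local memories.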

