
An Address Event Representation (AER) Multicast Router for Colour Vision

A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy

Alex Jameson

School of Electrical, Electronic and Computer Engineering,
Faculty of Science, Agriculture and Engineering,
University of Newcastle upon Tyne, NE1 7RU

July 2012


Acknowledgements

I am very grateful to my supervisors, Dr. Graeme Chester and Professor Alex Yakovlev. Without the continual help and support of my primary supervisor, Dr Chester, this report would not have been finished to such a high standard, and Professor Yakovlev has also been a constant source of moral support.

Dedication

Dedicated to my immediate family, past and present.

Statement of Copyright

The copyright of this report rests with the author and/or any one of his supervisory team. Quotations from it may be published without the author's prior written consent, subject to the consent of any one of his supervisory team, but information from it should be acknowledged.



Abstract/Foreword

This document proposes a prosthetic vision application utilising an AER (Address Event Representation) [1-4] multicast router. It is novel in that it deals with colour signal propagation from a `scene' of pixels in the form suggested by the trichromacy theory: Red (R), Green (G) and Blue (B) signals flowing into the optic nerve [5-15]. The context is the transmission of information between a sender chip and a receiver chip. Power-hungry processing will be done within the sender chip, and the multicast router at the receiver chip will route the AER transmissions via driver circuitry to the relevant locations of the retinal implant.

The fundamental concept is to input a `scene' into a sender chip; the visual information from that scene is then propagated by transmitting the events of that scene in Address Event Representation (AER) format from the sender chip, via transmission lines, to a receiver chip. The receiver chip will be capable of receiving the event data and recreating the original scene, via implant driver circuitry, from the information decoded from the received data. The test setup for this combination is shown in Figure 0.

[Figure 0: block diagram — input image signal → SENDER (chip) → AER transmission lines → RECEIVER (chip) → display monitor]

Figure 0 Test setup for AER propagation



The main advantage of AER in the context of aiding artificial vision for humans is that events occurring in the input scene are represented and propagated as impulses or spikes, which is the method used by the fibres of the optic nerve. The maximum spike rate represents maximum hue intensity when referring to the RGB true-colour format. Note that propagation of signals in the optic nerve is very energy efficient, and it can be expected that the neuromorphic AER communication protocol will be similarly efficient.



Contents

An Address Event Representation (AER) Multicast Router for Colour Vision ........................ i
Acknowledgements ........................ i
Dedication ........................ i
Abstract/Foreword ........................ ii
Chapter 1 Introduction/research aim ........................ 10
1.1 Introductory Literature Review ........................ 10
1.1.1 Sub retinal ........................ 11
1.1.2 Epi retinal ........................ 14
1.2 Conceptual overview ........................ 18
1.3 Sub retinal versus Epi-retinal ........................ 21
1.4 Introduction to stimulators ........................ 21
1.5 Thesis structure ........................ 22
Chapter 2 Early directions ........................ 24
2.1 Introduction to initially envisaged project ........................ 24
2.2 Neural network representation ........................ 25
2.2.1 The first artificial neuron ........................ 26
2.2.2 Hebb's rule ........................ 28
2.2.3 Adaline and the adaptive linear combiner: a bipolar case of above ........................ 28
2.2.4 Neural Networks – perceptron training ........................ 30
2.2.5 Linear separability ........................ 32
2.2.6 The Delta Rule (used to determine separability) ........................ 35
2.3 Address Event Representation (AER) ........................ 37
2.3.1 The message packet ........................ 37
2.4 Comparison to Frame Based Representation (FBR) ........................ 38
2.4.1 Comparison Criteria ........................ 40
2.4.2 Address Event Representation ........................ 41
2.5 MATLAB sub-retinal simulation ........................ 42
2.5.1 MATLAB representation ........................ 42
2.5.2 Program operation ........................ 43
2.5.1 Files in Appendix A ........................ 44
2.5.2 Use of the program ........................ 44
2.6 Epi-retinal; method of choice ........................ 45
2.6.1 MATLAB representation of epi_retinal approach ........................ 47
2.6.2 FPGA representation of epi_retinal approach ........................ 48
2.7 Background to stimulators ........................ 49
2.7.1 Charge considerations ........................ 49
2.7.2 Achromatic current requirements ........................ 49
Chapter 3 Concepts ........................ 50
3.1 Neural concepts ........................ 52
3.2 Biological Neuron operation ........................ 53
3.3 Natural limit for pulse duration ........................ 54
3.4 Retinal colour perception ........................ 55
3.5 Human visual processing ........................ 57
3.6 Current Retinal Implants ........................ 61
3.6.1 Retinal structure ........................ 61
3.6.2 Retinal operations ........................ 62
3.6.3 Commonality of sub retinal and epi retinal approaches ........................ 65



3.6 AER concept in context ........................ 66
3.7 AER data format ........................ 67
3.8 Behavioural test image to be used ........................ 71
3.8.1 Other test images ........................ 74
3.9 Biphasic pulse ........................ 74
3.10 Concluding this chapter ........................ 76
Chapter 4 Sender chip ........................ 79
4.1 Sender concepts ........................ 80
4.2 Sender AER format ........................ 85
4.2.1 Alternative (discounted) sender AER format ........................ 86
4.3 Test setup ........................ 87
4.4 Sender chip reports ........................ 87
4.4.1 Power Analysis ........................ 88
4.5 Sender schematic ........................ 88
4.6 Chapter summary ........................ 91
Chapter 5 Receiver chip ........................ 94
5.1 Clocking calculations (sender) ........................ 94
5.2 Power Analysis ........................ 96
5.3 Receiver schematics ........................ 97
5.4 Clocking calculations (receiver) ........................ 99
5.5 Programming components descriptions ........................ 99
5.5.1 Incoming (short) AER stream ........................ 100
5.5.2 Production of colour data streams ........................ 100
5.5.3 Convert to long format ........................ 100
5.5.4 Production of output streams ........................ 100
5.6 Summary of FPGA resource utilisation ........................ 101
5.7 Post processing for electrodes ........................ 101
5.8 Stimulator Retinal Interface ........................ 103
5.8.1 Factors affecting current requirement for stimulation ........................ 103
5.8.2 Electrode size and positioning in current retinal approaches ........................ 104
5.8.3 Recent developments ........................ 104
Chapter 6 Concluding chapter ........................ 105
6.1 Conclusions and results ........................ 105
6.2 Implementation of design ........................ 106
6.2.1 DAC operation ........................ 107
6.2.2 Envisaged retinal array ........................ 110
6.2.3 Connecting the detailed engineering to the overall concept ........................ 111
6.2.4 Differences between proposed technique and current retinal implants ........................ 112
6.3 Lower power FPGA ........................ 113
6.4 AER between sender and receiver chip ........................ 115
6.5 Post Processing Interface ........................ 116
6.6 Pre processing ........................ 117
6.7 Future work ........................ 117
6.7.1 Surgical implantation ........................ 118
6.7.2 Inductive linking considerations ........................ 119
6.7.3 Initial configuration ........................ 119
Bibliography ........................ 120
Appendix A MATLAB ........................ 133
Appendix B FPGA Test setup ........................ 142
Appendix C FPGA Sender chip ........................ 150
Appendix D FPGA Receiver chip ........................ 154



Appendix E FPGA Outputs ........................ 171
Appendix F Alternative literature review ........................ 189



List of figures

Figure 1 Layer structure ........................ 13
Figure 2 Eye diagram (image: Wikimedia Commons public domain) ........................ 14
Figure 3 Mapping of Image Pixels to Stimulus Electrode Signals ........................ 17
Figure 4 f_outer = 50(f_packet) ........................ 19
Figure 5 f_spike_clock = 5(f_outerpulse) ........................ 20
Figure 6 f_partial_spike_clock = 4(f_innerpulse) ........................ 20
Figure 7 replacing the retina ........................ 25
Figure 8 pixel by pixel mapping ........................ 26
Figure 9 the first neuron model ........................ 27
Figure 10 Adaline detail showing the ALC feeding to a bipolar output function ........................ 30
Figure 11 neuronic detail ........................ 31
Figure 12 (a) shows linear discrimination as opposed to (b), (c) and (d) ........................ 33
Figure 13 A hyperplane separating two classes/clusters of data ........................ 34
Figure 14 message packet ........................ 37
Figure 15 I frame sequencing ........................ 40
Figure 16 colour dichotomy ........................ 42
Figure 17 Flowchart of MATLAB program of Appendix A ........................ 43
Figure 18 program output ........................ 45
Figure 19 Comparing epi_retinal approaches ........................ 46
Figure 20 Simulation context diagram (level 0) ........................ 47
Figure 21 FPGA view of processing ........................ 48
Figure 22 AER Delivery ........................ 52
Figure 23 Biological neuron ........................ 53
Figure 24 Action Potential ........................ 54
Figure 25 Retinal colour processing (original) ........................ 56
Figure 26 Four Colour Opponent Theory ........................ 60
Figure 27 Retinal Structure ........................ 63
Figure 28 Receptive field example ........................ 65
Figure 29 showing fifty pulses/s (representing maximum intensity) and 5 pulses/s (representing minimum intensity) ........................ 67
Figure 30 50 pulses/s and 5 pulses/s for one plane of virtual wire of test scene ........................ 70
Figure 31 1024 pixel behavioural test image ........................ 72
Figure 32 Colour composition ........................ 73
Figure 33 sixteen pixel image ........................ 74
Figure 34 Biphasic pulse compared to `spike' ........................ 76
Figure 35 Correspondence of pixels to colour ........................ 78
Figure 36 Envisaged System ........................ 79
Figure 37 T_ae versus fps ........................ 80
Figure 38 forming AER stream ........................ 83
Figure 39 Forming AER stream ........................ 83
Figure 40 sender program overview ........................ 84
Figure 41 Display of AER test image ........................ 87
Figure 42 sender chip power analyser screenprint ........................ 88
Figure 43 sender_rtl_16 ........................ 90
Figure 44 up to 1024 image i.e. 32 by 32 ........................ 92
Figure 45 (1048576) image limit i.e. 1024 by 1024 ........................ 93
Figure 46 Receiver block diagram ........................ 94
Figure 47 receiver power analysis ........................ 96



Figure 48 receiver rtl_16 ........................ 98
Figure 49 Outputting to electrodes ........................ 102
Figure 50 Shift (reference) register ........................ 106
Figure 51 Enable DAC ........................ 107
Figure 52 An R-2R ladder network D/A converter ........................ 108
Figure 53 A bipolar D/A converter ........................ 109
Figure 54 Axon clearance ........................ 110
Figure 55 Retinal implant ........................ 113
Figure 56 AP propagating through optic nerve ........................ 117
Figure 57 Practical setup ........................ 118
Figure 58 sender design summary ........................ 173
Figure 59 design summary ........................ 183



List of tables

Table 1 Correlation between fps and AER packet frequency ........................ 21
Table 2 `and' and `or' truth tables ........................ 27
Table 3 `xor' and `nxor' truth tables ........................ 28
Table 4 row and column addressing ........................ 43
Table 5 Receptive ON/OFF operation ........................ 64
Table 6 T_spike related to fps ........................ 81
Table 7 examples of clocking hierarchy ........................ 82
Table 8 short AER format ........................ 85
Table 9 example correspondences ........................ 86
Table 10 electrode requirement ........................ 92
Table 11 clocking calculations ........................ 95
Table 12 simulation clocking ........................ 95
Table 13 receiver calculations ........................ 99
Table 14 Receiver chip FPGA resource data ........................ 101
Table 15 Control signals to enable DAC ........................ 107
Table 16 IGLOO product table ........................ 115
Table 17 image size versus bit length ........................ 115



Chapter 1 Introduction/research aim

The approach taken in this work is to determine and then decide how to implement a retinal prosthesis capable of restoring sufficient vision to a person suffering visual impairment to improve their quality of life. My goal here is not to implant an actual prosthesis but to evaluate ideas about how to communicate the signals to the prosthesis efficiently. The boundaries of my study are to behaviourally programme, in a Virtex 5 FPGA [16], a test image of 1024 pixels and to structurally implement, in terms of VHDL programming using the Xilinx ISE platform, an image of 16 pixels as proof of concept. The limitation of the structural implementation is that the output of the FPGA will further require a post-processing interface, i.e. the microstimulator or driver portion [17-22], which typically resides on the retinal implant itself and enables the signals output from the FPGA to control the timing of the charge delivered to the electrodes of the retinal implant array [23-30]. Although ten percent of the population suffer from visual impairment (in the UK, around six million people), only ten percent of those suffer from AMD (age-related macular degeneration), which can also affect young people, or RP (retinitis pigmentosa), the family of related diseases to which this study applies. Although this represents less than one million people in the UK, the European and worldwide market is clearly much bigger. It also bears saying that the gift of sight is priceless for each person affected, to a greater or lesser degree.

1.1 Introductory Literature Review

The following subsections pick out the differing approaches from the relevant literature, and also the commonality of those approaches where it exists.



1.1.1 Sub retinal

The sub retinal approach is to replace the retina with an array of micro photodiodes underneath the retina (the side adjacent to the choroid) to channel natural sunlight [31-40]; the concept is to replicate the processing of the retina. Essentially, a subretinal implant would be inserted into the space normally occupied by photoreceptors in a healthy retina. The array of micro photodiodes then provides the stimulation data to the retinal implant. Most, if not all, sub retinal approaches rely on converting the colours of light to greyscale prior to stimulation [41]. This has an inherent flaw, as the biological retina actually processes incoming colour [42] in terms of four perceptions of colour, namely red, green, blue and yellow, and processes those colours to target `red' cones, `green' cones and `blue' cones. These cones in turn propagate the colour intensity of each of those planes of colour to specific retinal ganglion cells arranged in a regular topographical arrangement. The seriousness of this inherent flaw depends quite literally on your point of view, inasmuch as to be truly biomimetic, i.e. to imitate nature, a retinal prosthesis should target cones for colour vision. However, it seems to be received wisdom that achromatic targeting is adequate. The technological limitations driving this approach have been twofold: firstly electrode size, and secondly the extra wiring required for the colour approach. Until recently electrode size has not been small enough, in vivo, to accommodate the average parvocellular cell size of 10 µm and in some cases the average magnocellular cell size of 80 µm [43]; i.e. although an electrode size of 10 µm has been utilised in vitro, electrode size in vivo has typically been 50 µm or in some cases 100 µm. As the foveola (foveal pit) extends to 200 µm diameter in the centre of the fovea centralis, where the predominance of cones governs visual acuity, it is nonsensical to map those signals out to 50 µm electrodes for colour vision [5-7, 9, 10, 12-14, 44-48], where the RGC axons are about 1 µm in diameter [49]. In the foveola (foveal pit) there is a one-to-one correspondence from cone to ganglion cell via the bipolar cell [50], implying that within that limitation retinal processing would concern only colour. Outside that limitation sub retinal processing would need to take into account stimulation biologically initiated from rods, horizontal cells, amacrine cells etc. and hence would be more complex than the epiretinal approach described in the next subsection.



Figure 1 Layer structure



N.B. the photoreceptors of the foveola (foveal pit) feed to the nerve fibres within the blind spot or optic disc. The optic disc is an oval of 1.76 mm by 1.92 mm, giving an area of 2.66 mm² (http://www.websters-onlinedictionary.org/definitions/Optic%20disc). The current state of technology using a high-density electrode array gives ≈ 100 electrodes/mm² [29], implying a capability of over 200 electrodes for stimulating RGCs with an epiretinal implant.

Figure 2 Eye diagram (image: Wikimedia Commons public domain)

1.1.2 Epi retinal

Epiretinal refers to the side of the retina that faces the vitreous [51]; essentially, an epiretinal implant would rest on the inner limiting membrane of the retina [52]. Whereas a sub retinal approach relies on stimulation data being supplied by light falling on an array of photocells, the epiretinal approach relies on electrical stimulation data being provided from an external camera. The most well-known artificial retina/retinal prosthetic is that developed by the "Artificial Retina Project" [53], described in the following way: "A pair of glasses with a small video camera is worn by the patient. The video captured by the camera is sent wirelessly to a belt pack containing a microprocessor that processes the video signal. This processed video signal is sent to an antenna in the eye. The antenna is connected to an array of electrodes that have been implanted directly inside the eye on top of the old retina. The array of electrodes transmits signals that directly stimulate optic nerve cells that are responsible for sending images to the brain's vision centers." The `sender' chip of the system developed here will be that part of the system that processes the camera video signal. Address Event Representation (AER) will be the processed video signal, which is then sent to the `receiver' chip housed in close proximity to the implanted retinal array. The receiver chip will, after post processing of the stimulation data, deliver biphasic current pulses to stimulate the retinal ganglion cells (RGCs) via the electrodes of the retinal implant. Because image size, and hence event activity, increases the computational load quadratically, only a 32 by 32 (1024 pixel) image will be demonstrated behaviourally. Also, due to the FPGA resource being used, this prototype implementation will be restricted to a 4 by 4 (16 pixel) image, implying 48 electrodes to be driven, although with a different resource an 8 by 8 (64 pixel) image, implying 192 electrodes to be driven, could easily be accommodated within the present 256-wire limit. Should this limit reach circa 1000, then a 16 by 16 (256 pixel) image, implying 768 electrodes, could also easily be accommodated. An implementation of 32 by 32 (1024 pixels), suggesting 3072 electrodes to be driven, implies stimulating 3072 RGCs, which would still be within the foveal pit, the area receptive to cones and hence colour stimulation.
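The electrode counts quoted above follow directly from three colour planes per pixel; a minimal MATLAB sketch of the arithmetic, checked against the 256-wire and circa-1000-wire limits mentioned in the text:

% Electrode count for square N-by-N images: three colour planes per pixel.
sides  = [4 8 16 32];              % image side lengths considered in the text
pixels = sides.^2;                 % 16, 64, 256, 1024 pixels
electrodes = 3 * pixels;           % 48, 192, 768, 3072 electrodes
wire_limits = [256 1000];          % present limit and the anticipated circa-1000 limit
for k = 1:numel(sides)
    fprintf('%4d x %4d image: %7d pixels -> %7d electrodes (fits 256 wires: %d, fits 1000 wires: %d)\n', ...
        sides(k), sides(k), pixels(k), electrodes(k), ...
        electrodes(k) <= wire_limits(1), electrodes(k) <= wire_limits(2));
end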

1.1.3 Commonality e.g. AER & related issues

Sub retinal and epiretinal approaches both eventually deliver biphasic pulses to the retinal implant to drive the electrodes, which in the epiretinal case are in contact with the axons of the ganglion cells [54], [49], [55], [56], [57], [58]. Medical experiments have indicated that the equivalent impedance of retinal tissue affected by Retinitis Pigmentosa (RP) or Age Related Macular Degeneration (AMD) is about 10 kΩ [53]. The default loading for the Virtex 5 FPGA is 50 Ω; although this can be altered, it is unlikely that the current required to support a charge of 100 nC at the electrode tip can be accommodated. On this basis a post processing interface is required as additional driving circuitry for the electrodes of the retinal implant.

Address Event Representation (AER) [59], [60], [61], [62] has been chosen as the communication protocol between the sender chip and the receiver chip for two reasons. (1) Identification of each pixel's worth of information: each individual pixel is identified by its own unique address, determined by its position in the image to be transmitted. This enables each AER packet to be tracked and hence routed to its destination electrode driver. (2) Neuromorphic capability, in the sense that AER is widely perceived as the communication protocol of choice for representing the spikes of biological neural networks and, by implication, artificial neural networks (ANN) [63-89]. As biological spiking is being represented, each AER [90] packet must contain information for pulse length, amplitude, frequency and form. An AER packet as defined in the "Extended Address Event Representation Draft Standard" (http://www.stanford.edu/group/brainsinsilicon/documents/methAER.pdf) has an address with a payload; in other words, each specific event has an address associated with it. In its most fundamental form this would entail sending an address with a payload of one `spike', where a spike either occurs or not. In this implementation the AER packet will be sent with an address having a payload of 50 potential spikes, specifically an address with a payload of a pulse count. After transmission from the sender chip to the receiver, i.e. at reception, the AER packet will be translated from an address with a pulse-count payload to an address with 50 bits for each colour plane, representing the occurrence or non-occurrence of a spike.
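The translation just described — an address with a pulse-count payload at the sender, expanded to an address with a 50-bit spike train per colour plane at the receiver — can be illustrated in a few lines of MATLAB. This is only a sketch: the linear mapping of 8-bit intensity onto at most 50 pulses, and the even spacing of spikes within the 50 slots, are assumptions made here for illustration; the thesis fixes only the 50-slot payload structure.

% Assumed sender-side encoding: one AER packet per pixel = [address, pulse counts for R,G,B].
rgb = uint8([255 128 0]);                  % example 24-bit colour for one pixel
address = 5;                               % pixel address within the image (assumed value)
counts = round(double(rgb) / 255 * 50);    % intensity -> pulse count, maximum 50 spikes/s
packet = [address counts];                 % short-format AER packet

% Receiver side: expand each pulse count into 50 time slots of spike/no-spike.
slots = 50;
spike_trains = zeros(3, slots);            % one 50-bit train per colour plane
for plane = 1:3
    n = packet(1 + plane);                 % pulse count for this plane
    if n > 0
        idx = round(linspace(1, slots, n));    % assumed even spacing of the n spikes
        spike_trains(plane, idx) = 1;
    end
end
% The row sums of spike_trains equal the transmitted pulse counts (50, 25 and 0 here),
% and the address field tells the multicast router which electrode driver to target.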

The reasoning behind this protocol is that each pixel's worth of information (AER packet) can be routed to the correct destination after transmission; Figure 3 illustrates this concept for a two by two image, i.e. four pixels (synonymous with four AER packets), and hence for the three planes of colour twelve electrodes must be driven.

[Figure 3: each pixel maps to three stimulus electrode signals — Pixel_1 to Pixel_4, each with a red, green and blue signal, giving twelve electrode signals in all]

Figure 3 Mapping of Image Pixels to Stimulus Electrode Signals



Typically AER is used as a multi-sender, multi-receiver asynchronous protocol, whereas in this epiretinal approach to a retinal prosthesis application its neuromorphic capability is used as a timed, synchronous protocol. In other words, as there is only a single image to be transmitted (although each image is composed of multiple pixels), this time-multiplexed synchronous protocol will not have contention between AER packets. So, despite the image signal eventually being multicast to many electrodes, an asynchronous protocol enabling handshaking and avoiding contention is neither necessary nor warranted, and omitting it eliminates the overheads associated with such extra processing.

1.2 Conceptual overview

To accommodate the initial test setup described in the abstract, a frame rate of 25 frames per second (fps) was chosen; this corresponds to the UK TV camera standard. The UK TV standard, as with the 25p video format, deals with persistence of vision, where an after-image of a stimulus remains long enough for the phi phenomenon (an optical illusion) to occur, i.e. the perception of movement. At 1 fps, transmitting each pixel's worth of information would take 1 s divided by the pixel count of the image. So for a four by four image, i.e. sixteen pixels, the pixel time would equal 1/16 s, i.e. 0.0625 s, meaning 16 Hz for the frequency of AER packets, which is synonymous with the pixel frequency. Each AER packet contains the pixel address and a payload, e.g. a pulse count corresponding to the colour information, during the AER transmission. At 25 fps the frequency of AER packets, for this example, would be 25 × 16 = 400 Hz. The frequency of AER packets (f_packet) is the frame rate multiplied by the image size in pixels. There is a linear relationship between the frame rate for any particular image size and the AER packet frequency that needs to be set, e.g. 50 × 16 = 800 Hz, and for a 1024 pixel image at 25 fps the AER packet frequency would equal 25.6 kHz.



For each colour plane of maximum intensity (i.e. 255 for 24-bit colour) the AER packet time (T_packet) [e.g. at 400 Hz this will be 0.0025 s] can be conceived of as having 50 time slots available to accommodate each spike and its `off time' before another spike makes sense. The concept of 50 time slots is derived from the real-time firing rate (simulation frequency) for maximum intensity of pulsing, i.e. 50 spikes/second [54, 56, 91, 92]. This `outerpulse' time will be T_packet/50, giving an outerpulse frequency of 50 × f_packet [e.g. at 400 Hz this will be 20 kHz].

[Figure 4: real-timing/outerpulse relationship — a pixel_clock period of 1000 ms (T_pixel) divided into 20 ms (T_outer) outerpulses]

Figure 4 f_outer = 50(f_packet)

Each outerpulse will translate into an `innerpulse', i.e. the portion representing the biological action potential (the spike, a biphasic pulse) and the associated `off time'.



[Figure 5: real-time/innerpulse relationship — one 20 ms outerpulse contains five 4 ms spike_clock (innerpulse) periods]

Figure 5 f_spike_clock = 5(f_outerpulse)

Note that each innerpulse will be represented by a four-part biphasic pulse, requiring a partial_spike_clock to form each of its four component parts.

[Figure 6: real-time/biphasic-pulse relationship — one 4 ms spike_clock (innerpulse) period contains four 1 ms partial_spike_clock periods forming the +, 0 and − phases of the biphasic pulse]

Figure 6 f_partial_spike_clock = 4(f_innerpulse)



Num. rows  Num. columns  pixel_numbers (f_pixel)  T_pixel {secs}  f_aer_pkt @1fps  f_aer_pkt @25fps  f_aer_pkt @50fps  f_aer_pkt @100fps
1          2             2                        5.00E-01        2                50                100               200
2          2             4                        2.50E-01        4                100               200               400
4          4             16                       6.25E-02        16               400               800               1600
8          8             64                       1.56E-02        64               1600              3200              6400
16         16            256                      3.91E-03        256              6400              12800             25600
32         32            1024                     9.77E-04        1024             25600             51200             102400
64         64            4096                     2.44E-04        4096             102400            204800            409600
128        128           16384                    6.10E-05        16384            409600            819200            1638400
256        256           65536                    1.53E-05        65536            1638400           3276800           6553600
512        512           262144                   3.81E-06        262144           6553600           13107200          26214400
1024       1024          1048576                  9.54E-07        1048576          26214400          52428800          104857600

Table 1 Correlation between fps and AER packet frequency
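The relationships set out above — packet frequency equal to frame rate times image size, with the outerpulse, spike (innerpulse) and partial-spike clocks running at 50, 250 and 1000 times the packet frequency respectively — can be checked with a few lines of MATLAB; this is a sketch of the arithmetic only, using the 4 by 4 image at 25 fps as the worked example:

% Clocking hierarchy for a square image at a given frame rate (cf. Figures 4-6).
side = 4;  fps = 25;                   % worked example from the text: 4 by 4 image at 25 fps
pixels    = side^2;                    % 16 pixels
f_packet  = fps * pixels;              % AER packet frequency: 400 Hz
f_outer   = 50 * f_packet;             % outerpulse clock (50 slots per packet): 20 kHz
f_spike   = 5  * f_outer;              % spike (innerpulse) clock: 5 per outerpulse
f_partial = 4  * f_spike;              % partial_spike_clock: 4 parts per biphasic pulse
fprintf('f_packet=%g Hz, f_outer=%g Hz, f_spike=%g Hz, f_partial=%g Hz\n', ...
        f_packet, f_outer, f_spike, f_partial);

% Reproduce the f_aer_pkt columns of Table 1 for the square image sizes.
sides = [2 4 8 16 32 64 128 256 512 1024];   % image side lengths
rates = [1 25 50 100];                       % frame rates (fps)
f_aer_pkt = (sides.^2)' * rates;             % rows: image sizes; columns: 1/25/50/100 fps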

1.3 Sub retinal versus Epi-retinal

Typically the subretinal implant is embedded `underneath' the retina and uses photodiodes to replace damaged photoreceptors, relying on natural sunlight for power. The retinal processing can be described using Ewald Hering's theory [42], in which he wrote in 1878: "Yellow can have a red or green tinge, but not a blue one; blue can have only either a red or a green tinge, and red only either a yellow or a blue one. The four colours can with complete correctness therefore be described as simple or basic colours, as Leonardo da Vinci has already done." Whereas at the optic nerve the ganglion cells act according to the trichromacy theory [93], i.e. red, green and blue, implying a conversion from one form to the other. As an epiretinal implant effectively replaces the retinal function, such a conversion can be avoided [34, 54, 88, 94-128].

1.4 Introduction to stimulators

The stimulator (microstimulator) circuitry [53, 54, 57, 129] forms the post processing stage of current retinal implants (sub and epi) and will also be required as the post processing stage of this work. The stimulator resides with the retinal implant and produces a biphasic current with which to stimulate the electrodes of the retinal implant. The equivalent retinal tissue impedance as determined by medical experiments is circa 10 kΩ (R), and with a maximum current threshold value of 500 µA (I) this requires a potential difference of 5 V. Average power will be determined by a number of factors, e.g. the width of the cathodic part of the pulse, the interphase delay, the width of the anodic part of the pulse, the spike rate and the number of driven electrodes. A maximum power per electrode can be calculated given some assumptions, e.g. 1000 clock periods per frame/pixel, a spike rate of 50 and a pulse width of four clock periods, with the cathodic portion being one clock period, the interphase delay equating to two clock periods and the anodic portion one clock period, where the clock period is 1 ms. Then, averaged over a second, this gives a duty-weighted figure of 0.5 × 0.2 × 500 µA = 50 µA, taken here as an average dissipation of roughly 50 µW per electrode. So for 100 electrodes the maximum power dissipation would be 5 mW. The volume of the human eye is 6.5 ml (0.4 cu. in.) and its weight is 7.5 g (0.25 oz). In the USA the SAR limit for the head is 1.6 W/kg, averaged over a volume of 1 gram of tissue. In Europe, the European Union Council has adopted the recommendations made by the International Commission on Non-Ionising Radiation Protection (ICNIRP Guidelines 1998); these recommendations set a SAR limit of 2.0 W/kg in 10 g of tissue.
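The power-budget arithmetic above can be laid out explicitly; the MATLAB sketch below simply follows the figures quoted in this section (10 kΩ tissue impedance, 500 µA maximum current, 50 spikes/s, a four-period biphasic pulse with 1 ms clock periods, roughly 50 µW per electrode and 100 driven electrodes), with the SAR limits converted to absolute powers only for comparison.

% Stimulation budget using the values quoted in the text.
R = 10e3;             % equivalent retinal tissue impedance (ohms)
I_max = 500e-6;       % maximum current threshold (A)
V = R * I_max;        % required potential difference: 5 V

spikes_per_s = 50;    % maximum spike rate
t_clk = 1e-3;         % clock period (s)
active_fraction = 2/4;                                       % cathodic + anodic periods carry current
duty  = spikes_per_s * (4 * t_clk) * active_fraction;        % 0.2 * 0.5 = 0.1
I_avg = duty * I_max;                                        % 50 uA duty-weighted average current

P_per_electrode = 50e-6;          % per-electrode dissipation figure used in the text (W)
P_total = 100 * P_per_electrode;  % 5 mW for 100 electrodes

% SAR limits quoted in the text, expressed as absolute power for comparison.
P_sar_usa = 1.6 * 1e-3;    % 1.6 W/kg averaged over 1 g of tissue  -> 1.6 mW per gram
P_sar_eu  = 2.0 * 10e-3;   % 2.0 W/kg averaged over 10 g of tissue -> 20 mW per 10 g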

1.5 Thesis structure

The thesis is divided into six chapters and five appendices, organised in the following way:

Chapter one (Introduction/research aim): this chapter, containing the research aim, selected extracts from the literature review, the distinction between the two approaches to an artificial retinal prosthesis within the literature, and this thesis structure.

Chapter two (Early directions): A description of the first tentative approaches to this PhD project, concerning the sub retinal approach, i.e. a neural network approach utilising MATLAB software.

Chapter three (Concepts): Neuronic principles, homing in on their application to human visual processing. Address Event Representation (AER), the method of choice, in which the conventional measure of colour intensity for each plane of colour is translated into a neuromorphic representation for transmission from the sender chip to the receiver chip. Description of the biphasic pulse commonly used to mimic the biological action potential.

Chapter four (Sender chip): Sender concepts and the chosen format. Description of a test setup wherein the AER format derived from the conventional colour representation is displayed on a monitor to demonstrate the efficacy of the format. Listing of the reports capable of being produced by the software, to be shown in the relevant appendix.

Chapter five (Receiver chip): Clocking calculations for the sender/receiver. Components for the receiver chip. Discussion of the post processing necessary between the receiver chip output and that required for the implant electrodes.

Chapter six (Concluding chapter): Resulting conclusions and future work.

Appendices

Appendix A: MATLAB simulation
Appendix B: FPGA Test setup
Appendix C: FPGA Sender chip
Appendix D: FPGA Receiver chip
Appendix E: Outputs from FPGA

Chapter 2 Early directions

Initially, investigations commenced using the MATLAB application software package as a cost-effective precursor to a hardware-oriented approach. These initial investigations involved a sub-retinal approach utilising the neural network toolbox supplied within the MATLAB software. Research into the application of Artificial Neural Networks (ANNs) to `spikes' [75, 130-141], as a form of neuromorphic communication, informed their suitability for this purpose.

2.1 Introduction to initially envisaged project

The approach would be to replace damaged parts of the retina, prior to the rods and cones, with a neural network artificial retina to fulfil the function of the amacrine, horizontal and bipolar cells [46].



[Figure 7: the eye, with the lens, a neural network artificial retina in place of the natural retina, and the optic nerve]

Figure 7 replacing the retina

Within the visible spectrum, extending from extreme red at 760.6 nm to extreme violet at 393.4 nm, lie the four psychological primary colours of Ewald Hering's theory [12], so called because it does not appear to the human eye that they can be separated into any more basic colours; e.g. bluish-green or yellowish-green can be imagined, but not greenish-red or bluish-yellow. The eye has three cone receptors, conventionally described as red, green and blue cones following Thomas Young's trichromacy theory [12]. The red (long-wavelength) cone has pigments which absorb maximally at 565 nm, the green (middle-wavelength) cone absorbs maximally at 530 nm and the blue (short-wavelength) cone absorbs maximally at 420 nm [12]. Therefore the retina converts the four primary colours (red (620–750 nm), green (495–570 nm), blue (450–475 nm) and yellow (570–590 nm)) to which it is sensitive into the three primary colours red (R), green (G) and blue (B) to which the optic nerve is sensitive.

2.2 Neural network representation



[Figure 8: photodiodes for Red (660 nm), Green (520 nm), Blue (470 nm) and Yellow (575 nm) feed an Artificial Neural Network (ANN), whose R/G Red (620–750 nm), G/R Green (495–570 nm) and B/Y Blue (450–475 nm) outputs produce output signals of 5–50 impulses/s]

Figure 8 pixel by pixel mapping

Using Artificial Neural Network (ANN) theory to mimic the processing of the neural network of the retina has the advantage that, given the correct inputs to the network and the expected output, the designed network is self-training, in the sense that once trained with a set of quality test inputs of a predetermined quantity it will always produce the required results, even with differing inputs applied. The following subsections give a primer for the theory which needs to be understood for an ANN to be designed as an approach to replacing the retina using photodiodes, as opposed to bypassing it with a camera.

2.2.1 The first artificial neuron

Warren McCulloch and Walter Pitts, in their 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity", postulated that neurons with a binary threshold function were analogous to first-order logic sentences. This first neuron model is shown here.



[Figure 9: inputs x_1 and x_2, with weights w_1 and w_2, feeding a neurode that produces output y]

Figure 9 the first neuron model

By setting a threshold relevant to the inputs to be expected, e.g. 1 unit, then for inputs of x_2 = 0.5 units and x_1 = 0.5 units the threshold would be reached and there would be an output from the neurode/neuron. The Boolean logic of this model is then represented by the truth tables shown next.

AND                               OR
Input x_1  Input x_2  Output      Input x_1  Input x_2  Output
0          0          0           0          0          0
0          1          0           0          1          1
1          0          0           1          0          1
1          1          1           1          1          1

Table 2 `and' and `or' truth tables
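A threshold unit of this kind can be written down directly; the MATLAB sketch below uses thresholds of 2 (for AND) and 1 (for OR), which are illustrative choices for binary 0/1 inputs rather than values taken from the text:

% McCulloch-Pitts threshold unit: fires (output 1) when the sum of its
% binary inputs reaches the threshold theta (no weights in this early model).
mcp = @(x1, x2, theta) double((x1 + x2) >= theta);

inputs = [0 0; 0 1; 1 0; 1 1];
for k = 1:4
    x = inputs(k, :);
    fprintf('x1=%d x2=%d : AND=%d OR=%d\n', x(1), x(2), ...
            mcp(x(1), x(2), 2), ...   % theta = 2 realises the AND column of Table 2
            mcp(x(1), x(2), 1));      % theta = 1 realises the OR column of Table 2
end
% No single threshold theta reproduces the XOR column of Table 3 below.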

This type of neuron could also be used to implement the NOT function (with only one input) as well as the NOR and NAND functions. However, the following truth tables were not so easily implemented.

XOR                               NXOR
Input x_1  Input x_2  Output      Input x_1  Input x_2  Output
0          0          0           0          0          1
0          1          1           0          1          0
1          0          1           1          0          0
1          1          0           1          1          1

Table 3 `xor' and `nxor' truth tables

The problem with this simple model was that it allowed only binary inputs and outputs, it used only the threshold step (Heaviside) activation function, and it did not incorporate weighting of the different inputs.

2.2.2 Hebb's rule

"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." [142] To accommodate this idea in the McCulloch-Pitts neuron it would be necessary to `weight' the inputs.
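Hebb's postulate is usually expressed as a weight change proportional to the product of pre- and post-synaptic activity; the update rule below is the standard textbook form rather than anything stated explicitly in this thesis, so it should be read as an assumed illustration only:

% Hebbian weight update: a weight grows when its input and the output are active together.
eta = 0.1;                          % learning rate (assumed value)
w = [0.5 0.5];                      % initial weights for a two-input unit
x = [1 1];                          % both inputs repeatedly active
for step = 1:5
    y = double(sum(w .* x) >= 1);   % thresholded output, as in the neuron above
    w = w + eta * y * x;            % Hebb's rule: delta_w_i = eta * y * x_i
end
% After five co-activations w has grown from [0.5 0.5] to [1.0 1.0].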

2.2.3 Adaline and the adaptive linear combiner: a bipolar case of above

"The Adaline is a device consisting of a single processing element; as such, it is not technically a neural network…. The term Adaline is an acronym; however, its meaning has changed somewhat over the years. Initially called the ADAptive Linear Neuron, it became the ADAptive LINear Element, when neural networks fell out of favour in the late 1960s…. The dashed box in the following figure encloses a part of the Adaline called the adaptive linear combiner (ALC). If the output of the ALC is positive, the Adaline output is +1. If the ALC output is negative, the Adaline output is -1. ….. The ALC performs a sum-of-products calculation using the input and weight vectors and applies an output function to get a single output value.

$$y = w_0 + \sum_{j=1}^{n} w_j x_j$$

Where $w_0$ is the bias weight; if we make the identification $x_0 = 1$, we can rewrite the preceding equation as

$$y = \sum_{j=0}^{n} w_j x_j$$

Or in vector notation,

$$y = \mathbf{w}^{T}\mathbf{x}$$

The output function in this case is the identity function, as is the activation function. The use of the identity function as both output and activation function means that the output is the same as the activation, which is the same as the net input to the unit. The Adaline (or the ALC) is ADAptive in the sense there exists a well-defined procedure for modifying the weights in order to allow the device to give the correct output value for the given input. What output value is correct depends on the particular processing function being performed by the device. The Adaline (or the ALC) is LINear because the output is a simple linear function of the input values. ...The Adaline could also be said to be a LINear Element....."[74] Note that a network of more than one Adaline is termed a Madaline (Many Adalines).

Figure 10 Adaline detail showing the ALC feeding to a bipolar output function: the adaptive linear combiner sums a constant input of 1 weighted by w_0 (the bias, b) together with the weighted inputs w_1 x_1, w_2 x_2, ..., w_n x_n to give v; the threshold activation function then gives the bipolar output y = sign(v), i.e. +1 or -1.
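The quoted description of the ALC and its bipolar output can be sketched in a few lines of MATLAB. This is an illustration only, under the assumption of an arbitrary example weight and input vector; it is the forward pass, not the Adaline weight-adaptation procedure itself.

% Illustrative Adaline forward pass (not thesis code): the adaptive linear
% combiner forms the weighted sum v = w' * x (with x(1) = 1 carrying the
% bias weight w0), and the bipolar output is the sign of that sum.
w = [0.5; -1.2; 0.8];        % [w0; w1; w2] - example values only
x = [1; 0.3; 0.9];           % [x0; x1; x2] with x0 = 1 for the bias
v = w' * x;                  % ALC output (sum of products)
y = sign(v);                 % Adaline output: +1 if v > 0, -1 if v < 0
fprintf('ALC output v = %.3f, Adaline output y = %+d\n', v, y);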

2.2.4 Neural Networks - perceptron training

The model of the neuron described in the following figure was used by Frank Rosenblatt in 1958 as the basis for the first trainable neural network. His perceptron training algorithm provided the first procedure that could be used to allow a network to learn a task. The classic task is that of separating several patterns into two categories.

Figure 11 Neuronic detail: the inputs x_1, x_2, x_3, ..., x_n (arriving from axons via synapses onto the dendrites) are multiplied by the weights w_1, w_2, w_3, ..., w_n and summed in the body together with the bias b (a constant input of 1 weighted by w_0) to give v; a non-linearity f then produces the axon output

y = f( \sum_{i=1}^{n} w_i x_i + w_0 )

From the model we find that the input (v), i.e. the induced local field of the neuron presented to the activation function (in this case a hard limiter), is:

v = \sum_{i=1}^{n} w_i x_i + b    (2)

Where b refers to the bias, w_i the particular weight for an input, x_i a particular input and v the hard limiter input. The aim of the perceptron is to correctly classify the set of inputs into one class or the other: to one class if the perceptron output is > 0 (or >= 0) and to the other if it is < 0; the answer is +1 if the perceptron answer is correct (one class) and -1 (the remaining class) if the perceptron answer is wrong. And y = the binary output signal.

In its simplest form the two decision regions are separated by a hyperplane defined by:

\sum_{i=1}^{n} w_i x_i + b = 0    (3)

Referring back to equation (2), it can be said more generally (taking into account one neuron connecting to another) that the induced local field of the neuron presented to the activation function is:

net_j = w_0 + \sum_{i=1}^{n} x_i w_ij    (4)

Where w_0 is the bias and is considered to be connected to a unit that always has an activation of 1, x_i is the activation of neuron i and w_ij is the weight connecting neuron i to neuron j.
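As an aside, the classical perceptron learning rule on which Rosenblatt's training procedure is based can be sketched in MATLAB as follows; the toy data set, learning rate eta and stopping criterion are assumptions made purely for illustration and do not reproduce any experiment in this work.

% Illustrative perceptron training on a linearly separable toy set
% (assumed data, not thesis code). w(1) is the bias weight, driven by a
% constant input of 1, matching equation (4).
X = [0.1 0.2; 0.2 0.8; 0.8 0.1; 0.9 0.9];   % four 2-D input patterns
d = [0; 1; 0; 1];                            % desired classes (0 or 1)
w = zeros(3, 1);                             % [bias; w1; w2]
eta = 0.5;                                   % learning rate (assumed)
for epoch = 1:100
    errors = 0;
    for k = 1:size(X, 1)
        xk = [1; X(k, :)'];                  % augment with the bias input
        y  = double(w' * xk > 0);            % hard-limiter output
        w  = w + eta * (d(k) - y) * xk;      % perceptron update rule
        errors = errors + abs(d(k) - y);
    end
    if errors == 0, break; end               % stop once all patterns are correct
end
disp(w')                                     % final weights (bias first)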

2.2.5 Linear separability

The following figure illustrates the concept of linear separability: the patterns can be separated into two classes by drawing a single line. The figure shows four example pattern sets; the first is linearly separable and the other three are not. In this example each pattern consists of a vector of two real numbers graphed as a single point in the diagram. Patterns in the first class are graphed as points in the clear area and patterns in the second class are graphed as points in the shaded area.

Figure 12 Four example pattern sets of classes A and B: (a) shows linear discrimination (the two regions are linearly separable), as opposed to (b), (c) and (d), which are not

The linear separability concept can be demonstrated graphically by plotting the inputs to a perceptron, i.e. one single neuron (or unit), one against the other to represent the input/pattern/feature space; this is shown in Figure 13.


x2<br />

Linear separation<br />

1.00<br />

0.90<br />

0.80<br />

0.70<br />

0.60<br />

0.50<br />

0.40<br />

x2<br />

Class A<br />

Class B<br />

Linear (x2)<br />

0.30<br />

0.20<br />

0.10<br />

0.00<br />

-0.60 -0.40 -0.20 0.00 0.20 0.40 0.60 0.80 1.00<br />

x1<br />

Figure 13 A hyperplane separating two classes/clusters of data.<br />

Based on the equation for a straight line, y = mx + c, the equation for a line separating the data clusters of class A and class B is x_2 = m x_1 + c. This can be rearranged to give x_2 - m x_1 - c = 0. The value of c can be found by noting where the line passes through the x_2 axis (the y of the fundamental straight line equation), i.e. at x_2 = 2.35; therefore, when x_1 = 0, x_2 = 2.35. Substituting this gives 2.35 - m*0 - c = 0 and thus c = 2.35.

Likewise m can be found by observing where the line crosses the x_1 axis, at x_1 = -0.4, so when x_2 = 0, x_1 = -0.4. Substituting gives 0 - m(-0.4) - 2.35 = 0, thus 0.4m = 2.35 and therefore m = 2.35/0.4 = 5.875.

The equation for the straight line (hyperplane) of Figure 13 is:

x_2 - 5.875 x_1 - 2.35 = 0    (5)

In terms of neural networks this hyperplane is fixed by the weights and biases of the neuron. The bias defines the position of the plane in terms of its perpendicular distance from the origin, while the weight vector of the neuron determines the orientation of the decision plane/hyperplane, i.e. it affects the slope (m) of the graph.

As the hyperplane is fixed by the weights and biases of the neuron, if different values of x_2 and x_1 are substituted into the hyperplane equation (5) and the result is greater than zero, then that co-ordinate point lies above the hyperplane and belongs to class A; whereas if the result of the substitution is less than zero, then that co-ordinate point lies below the hyperplane and belongs to class B.
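A short MATLAB check of this sign test, using the hyperplane of equation (5), might look as follows; the two test points are arbitrary values chosen only to illustrate the above-the-line/below-the-line decision.

% Classify points against the hyperplane x2 - 5.875*x1 - 2.35 = 0:
% a positive result places the point above the line (class A),
% a negative result places it below the line (class B).
hyper = @(x1, x2) x2 - 5.875*x1 - 2.35;
pts = [0.0 3.0;      % arbitrary test point expected above the line
       0.5 1.0];     % arbitrary test point expected below the line
for k = 1:size(pts, 1)
    if hyper(pts(k,1), pts(k,2)) > 0
        fprintf('(%.2f, %.2f) -> class A\n', pts(k,1), pts(k,2));
    else
        fprintf('(%.2f, %.2f) -> class B\n', pts(k,1), pts(k,2));
    end
end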

Recall our previous definition of the hyperplane (3), i.e. \sum_{i=1}^{n} w_i x_i + b = 0. Typically the bias term is replaced by w_0 x_0, where x_0 is always 1; therefore (3) can be replaced by the following hyperplane definition:

\sum_{i=0}^{2} w_i x_i = w_0 x_0 + w_1 x_1 + w_2 x_2 = 0 (at the hyperplane)    (6)

Therefore, for a perceptron, i.e. a single neuron, the output of the unit, y, is 1 when the weighted sum is greater than 0, i.e.

y = 1 when w_0 x_0 + w_1 x_1 + w_2 x_2 > 0

Similarly, the output is 0 when the weighted sum is less than 0, i.e.

y = 0 when w_0 x_0 + w_1 x_1 + w_2 x_2 < 0

The hyperplane between them corresponds to the weighted sum being equal to 0.

2.2.6 The Delta Rule (used to determine separability)

"When an input vector is applied to an adaline neuron the computation of the net input to the adaline is the standard weighted sum. In a two dimensional example this means that the input will be I = x_1 w_1 + x_2 w_2. This is also the Cartesian co-ordinate system method of computing the dot product of the input vector and the weight vector. Conclusion: the weighted sum most commonly used in the transfer functions of neural networks is the same as computing the dot product of the input pattern vector with the receiving neuron's weight vector. Alternatively x . w = |x| |w| cos O, where O is the angle between the input vector and the weight vector. N.B. from trigonometry, cos O is +ve if the angle is < +/-90 degrees and -ve otherwise."

w_0 x_0 + w_1 x_1 + w_2 x_2 = 0    (7)

This can be rearranged to give:

w_2 x_2 = - w_1 x_1 - w_0 x_0    (8)

x_2 = - (w_1 / w_2) x_1 - (w_0 / w_2) x_0    (9)

Comparing equation (9) with equation (5), i.e. x_2 - 5.875 x_1 - 2.35 = 0, and putting x_0 = 1, gives

m = - w_1 / w_2  and  c = - w_0 / w_2

There is no unique solution to these two equations. Just as an example, if we start by assuming that w_2 = -1, then equation (5) is obtained from the values w_0 = 2.35 and w_1 = 5.875. With these weights the unit can discriminate between the two classes of objects: a single unit with these weights classifies objects A and B in the preceding Figure 13.


2.3 Address Event Representation (AER)

Address-Event Representation (AER) is a communication protocol that emulates the way the nervous system's neurons communicate and that is typically used for transferring images between chips. It was originally developed for bio-inspired and real-time image processing systems. [4, 59, 61, 62, 64, 143-159]

2.3.1 The message packet<br />

This consists of a source address segment; determined by the image size containing<br />

the pixel data to be routed, and a payload segment containing the event or train of<br />

events associated to each address. In the case of a train of events not all of the events<br />

would necessarily be there.<br />

Address segment<br />

Payload<br />

Figure 14 message packet<br />

Although the payload can be empty when using the AER protocol (not definitive)<br />

this would imply that the address itself is indicative of an event. The AER packet is<br />

in fact used in this way; when processed after transmission reception, and also for<br />

convenience in the test setup when demonstrating reception of the AER packet for<br />

display to the Digital Video Interfaced (DVI) VGA monitor. However by using a<br />

payload of a pulse count, rather than the way just detailed, bit rate can be reduced<br />

dramatically in the sense that rather than 50 addresses being transmitted for 50<br />

separate events; at maximum intensity, meaning 50 separate AER packets, this<br />

reduces to one AER packet with one payload representing a train of events.<br />
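As a concrete, purely illustrative sketch of this packet idea, the MATLAB fragment below packs a row address, a column address and a pulse-count payload into a single word and unpacks it again; the 8 + 8 + 8 bit field widths are an assumption chosen here (consistent with the 8-bit addressing word of Table 4 later) and are not a definitive definition of the packet used in this thesis.

% Illustrative AER packet packing (assumed 8-bit row, 8-bit column,
% 8-bit pulse-count payload); not the definitive packet of this thesis.
row      = 12;                     % pixel row address (0-255)
col      = 30;                     % pixel column address (0-255)
pulses   = 50;                     % payload: number of events in the train
packet   = bitor(bitshift(uint32(row), 16), ...
                 bitor(bitshift(uint32(col), 8), uint32(pulses)));
% Unpacking at the receiving end recovers the three fields:
rxRow    = bitand(bitshift(packet, -16), 255);
rxCol    = bitand(bitshift(packet, -8), 255);
rxPulses = bitand(packet, 255);
fprintf('packet = 0x%06X -> row %d, col %d, %d pulses\n', ...
        packet, rxRow, rxCol, rxPulses);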



2.4 Comparison to Frame Based Representation (FBR)

An objective of the retinal prosthesis, as part of its aim to improve the quality of life of those for whom it can be used, is that it should operate in real time. Failure of real-time operation could cause, for example, a road accident where traffic is not seen in time. The following sections describe and compare the use of FBR, the methodology used for video and television, with AER, the methodology used for neuromorphic communications, to confirm the method closest to real time in the present context.

Frame Based Representation (FBR) of images captures images at up to 25 frames per second (fps). H.264 is a video compression scheme that is becoming the worldwide digital video standard for consumer electronics and personal computers. H.264 has been adopted by the Motion Picture Experts Group (MPEG) and is sometimes referred to as MPEG-4 Part 10 or as AVC (MPEG-4's Advanced Video Coding). While MPEG-2 is a video-only format, MPEG-4 is a more generic media exchange format, with H.264 as one of several video compression schemes offered by MPEG-4. Because H.264 achieves up to 3 times more compression than MPEG-2, through additional tools such as entropy encoding, smaller block sizes and in-loop deblocking, it comes at an exceptionally high computational cost, making it impossible to rely solely on a PC's CPU, for example. As the AER task under discussion is not a generic media exchange format, the MPEG-2 synchronous video-only format will now be discussed in this context.

The MPEG-2 White Paper states four levels of bitrate and the frame (image) size associated with those bitrates; the highest bitrate defined, at the highest level, is 80 Mb/s. An MPEG bitrate for a 32 by 32 test scene with no interframe compression would be 12.78 Mbit and is defined for the Society of Motion Picture and Television Engineers (SMPTE). Essentially these levels refer to the source format and self-evidently the "high" level will be used for this discussion. Along with these four levels referring to source format, five profiles are described; the inputs to all profiles are YUV component video. The "High Profile", to be used in our discussion, includes tools for improving signal-to-noise ratio (SNR) or spatial resolution, plus the ability to code line-simultaneous colour-difference signals.

Encoding of video information is accomplished by spatial compression occurring on `(I)ntra' frames and temporal compression occurring on both `(P)redictive' frames and `(B)i-directional' frames. I, P and B frames are grouped into a specified sequence known as a group of pictures (GOP). The `I frame' is a compressed version of a single uncompressed (raw) frame. The MPEG I frame is a variation of the Joint Photographic Experts Group (JPEG) format, which uses the discrete cosine transform (DCT), quantising, run length codes and Huffman coding. (JPEGs are not suitable for graphs, charts and explanatory annotations because the text appears fuzzy, especially at low resolutions.) P frames require an `I frame' or `P frame' to forward predict motion estimation; B frames are encoded using motion-compensated prediction on the previous and next pictures, which must be I or P frames. B frames are not used in subsequent predictions. It is better to use a shorter group length for fast-moving scenes, otherwise pixellated frames occur. Because quality is affected by rapid scene changes and excessive motion, the sequence of I to B frames must often be altered; this is done by I frame "forcing", i.e. adding I frames. In this way each new scene can start with a fresh I frame for B and P frame calculations. GOPs are optional in an MPEG-2 bitstream but are compulsory in DVD video.

Typical GOP structure (typically an I frame would occur every 15 frames):

I B B P B B P B B P B B P B B

Figure 15 I frame sequencing

However, AER is not constrained to the MPEG maximum bitrate and its power consumption is markedly lower than that of MPEG. The optic nerve expects a pulsed format rather than a bit word; MPEG would require conversion to that format, as it is designed to output to a monitor. This AER system, on the other hand, will be designed to output a bespoke pulsed format, derived from the received transmission signal, for the optic nerve implants. [155, 160-167]

2.4.1 Comparison Criteria

Based on the biomimetics, the visual prosthetic must be able to accommodate both rapid and full scene changes. Using P and B frames, MPEG-2 cannot reliably do this to acceptable biological standards of visual acuity. Therefore, in order to compare synchronous MPEG-2 with AER for real-time operation, the GOP should consist of I frames only; this unavoidably reduces compression from circa 83:1 to circa 5:1, the same as M-JPEG. This is fully justified, as this AER system does not require motion estimation and prediction to transmit scene changes, which effectively show motion as they occur in real time. Qualitative metrics: MPEG-2 is computationally intensive, particularly at the encoding side, uses a global clock which will be power hungry, does not give the required neuromorphic output at the decoder, uses a restricted bitrate and uses JPEG-flavour spatial/intraframe compression that actually discards some picture information. AER, described next (prior to the quantitative metrics), is not computationally intensive, gives a neuromorphic output, has low power consumption, uses a bitrate restricted only by the technology, not the standard, and gives a choice of picture format, thereby not necessarily discarding picture information. In MATLAB terms the *.bmp file format has been used for the uncompressed (raw) frames upon which the I frames are based, although in principle *.jpg could be used. CCD/CMOS digital cameras give a choice of outputs: a "raw" proprietary format commonly based on a Bayer filter (though there are others), YUV/RGB and, with included software, JPEG and TIFF (4-plane) formats. The TIFF format, in terms of the flexible usage of its fourth plane, could potentially give the option of a yellow plane of colour. Retaining a fourth plane, even as a `yellow' plane, could impart luminance data for use with the RGCs related to rods, for use with a hybrid retinal prosthesis as technological capability increases.

2.4.2 Address Event Representation

The Address Event Representation (AER) communication protocol {Thiago T. Culurciello E. Andreou A.G., 2006 #56} uses a fast digital bus in conjunction with time division multiplexing (TDM) to propagate spike information from one point to another. This transmission simplifies parallel computation in two ways: (1) it reduces the number of dedicated wires which would be required in a purely parallel computation to a smaller number of shared wires, and (2) it allows decoding to be performed at the output end; in the present context, both in terms of demultiplexing and of converting the spike information from unit time into real-time spikes. The spikes, called events when discussing the protocol, are inextricably linked to an address; in effect, counting the number of occurrences of an address per unit time gives the number of events or spikes. Because time multiplexing is used, an increased bandwidth requirement is expected compared with, for example, a standard using data compression such as MPEG-2; however, as spikes are only produced when events occur, much of that bandwidth will use negligible power, giving lower power consumption than, say, MPEG-2. Events occur even on a still image; the address is present, otherwise the pixel would not be seen. The point is that an event occupies the spike duration but recurs at the inter-pulse frequency, i.e. the line is active one time in five at maximum intensity and less often otherwise.

2.5 MATLAB sub-retinal simulation

This simulation deals with the visible light spectrum from a `scene' of pixels in the form suggested by the colour opponent theory, i.e. Red (R), Green (G), Blue (B) and Yellow (Y) primaries, and converts those incoming retinal signals to the Red (R), Green (G) and Blue (B) primaries suggested by the Young-Helmholtz trichromacy theory and required by the optic nerve.

Figure 16 Colour dichotomy: four-primary (R, G, B, Y) signals within the retina are converted to three-primary (R, G, B) signals leaving the retina along the optic nerve

2.5.1 MATLAB representation

A MATLAB program has been constructed to illustrate the signals leaving the retina, initially with a scene of 128 by 128 pixels, later refined to an 8 by 8 scene for quicker implementation [appendix A]. The program, which utilises the Kronecker product [168] as a shortcut to compose some large matrices with repeated rows or columns, consists of six files: mixedsource.m, ysend.m, pwork.m, scan.m, subi.m and ycut.m.
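For readers unfamiliar with the Kronecker product shortcut mentioned above, the following stand-alone MATLAB fragment shows the general idea; the 2 by 2 base pattern is an arbitrary example and does not reproduce the frames generated by mixedsource.m.

% kron replicates a small base pattern into a larger matrix in one call,
% which is how large test frames with repeated blocks can be composed.
basePattern = [1 0; 0 1];                 % arbitrary 2x2 example pattern
bigBlocks   = kron(basePattern, ones(4)); % each element becomes a 4x4 block (8x8 result)
disp(size(bigBlocks));                    % prints 8 8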

In terms of the MATLAB simulation, the control word for the addressing, which will be at the heart of the AER router, is fitted in between the R, G, B and Y planes of colour.
Thus for row addressing and then column addressing, carried on planes five and four respectively, we have:

b8    b7    b6    b5    b4    b3    b2    b1
2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
128   64    32    16    8     4     2     1
A7    A6    A5    A4    A3    A2    A1    A0

(The same weighting applies to both the row-address plane and the column-address plane.)

Table 4 Row and column addressing
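A small MATLAB illustration of this addressing word, assumed here to be the straightforward 8-bit binary weighting of the row or column index, is given below; it is a sketch only, not a listing from Appendix A.

% Extract the 8-bit control word A7..A0 for a given row (or column) index,
% using the binary weighting 128, 64, 32, 16, 8, 4, 2, 1 shown in Table 4.
rowIndex   = 25;                         % example row index (0-255)
bitsA7toA0 = bitget(rowIndex, 8:-1:1);   % most significant bit (A7) first
disp(bitsA7toA0);                        % prints 0 0 0 1 1 0 0 1 for 25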

2.5.2 Program operation

The following flowchart outlines the main operation of the program.

Figure 17 Flowchart of the MATLAB program of Appendix A: start, load images, initialise variables, read previous image, read current image, capture changes, display revised images, then check for more images (looping back while more remain) and end

2.5.3 Files in Appendix A

"ycut.m" is the main file, which calls mixedsource.m to write the nine test frames contained therein to the directory/folder to be read by the program. Those image files are in `bmp' format. The remaining files are function files. The first-level function file "pwork.m" applies to the group of event changes within each test frame. The second-level function files have the following functions: (1) "scan.m" adds the control word for row and column addressing to the 5th and 4th planes respectively of the RGB coloured frames and also adds the 6th (`yellow') plane. (2) "subi.m" breaks down the image into separate pixel information, which is then loaded into a cell array for use in the processing. (3) "ysend.m" shows the neuromorphic representation of the pulsing sent between the retina and the router/implant.

2.5.4 Use of the program

Run "ycut.m", accepting the defaults with the return key. The figures show (from left to right) the first reference scene, prior to a change (a collection of a group of event changes). The second figure shows the changed scene. Thirdly, the neuromorphic pulsing is represented. The penultimate figure, on the second row, is a visual representation of the changes sent from the retina to the router. The final image represents the signals which would be presented to the implanted array at the optic nerve. Presently, for ease of use in terms of implementation timing, the test frames are eight pixels square; this can be altered to 128 square but will run more slowly.

Figure 18 Program output

2.6 Epi-retinal: the method of choice

The following diagram gives some idea of the differences between FBR and an AER approach to producing the biphasic pulses required for propagation to the ganglion cells comprising the optic nerve fibre. Essentially the AER approach is pixel based rather than frame based.

Figure 19 Comparing epi-retinal approaches. MPEG-2 setup: camera, encoder, transmission lines, decoder, conversion (to optic nerve format) and router, feeding the optic nerve implants. AER setup: camera, sender, transmission lines, receiver and router, feeding the optic nerve implants.

In the case of the FBR approach it can be seen that an extra step is required for conversion to the optic nerve format, a conversion that an AER approach inherently supplies. In effect the sender chip, receiving an RGB format from the camera, generates an AER signal for transmission to the receiver/router chip, which drives the retinal implant array of electrodes.

2.6.1 MATLAB representation of the epi-retinal approach

Figure 20 Simulation context diagram (level 0): a 32 x 32 image enters as a camera stream to the sender chip, which produces a digital address stream (AER protocol); the receiver chip converts this to the actual pulse stream delivered to the retinal implant

In practice the camera would be affixed to specialised headgear or, in the case of a miniaturised camera, to bespoke spectacle frames. The power supply for the camera would be attached to a belt, also containing the `sender' chip (responsible for conversion to AER). After propagation of the AER signal to the `receiver' chip, wireless telemetry could be adopted [169]. The receiver chip would be sited behind the ear, similar to the procedure presently adopted for cochlear implants [101]. Although some early papers [56] suggest the possibility of incorporating the in vivo portion of the receiver chip inside the eyeball itself, this adds extra complication to the telemetry communication already in use: restriction of eyeball movement (necessary for efficient inductive coupling), the distance of telemetry (unfeasible from external to the skull to the eyeball), extra mass/size, and extra power dissipation within the eyeball greater than can be achieved without breaching safety guidelines. This document addresses the issue of prototype retinal prostheses without considering extra telemetry issues such as those outlined, which are best relegated to future work. The wiring from the subcutaneous portion of the receiver chip is presently (10/07/2012) limited to 256 wires into the eyeball due to the size limitation (5 mm) of the incision [54] which can safely be made without risk of biological damage. Given present research into nanowire technology this limit may potentially increase.

2.6.2 FPGA representation of the epi-retinal approach

The following diagram (Figure 21) gives a basic overview of the epi-retinal prosthesis. The sender chip receives the camera output and converts it to an AER protocol [170]. This serial communication of a stream of pulses is wired from the sender chip to the receiver chip. In this diagram the separation of the receiver chip into two distinct parts, i.e. external to the body and internal to the body, is not shown; this is ably discussed in other papers [53]. The action of the receiver chip is to convert the serial transmission into the parallel form required for transmission to the retinal implant, incorporating timing considerations, and to form the biphasic pulses. The implant is then addressed (the `Addressing the implant' block) in the sense that all wires now entering the eyeball carry a separate stream of information for each electrode to be driven by the post-processing electrode drivers attached to the electrode array.
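A behavioural MATLAB sketch of the two receiver operations just named (serial-to-parallel conversion and biphasic pulse formation) is given below; the 24-bit word length, pulse amplitudes and timings are assumptions made for illustration only and do not describe the implemented receiver design.

% Behavioural sketch of the receiver (assumed parameters, not the FPGA design).
% 1) Serial-to-parallel conversion: regroup the received bit stream into
%    24-bit words (one word per AER packet in this sketch).
serialBits = randi([0 1], 1, 24 * 4);            % four received packets of 24 bits
words      = reshape(serialBits, 24, []).';      % one packet per row
disp(size(words));                               % prints 4 24

% 2) Biphasic pulse formation: a charge-balanced template of one cathodic
%    and one anodic phase, here 1 ms each at +/-100 uA with a 1 us time step.
dt       = 1e-6;                                 % time step (s)
phase    = round(1e-3 / dt);                     % samples per 1 ms phase
biphasic = [-100e-6 * ones(1, phase), 100e-6 * ones(1, phase)];  % amperes
t        = (0:numel(biphasic)-1) * dt;
plot(t * 1e3, biphasic * 1e6);                   % time in ms, current in uA
xlabel('time (ms)'); ylabel('current (\muA)');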

Figure 21 FPGA view of processing. Sender: digital address stream (AER protocol). Receiver: serial-to-parallel conversion (PISO), timing considerations, addressing the implant and forming the biphasic pulses, with control/addressing signals. Retinal implant: electrode array.

2.7 Background to stimulators

Retinal stimulation approaches up to 2007 [21, 53, 54, 124, 171, 172], and in some cases beyond [102, 173], have presumed for epiretinal prosthetic targets the use of bioadhesives or retinal tacks [174, 175] to affix the retinal array in position. Relatively large electrodes, e.g. 500 µm [102] in relation to neuron size, are another feature of current work. In 2008 [176] 3-D electrodes with a diameter of 100 µm and a height of 25 µm were used. In 2010 [97] penetrating electrodes were used. Note that the latest penetrating array for epiretinal prosthesis [29] makes possible high density arrays (≈100 electrodes/mm²) with electrode diameters of 2 µm, 5 µm, 10 µm, 20 µm and 30 µm, heights of 60 µm-100 µm and pitches of 50 µm-400 µm, so closer proximity to the target cell/neuron can be achieved. We already know [102] that a smaller electrode size of 50-100 µm in diameter can use a smaller current stimulus of between 10-100 µA, say a rheobase of 50 µA and 100 µA at a chronaxie of 1 ms, and the closer proximity available at smaller electrode diameters implies an improvement on this.

2.7.1 Charge considerations

The safe charge limit [103] for platinum (Pt) electrodes is 0.35 mC/cm², for iridium oxide (IrOx) 3 mC/cm² and for titanium nitride (TiN) 23 mC/cm². So, for a 1 ms pulse at 100 µA, using Q = It we have a charge requirement for stimulation of 100 nC (0.1 µC). These figures imply the following minimum stimulation site diameters for the achromatic approach using disc electrodes: ≈190 µm (Pt), ≈65 µm (IrOx) and ≈24 µm (TiN).
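These diameters follow directly from dividing the 100 nC stimulation charge by each safe charge density and treating the result as the area of a disc; the short MATLAB check below reproduces the figures and is included only as a worked illustration.

% Minimum disc electrode diameters for a 100 nC stimulus, from the safe
% charge density limits quoted above (Pt, IrOx, TiN).
Q       = 100e-9;                        % stimulation charge per pulse (C)
qmax    = [0.35e-3, 3e-3, 23e-3];        % safe limits (C/cm^2) for Pt, IrOx, TiN
areaCm2 = Q ./ qmax;                     % required disc area (cm^2)
dUm     = 2 * sqrt(areaCm2 / pi) * 1e4;  % diameter in micrometres (1 cm = 1e4 um)
fprintf('Pt: %.0f um, IrOx: %.0f um, TiN: %.0f um\n', dUm);  % 191, 65, 24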

2.7.2 Achromatic current requirements

In the early approach of "A Neuro-Stimulus Chip with Telemetry Unit for Retinal Prosthetic Device" [53] the stimulator is based on a 4-bit binary-weighted digital-to-analog converter (DAC), and thus consists of 15 parallel current sources. A switched bridge circuit was designed to create a biphasic current pulse from a single current source, thus allowing a reduced voltage swing. The pulse width modulated (PWM) signal used in this early effort allowed the recovery of a clock signal using the rising edges of each pulse recovered from the amplitude shift keying (ASK) modulated waveform. In other approaches [124, 171] there are outline diagrams showing the voltages involved during the biphasic stimulus.

Chapter 3 Concepts

An epiretinal implant has no light sensitive areas, unlike subretinal implants, but typically receives electrical signals from a distant camera and processing unit outside of the body. Electrodes in the epiretinal implant then directly stimulate the axons of the inner layer ganglion cells that form the optic nerve. The epiretinal implant is effectively a readout chip receiving electrical signals containing image information from a camera and processing unit, and is electrically coupled to the ganglion cell axons. This project will concentrate on the processing [177-182] unit outside of the body but will be novel in the sense that it will use colour information from the scenes viewed, as opposed to the achromatic approaches taken in every other work to date. Colour is an extra challenge: the RGC axon needs to be targeted individually and, as the RGC axon diameter is of the order of 1 µm/2.5 µm [49] {Schröder, 1988 #257}, use of 50 µm/100 µm electrodes is not an option as it might be with achromatic stimulation. Colour stimulation in the 10% of axons that are devoted to the fovea is now viable due to the order of magnitude decrease in electrode sizing [29]. A possible reason, among others, for the appearance of coloured phosphenes in achromatic retinal prostheses may be this blanket coverage of RGCs, including those comparatively few expecting colour information, as opposed to the preponderance of nerve fibres stimulated by the rods. Another issue with colour is that three times the `wiring' is needed for the same pixel size as is needed for the achromatic approach. However, targeting the high acuity region of the fovea, with its high concentration of RGCs evolved to process colour from the cone photoreceptors, is the more biomimetic and natural approach. In other words, we were designed to see in colour, not in shades of grey.

The work has evolved from inception to using Address Event Representation (AER) [12, 153, 155, 161-164, 183, 184] as a way of addressing the issues involved with transmission of visual signal processing information, with coding of the signal to avoid extensive wiring between system components, namely between the sender chip and the receiver chip. Use of AER will also reduce the bit rate and hence the power requirements. This is important if telemetry is used, to keep the specific absorption rate (SAR) to acceptable limits. In the USA the SAR limit for the head is 1.6 W/kg, averaged over a volume of 1 gram of tissue. In Europe, the European Union Council has adopted the recommendations made by the International Commission on Non-Ionising Radiation Protection (ICNIRP Guidelines 1998); these recommendations set a SAR limit of 2.0 W/kg in 10 g of tissue. [185] In terms of physical size, a retinal implant chip must fit within the eyeball: the vertical diameter is 24 mm, the transverse being larger. The volume of the human eye is 6.5 ml (0.4 cu. in.) and its weight is 7.5 g (0.25 oz).

Figure 22 AER delivery: the input image signal enters the sender chip, which drives the AER transmission lines to the receiver chip; a de-multiplexer then routes up to 256 outputs to the 256-electrode implant

3.1 Neural concepts

For many years algorithms have been sought to simulate the visual processing of the human brain. The brain contains up to 100 billion processing units called neurons, each having a switching delay of several milliseconds. The fast reaction of the brain is achieved by massive parallel processing due to the networking of these neurons, i.e. massive interconnection. The basic component parts of a neuron are the axon (the signal output), the synapse (the interface between the axon of one neuron and the input to the dendrites, the thread-like connections to the body of the next neuron) and of course the body (soma) itself (see Figure 23).

Figure 23 Biological neuron

3.2 Biological Neuron operation

The specific process performed by a neuron depends on the properties of its external membrane. This membrane fulfils five functions: it propagates electrical impulses along the length of the axon and its dendrites; it releases transmitter substances at the extremity of the axon; it reacts with these transmitter substances in the dendrites at the cell body; it reacts to the electrical impulses which are transmitted from the dendrites and generates, or fails to generate, a new electrical pulse [186-188]; and, during development of the brain, it enables the biological neuron of which it is part to recognise other neurons to which it should be connected. The electrical pulse generated is called, in biological terminology, an action potential, and is shown in Figure 24.

Figure 24 Action potential: membrane potential (mV) rising from the -70 mV resting level to a peak of about +30 mV and returning, over a time scale of approximately 5 ms

3.3 Natural limit for pulse duration

The biological process of producing an action potential, delivered from the retina to the optic nerve fibres, creates the pulse [10, 11, 48]. As can be seen from Figure 24, although the actual pulse occurs within 2 ms there is a further 2 ms refractory period before another pulse can be produced. The pulse rate varies between five and fifty pulses/s, where 5 represents the eye `resting' (if all fibres of the nerve receive the same signal) and 50 represents an optic nerve fibre responding to a full intensity signal. At this maximum throughput a spike plus its interval to the next spike will take 1000/50 = 20 ms. At the nominal minimum throughput of five spikes per second, a spike plus its interval to the next spike would take 1000/5 = 200 ms.
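This 5 to 50 pulses/s range can be captured in a one-line mapping from normalised intensity to pulse rate; the linear mapping in the MATLAB sketch below is an assumption used purely for illustration rather than a measured retinal response.

% Map a normalised intensity (0 = black, 1 = full intensity) to an optic
% nerve style pulse rate between 5 and 50 pulses/s, and report the
% corresponding spike-plus-interval period. Linear mapping assumed.
intensityToRate = @(i) 5 + 45 * i;           % pulses per second
intensity = 0.5;                             % example input
rate      = intensityToRate(intensity);      % 27.5 pulses/s here
periodMs  = 1000 / rate;                     % spike + interval period in ms
fprintf('intensity %.2f -> %.1f pulses/s (period %.1f ms)\n', ...
        intensity, rate, periodMs);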

3.4 Retinal colour perception

The retina appears to be inside out, since the receptors that transduce visual signals are located in the outer layer at the very back of the eye, with their photosensitive outer segments aimed away from the incoming light [37, 189-191]. Over the whole human retina there are 120 x 10^6 rods and 6 x 10^6 cones but only 1 x 10^6 ganglion cells [46]. The fovea comprises less than 1% of the retinal area but takes up over 50% of the visual cortex in the brain. The human fovea has a diameter of about 1.5 mm with a high concentration of cone photoreceptors. The centre of the fovea is the foveola, approximately 0.35 mm in diameter, which contains only cone cells and a cone-shaped zone of Müller cells [192]. The foveola is located within a region called the macula, a yellowish, cone-photoreceptor-filled portion of the human retina. The following diagram illustrates visual processing from a colour point of view. Within the diagram R, G and B signify the photoreceptors for red, green and blue; RGC is the retinal ganglion cell, the axon of which forms part of the optic nerve; BC is the bipolar cell for the cone, HC the relevant horizontal cell and AC the amacrine cell.

Figure 25 Retinal colour processing (original)


3.5 Human visual processing

Humans are trichromats, i.e. the vast majority see in colour through the retina's responses to the colours red, green and blue. However, upon leaving the retina via the optic nerve (composed of 1.2 million nerve fibres), 80% of those fibres are tailored to accept colour output from the retina in terms of three colour signal ranges historically termed red, green and blue. The red cones respond maximally at 558 nm but cover the red wavelengths (630-700 nm); the green cones respond maximally at 531 nm but cover the green wavelengths (520-570 nm) and also the yellow wavelength of 580 nm; the blue cones respond maximally at 420 nm but cover the blue wavelengths (450-495 nm) [193]. Rods respond maximally at 491 nm. Figure 26 illustrates the way the four colour opponent theory, operating inside the retina, is construed to operate to achieve the three colour signals (RGB) leaving the retina at the ganglion cell axons. [12]

Figure 26 depicts two channels of colour information: one channel carrying data derived from the L (long wavelength, red) and M (medium wavelength, green) cones of the retina, i.e. red and green signals, and one channel derived from the L, M and S (short wavelength, blue) cones.

So the red (modulated by green) half channel propagates its signals down each of the nerve fibres (ganglion cells) tailored to accept red signals, and the green (modulated by red) half channel propagates its signals down each of the nerve fibres tailored to accept green signals. The blue (modulated by `yellow') half channel propagates its signals down each of the nerve fibres tailored to accept blue signals. The remaining half channel propagates a `yellow' signal to koniocellular ganglion cells to modulate this luminance pathway signal, which is otherwise ON to the blue cone. [193], [14]

Due to the retinotopic mapping involved in human vision, massive (i.e. 120 million rods, 6 million cones and about 1.2 million ganglion cells) parallel processing is necessary. This retinotopic mapping means that each pixel-by-pixel operation is not separable and must be completed in conjunction with neighbouring pixels in an image within a time frame of notionally 20 milliseconds, the time between one spike and another at maximum throughput. An issue arises with transferring information from the sender chip, built to take in information from the outside world, to the retinal receiver chip, built to deliver a stimulus to the implant. Address Event Representation (AER) reduces the amount of parallelism required for transfer of information between the sender chip and the receiver chip. This is possible because (1) the transfer of information propagated along the red, green, blue and yellow half channels occurs at the relatively slow rate of between 5 and 50 impulses per second, and (2) the AER protocol allows for time multiplexing, thus reducing the amount of parallelism needed for transmission of the `sender' signal to the receiver chip. In practice this will reduce the physical complexity of the proposed artificial visual processing. The following work will focus on producing a workable AER protocol for this purpose.

As the work proposed will concern a small (...) microphotodiode pixels are 20 µm x 20 µm with 9 µm x 9 µm electrodes separated from adjacent pixels by 10 µm channel stops [194]. The result of this correspondence indicates that the half channel pathways of Figure 26 result in three outgoing signals (signals leaving the retina) of red, green and blue to the axons of the ganglion cells forming the optic nerve.

Figure 26 Four Colour Opponent Theory. Red-green pathway: the L- and M-cone difference (R-G and G-R half channels). Blue-yellow pathway: the S-cone difference with the sum of the L- and M-cone responses (B-Y and Y-B half channels). Luminance pathway: the L- and M-cone sum. Note: L (red), M (green) and S (blue).


3.6 Current Retinal Implants

Commonality exists between sub-retinal [195, 196] and epi-retinal approaches in terms of the stimulus pulse requirements to be delivered by the implants. The difference between the two approaches is that the sub-retinal implant is meant to substitute for retinal processing within the outer retina, whereas the epi-retinal implant bypasses both the inner and outer retina and uses signals derived from a camera to produce the stimulus pulsing.

3.6.1 Retinal structure

There are five major classes of neuron within the retina, which can be considered to lie within the seven layers of the following representation (Figure 27) of a cross section of the retinal architecture; namely the photoreceptors (cones and rods), bipolar, horizontal and amacrine cells, and the retinal ganglion cells (RGCs). The outer retina extends from the pigment epithelium to the boundary of the inner nuclear layer, whilst the inner retina encompasses the inner nuclear layer, the inner plexiform layer and the ganglion cell layer. The synaptic layers housing ion channels are the outer plexiform layer (OPL) and the inner plexiform layer (IPL). A sub-retinal implant (section 1.1.1) [197] would occupy the space of the photoreceptors in a healthy retina, i.e. adjacent to the choroid (the retinal pigment epithelium (RPE) lines the choroid), the photodiodes taking the place of photoreceptors and the stimulatory electrodes impinging at the boundary of the inner nuclear layer, typically but not exclusively stimulating bipolar cells. An epi-retinal implant (section 1.1.2) [111] is tacked to the inner limiting membrane of the retina at the axon side of the ganglion layer, reached from the inside of the eyeball.

3.6.2 Retinal operations

Light reaching the photoreceptors (section 3.4), of which there are 126 x 10^6 over the whole human retina (section 3.4), is transduced into electrical signals synapsing onto bipolar and horizontal cells. The cones represent a twentieth of the photoreceptors; however, their dendritic arbor is typically larger than that of the rods. Bearing in mind that the optic nerve has only 1.2 million fibres [174], it can be seen that the retina is operating as a very efficient spatiotemporal filter. Photoreceptors threshold and summate, with horizontal cells providing lateral processing by interacting with photoreceptor and bipolar cells. Amacrine cells also provide lateral processing in the inner retina by interacting with bipolar cells and RGCs.

Figure 27 Retinal structure (light arrives from the eye lens at the ganglion cell layer side). Layers, from back to front: pigment epithelium; outer segments of the photoreceptors (cones and rods); outer nuclear layer; outer plexiform layer; inner nuclear layer (horizontal, bipolar and amacrine cells); inner plexiform layer; ganglion cell layer (RGCs: parasol, midget and small bistratified).

This convergence of the retina, in terms of the intensity signals derived from the photoreceptors, is represented at the ganglionar level by the receptive field concept. In this abstraction the signals arriving at a ganglion cell are represented either by an ON-OFF configuration, when excitatory signals predominate [198] and depolarisation causes an action potential to be produced and propagated along the optic nerve fibre, or by an OFF-ON configuration when this is not the case, i.e. where inhibitory signals predominate (Table 5). The centre in a primate retina is half the size of the periphery; this is characterised by spatial frequency in cycles per degree (c/d), quantified by the number of cycles present in the stimulus of a 1º visual angle [198], the size referred to here being the centre radius compared with the peripheral radius [199]. So for a centre radius r_c the surround radius r_s will be 2r_c, giving a centre area of \pi r_c^2 and an outer area of \pi (2r_c)^2, hence an annulus area of 4\pi r_c^2 - \pi r_c^2, i.e. 3\pi r_c^2, meaning the ratio between the annulus area and the centre area is 3 (Figure 28).

Destination of signals                          | On-centre ganglion cell field     | Off-centre ganglion cell field
Converging signals impinging on surround only   | ganglion cell axon does not fire  | contentious, but an off-bipolar cell field would fire
Converging signals impinging on centre only     | ganglion cell axon fires rapidly  | contentious, but an off-bipolar cell field would not fire
No signals reaching centre or surround          | ganglion cell axon does not fire  | ganglion cell axon does not fire
Signals reaching centre and surround            | low frequency firing              | low frequency firing

Table 5 Receptive ON/OFF operation

Figure 28 Receptive field example: a centre of diameter 1.68 µm within a surround of diameter 3.36 µm; when r_surround equals 2 r_centre the annulus area equals 6.65 µm²

3.6.3 Commonality of sub-retinal and epi-retinal approaches

In Carver Mead's 1990 invited paper [200], which refers to Mahowald's "Silicon Retina" [201], a description is given of each node in the retina (a photoreceptor) and the hexagonal resistive network between them. The effect of this network is to compute a spatially weighted average of the photoreceptor inputs, meaning the output from the circuit is the difference between the resistive network average and that node. So this sub-retinal approach, using photodiodes as receptors, replicates the centre-surround processing of the vertebrate retina [202]. This centre-surround processing is also used in epi-retinal approaches, by using the particular edge detection method described below to process the received image from an external camera [40, 111, 200, 201, 203, 204]. The receptive field concept is described by a Difference of Gaussians (DOG), which for the centred two-dimensional case is

f(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2+y^2)/(2\sigma^2)} - \frac{1}{2\pi\beta^2\sigma^2} e^{-(x^2+y^2)/(2\beta^2\sigma^2)}    (1)

where x and y are the pixel co-ordinates, σ (the standard deviation) is, pragmatically, the diameter of the centre, and β (the space constant) is the ratio of the outer diameter to the inner diameter of the receptive field function. To define the difference of Gaussians, i.e. f(x, y, σ), from the above description in image processing terms, the part of the equation relating to the inner circle, i.e. the kernel, is defined as

g_inner(x, y, \sigma) = I(x, y) * \frac{1}{2\pi\sigma^2} e^{-(x^2+y^2)/(2\sigma^2)}    (2)

(where * is the convolution operator and I refers to the intensity at the point). Similarly,

g_outer(x, y, \beta\sigma) = I(x, y) * \frac{1}{2\pi\beta^2\sigma^2} e^{-(x^2+y^2)/(2\beta^2\sigma^2)}    (3)

So

DOG = g_inner(x, y, \sigma) - g_outer(x, y, \beta\sigma)    (4)

Typically finite impulse response (FIR) filters are used to implement these concepts within FPGA fabric.
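A behavioural MATLAB sketch of this centre-surround (DOG) filtering is given below; the kernel size, σ and β values are arbitrary choices for illustration, and the convolution is performed directly with conv2 rather than with the FIR structures used on the FPGA.

% Difference-of-Gaussians (centre-surround) filtering of an intensity image.
sigma = 1.0;  beta = 2.0;  half = 4;                % assumed parameters
[x, y] = meshgrid(-half:half, -half:half);          % kernel support
gInner = exp(-(x.^2 + y.^2) / (2 * sigma^2))        / (2*pi*sigma^2);
gOuter = exp(-(x.^2 + y.^2) / (2 * (beta*sigma)^2)) / (2*pi*(beta*sigma)^2);
dogKernel = gInner - gOuter;                        % equation (4) as a kernel

I = double(rand(32));                               % placeholder 32x32 intensity image
edges = conv2(I, dogKernel, 'same');                % DOG response (edge emphasis)
imagesc(edges); axis image; colorbar;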

3.6 AER concept in context

AER is a time division multiplexing system: signals are sent in one complex transmission and the receiving end has to separate the individual signals. This gives an opportunity to emulate the many wires that would otherwise be required to transmit the intensity value of each pixel of a scene. So for a 1024 by 1024 scene, as required at the fovea of the eye, 1,048,576 wires would need to be emulated. In the case of a 32 by 32 scene the connectivity of 1024 wires would need to be emulated, and for a 4 by 4 scene the connectivity of 16 wires. At 25 fps the time for each frame would be 0.04 s (40 ms); such timing would be appropriate for the initial test setup to display on a monitor. A spike plus its interval occurs in 20 ms at maximum pulsing and in 200 ms at minimal pulsing. Pulses per second can be determined from the interspike time within this time frame; see Figure 29.

Figure 29 Spike representation for one plane (red) over 1 second, showing fifty pulses/s (representing maximum intensity: Tspike = 4 ms, interspike interval Ispike = 16 ms) and 5 pulses/s (representing minimum intensity: Tspike = 4 ms, Ispike = 196 ms)

3.7 AER data format

The data transmitted through the AER `virtual' wiring will be for the colour signals red (R), green (G) and blue (B). The major requirement for transmission is that all information be transferred in 40 ms or less; in conventional terms (T_frame) this is 25 frames per second (fps). For all information to be transferred in 40 ms implies an address-event time (T_ae) for each virtual wire (pixel) that is related to the image size. So in the case of a 32 by 32 test image this time per pixel (T_pixel) would be 0.04 s / 1024 = 39062.5 ns. The AER packet time (T_packet) is synonymous with T_pixel for this project and will encompass the address bits with the pixel data as payload. During AER transmission of the three planes of each pixel the nominal time per plane would be:


T_plane = T_frame / (C × R × M_p), i.e. T_plane = T_packet / M_p    (1)(a)

Where T_frame = the MPEG frame time determined from the fps of the camera
T_plane = the time for one plane of one packet
T_packet = the time for one packet
C = the number of columns in the image
R = the number of rows in the image
M_p = the number of planes in the message packet

So in this case T_plane = 39062.5 / 3 ≈ 13020.8 ns, i.e. 0.04 / (32 × 32 × 3) ≈ 13020.8 ns.

Before and after transmission, T_plane and T_packet are concurrent and in this case the following adaptation of (1)(a) is used:

T_plane = T_frame / (C × R)    (1)(b)

This means that maintaining the maximum throughput of 50 spikes per unit time for one plane implies a spike + interval of 39062.5 / 50 = 781.25 ns.

T_spike + I_spike = T_plane / P_p(max)    (2)

Where T_spike = the time for an impulse to complete
I_spike = the interspike interval
T_plane = the time for one plane
P_p = the number of pulses per plane

Equations (1)(b) and (2) can be combined in the following way:

T_spike + I_spike = T_packet / P_p(max)    (3)

So in this case T_spike + I_spike = 39062.5 / 50 = 781.25 ns.

From observation of the preceding spike representation figure it can be seen that, for maximum pulsing, T_pulse = 5 × T_spike (where T_pulse = interval + T_spike); this gives a factor to be used with (3) to calculate T_spike directly:

T_spike = [T_packet / (M_p × P_p(max))] × (1/5),  I_spike = [T_packet / (M_p × P_p(max))] × (4/5)    (4)

Using (3) and (4) would give a spike time of 781.25 ns / 5 = 156.25 ns. This minimum duration spike time, derived at maximum pulsing, will be a constant, T_spike(constant), for the test scene. In this case T_spike(constant) = 156.25 ns. Now that T_spike(constant) has been determined for the system, the constraint of maximum pulsing can be relaxed and, from (2), the temporal spacing between spikes at any pulsing rate can be worked out.

So in this instance, for a minimal pulsing of 5 and a maximal pulsing of fifty:

I_spike = 39062.5/5 - 156.25 = 7812.5 - 156.25 = 7656.25 ns

I_spike = 39062.5/50 - 156.25 = 781.25 - 156.25 = 625 ns

Figure 30 Virtual spike representation for one plane (red) of a virtual wire of the test scene: at the maximum pulse rate (50 pulses per packet time of 39062.5 ns) Tspike = 156.25 ns with an interspike interval Ispike of 625 ns; at the minimum pulse rate Tspike = 156.25 ns with an interspike interval Ispike of 7656.25 ns


For the three colour planes R, G and B at maximum throughput (North Sky White) there will be 150 events or spikes per packet.

f_event = E_packet / T_packet    (5)

Where f_event is the event frequency
E_packet is the number of events per packet
T_packet is the cycle time for the message packet

f_event = 150 / 0.04 = 3750 events per second

To accommodate the virtual wiring of our test scene this maximum event frequency per packet must be multiplied by the number of pixels within the image:

1024 × 3750 = 3,840,000 events/second = 3.84 Mevents/second
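The timing and event-rate figures derived in this section can be checked with a few lines of MATLAB; the script below simply re-evaluates the expressions above for the 32 by 32, three-plane test case and is included only as a worked illustration.

% Re-evaluate the AER timing figures for the 32x32, 3-plane test scene.
Tframe  = 0.04;                 % frame time (s) at 25 fps
C = 32;  R = 32;  Mp = 3;       % columns, rows, planes per packet
PpMax   = 50;                   % maximum pulses per plane

Tpacket = Tframe / (C * R);                 % 39062.5 ns per pixel/packet
Tplane  = Tpacket / Mp;                     % ~13020.8 ns per plane
TspikePlusI = Tpacket / PpMax;              % 781.25 ns spike + interval
Tspike  = TspikePlusI / 5;                  % 156.25 ns (1/5 of the period)
IspikeMax = TspikePlusI - Tspike;           % 625 ns at maximum pulsing
IspikeMin = Tpacket / 5 - Tspike;           % 7656.25 ns at minimum pulsing
fEvent  = (Mp * PpMax) / Tframe;            % 3750 events/s per packet
fTotal  = fEvent * C * R;                   % 3.84 Mevents/s for the scene
fprintf('Tpacket %.1f ns, Tspike %.2f ns, total %.2f Mevents/s\n', ...
        Tpacket * 1e9, Tspike * 1e9, fTotal / 1e6);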

3.8 Behavioural test image to be used

The test image initial sizing is 32 × 32 pixels, consisting of 64 blocks of colour (each block therefore being 4 × 4 pixels). The blocks, numbered 1 to 64 row-wise, are arranged diagonally as follows:

RED     GREEN   BLUE    CYAN    MAGENTA YELLOW  ORANGE  INDIGO
GREEN   RED     GREEN   BLUE    CYAN    MAGENTA YELLOW  ORANGE
BLUE    GREEN   RED     GREEN   BLUE    CYAN    MAGENTA YELLOW
CYAN    BLUE    GREEN   RED     GREEN   BLUE    CYAN    MAGENTA
MAGENTA CYAN    BLUE    GREEN   RED     GREEN   BLUE    CYAN
YELLOW  MAGENTA CYAN    BLUE    GREEN   RED     GREEN   BLUE
ORANGE  YELLOW  MAGENTA CYAN    BLUE    GREEN   RED     GREEN
INDIGO  ORANGE  YELLOW  MAGENTA CYAN    BLUE    GREEN   RED

Figure 31 1024 pixel behavioural test image

The breakdown of the component primary colours of red (R), green (G) and blue (B) forming each of these blocks is detailed in full in the following figure.



[1] R = 11111111 [2] R = 000 [3] R = 000 [4] R = 000 [5] R = 11111111 [6] R = 11111111 [7] R = 11111111 [8] R = 01111111<br />

G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 01111111 G = 000<br />

B = 000 B = 000 B = 11111111 B = 11111111 B = 11111111 B = 000 B = 000 B = 01111111<br />

[9] R = 000 [10] R = 11111111 [11] R = 000 [12] R = 000 [13] R = 000 [14] R = 11111111 [15] R = 11111111 [16] R = 11111111<br />

G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 01111111<br />

B = 000 B = 000 B = 000 B = 11111111 B = 11111111 B = 11111111 B = 000 B = 000<br />

[17] R = 000 [18] R = 000 [19] R = 11111111 [20] R = 000 [21] R = 000 [22] R = 000 [23] R = 11111111 [24] R =11111111<br />

G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111<br />

B = 11111111 B = 000 B = 000 B = 000 B = 11111111 B = 11111111 B = 11111111 B = 000<br />

[25] R = 000 [26] R = 000 [27] R = 000 [28] R = 11111111 [29] R = 000 [30] R = 000 [31] R = 000 [32] R = 11111111<br />

G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000<br />

B = 11111111 B = 11111111 B = 000 B = 000 B = 000 B = 11111111 B = 11111111 B = 11111111<br />

[33] R = 11111111 [34] R = 000 [35] R = 000 [36] R = 000 [37] R = 11111111 [38] R = 000 [39] R = 000 [40] R = 000<br />

G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111

B = 11111111 B = 11111111 B = 11111111 B = 000 B = 000 B = 000 B = 11111111 B = 11111111<br />

[41] R = 11111111 [42] R = 11111111 [43] R = 000 [44] R = 000 [45] R = 000 [46] R = 11111111 [47] R = 000 [48] R = 000<br />

G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000<br />

B = 000 B = 11111111 B = 11111111 B = 11111111 B = 000 B = 000 B = 000 B = 11111111<br />

[49] R = 11111111 [50] R = 11111111 [51] R = 11111111 [52] R = 000 [53] R = 000 [54] R = 000 [55] R = 11111111 [56] R = 000<br />

G = 01111111 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111<br />

B = 000 B = 000 B = 11111111 B = 11111111 B = 11111111 B = 000 B = 000 B = 000<br />

[57] R = 01111111 [58] R = 11111111 [59] R = 11111111 [60] R = 11111111 [61] R = 000 [62] R = 000 [63] R = 000 [64] R = 11111111<br />

G = 000 G = 01111111 G = 11111111 G = 000 G = 11111111 G = 000 G = 11111111 G = 000<br />

B = 01111111 B = 000 B = 000 B = 11111111 B = 11111111 B = 11111111 B = 000 B = 000<br />

Figure 32 Colour composition<br />



3.8.1 Other test images<br />

For other behavioural and structural work, smaller test images based on subsets of the 32 by 32 test image may be used; in particular, for the receiver implementation the 4 by 4 test image (i.e. 16 pixels) will be used.

Red Green Blue Cyan<br />

Magenta Yellow Orange Indigo<br />

Green Red Green Blue<br />

Cyan Magenta Yellow Orange<br />

Figure 33 sixteen pixel image<br />

3.9 Biphasic pulse<br />

In order to ensure zero charge build-up on nerve cells and to allow chemical interactions to occur, biphasic pulses must be formed when the implant is driven by the electronics [57]. The shape of the biphasic pulse proposed for this work, compared with the spike representation at maximum pulsing, is shown in Figure 34.



Pulse generation is not part of the receiver chip design per se; rather it belongs to the driver (the circuitry driving the electrodes), a post-processing interface typically mounted as part of the retinal implant, atop the electrode array. The negative-going phase of the pulse lasts for one quarter of the biphasic pulse time, in this case 1 ms, and the positive-going phase likewise lasts for one quarter of the biphasic pulse time, i.e. 1 ms. In this case the biphasic pulse time matches the 4 ms taken by the biological action potential shown earlier in this chapter.

Spike plus off time equals the time for the biological action potential (represented by the biphasic pulse) plus the interspike interval, which in the case of maximum pulsing, i.e. full hue intensity, will be 20 ms. Note that the spike time never alters; only the off time varies with the shade of colour (hue intensity), between 16 ms at full pulsing and 196 ms at minimum pulsing.
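In other words, treating the pulse rate as the number of outerpulses per second, the off time follows directly from the figures just quoted (this is a restatement, not an additional result):

\[ T_{off} = \frac{1\ \text{s}}{\text{pulse rate}} - T_{spike} = \frac{1}{50} - 0.004 = 0.016\ \text{s at full pulsing}, \qquad \frac{1}{5} - 0.004 = 0.196\ \text{s at minimum pulsing} \]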



[Figure: upper trace, the `spike' representation with T_spike = 4 ms followed by an interspike interval I_spike of 16 ms over T_plane; lower trace, the equivalent biphasic pulse in which a negative-going phase, an interphase delay and a positive-going phase make up the spike-plus-off time, again followed by the 16 ms interspike interval.]

Figure 34 Biphasic pulse compared to `spike'

3.10 Concluding this chapter<br />

The spike time of 52.08 ns determined by this analysis is specific to the behavioural test scene to be used, a 32 by 32 image composed of 1024 pixels. As at present (10/07/2012) retinal implants are unlikely to cope with much more than 1024 electrodes, and achromatic approaches to retinal prostheses are now in clinical trials [116], this at first sight seems a justifiable picture size to work with. Unfortunately, with three colours there would be an electrode requirement of three times this amount, i.e. 3072 electrodes. An electrode requirement of 3072 electrodes for a retinal prosthesis dealing solely with colour is difficult, given the RGC axon size of the order of 1 µm/2.5 µm and the foveal area (2.66 mm²) in which the preponderance of colour-sensitive fibres exists. Beyond the foveal area the positioning of electrodes to interface with colour-sensitive RGCs would be very onerous, for little extra value in terms of retinal prosthesis capability. In any case the camera approach in its present form would also have a limit of a 1024 by 1024 image, i.e. 1048576 pixels, since beyond that size there is no longer a one-to-one correspondence between cones and ganglion axons via bipolar cells; beyond that one-to-one correspondence, horizontal cells and amacrine cells exert their influence to give convergence of the receptive fields onto the ganglion cells [205].



[Figure: the 32 × 32 test image with its 1024 pixels numbered 1 to 1024 in row order, showing the colour block (as defined in Figure 31) to which each pixel belongs.]

Figure 35 Correspondence of pixels to colour



Chapter 4 Sender chip<br />

The purpose of the sender chip is to convert an incoming signal from its conventional representation to an AER format suitable for transmission to the receiver chip. For the purposes of this work the incoming signal will be the picture data held in a ROM. This picture data is initially a 32 by 32 test image, i.e. a 1024 pixel count, later revised to a smaller test image for implementation. Eventually this source will be replaced by a direct camera input, as indicated by the following diagram; ideally the camera would output in RGB format, avoiding pre-processing from other formats such as YUV.

[Figure: test scene (from camera) → SENDER (accepts and processes the scene into the AER protocol) → AER transmission lines → RECEIVER chip (routes signals to implants).]

Figure 36 Envisaged system


4.1 Sender concepts<br />

From chapter one we know that the maximum address event time for an AER packet representing one pixel of a 1024 pixel count image can be 39062.5 ns; therefore the full image would take 1024 × 39062.5 = 40000000 ns, i.e. 40 ms. The figure calculated for the spike time (T_spike) on this basis was 156.25 ns, and for T_pulse it was 781.25 ns, i.e. 5 × T_spike.

The AER packet time (T_packet), aka T_pixel, shortens as the frame rate (fps) is increased; the chart (taken from fps_calcs.xls) demonstrates this relationship.

[Chart: T_packet, labelled T_ae (ns), plotted against frame rate from 25 fps to 100 fps, falling from 39062.5 ns at 25 fps towards 9765.625 ns at 100 fps.]

Figure 37 T_ae versus fps


For example, choosing a frame rate of 100 fps gives a spike time of 39.0625 ns and a partial spike time of 9.765625 ns, from a subset of the table (fps_calcs.xls) used to construct the above chart.

fps | T_f (s)  | T_ae @1024 pixels (s) | T_ae (ns) | T_long_pulse (ns) | T_spike_pulse (ns) | T_partial_spike (ns) | f_pixel (Hz)
25  | 0.04     | 3.906E-05 | 39062.5  | 781.25   | 156.25   | 39.0625  | 25600
50  | 0.02     | 1.953E-05 | 19531.25 | 390.625  | 78.125   | 19.53125 | 51200
75  | 0.013333 | 1.302E-05 | 13020.83 | 260.4167 | 52.08333 | 13.02083 | 76800
100 | 0.01     | 9.766E-06 | 9765.625 | 195.3125 | 39.0625  | 9.765625 | 102400

Table 6 T_spike related to fps
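The columns of Table 6 are related in a fixed way for the 1024 pixel image (a restatement of the table rather than an addition to it):

\[ T_{ae} = \frac{T_f}{1024} = \frac{1}{\text{fps} \times 1024}, \quad T_{long\_pulse} = \frac{T_{ae}}{50}, \quad T_{spike\_pulse} = \frac{T_{ae}}{250}, \quad T_{partial\_spike} = \frac{T_{ae}}{1000}, \quad f_{pixel} = \text{fps} \times 1024 \]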

The partial spike time refers to the quarter component part of the biphasic pulse<br />

which can be accommodated using the clock pulse (10ns) of a typical FPGA running<br />

at 100MHz. For this chapter, dealing with AER stream production, we could put this<br />

consideration on hold and typify the AER sender chip as operating at circa 25MHz<br />

with an address event timing of T ae = 39062.5 ns and a spike time of 156.25 ns.<br />

For one pixel of an image at maximum intensity there will be fifty `outerpulses', i.e. the actual spike plus the time before another one is allowed (or makes sense). As this pixel information must be transmitted in one second, each outerpulse, in real time, must take 0.02 s. So in conventional terms, for a frame rate of 50 fps, each pixel experiences one outerpulse per frame during propagation, albeit at faster than `real time'. As the innerpulse (comprising the `spike' plus recovery time) biologically takes 4 ms (0.004 s), this implies an `off' time at maximum intensity of 0.02 - 0.004 = 0.016 s, again in `real time', as has been previously stated. Using the example of a 1024 pixel image at 50 fps, as each frame takes 0.02 s the serialised time for a pixel must equate to 0.02/1024, i.e. 0.00001953125 s = 19531.25 ns, as shown in Table 6.

Example hierarchy:

Level | T_periodic (ns) | T_periodic (s) | Frequency (Hz) | Frequency (MHz)
[1] partial_spike, circa 25 MHz | 40      | 0.00000004 | 25000000 | 25
[2] spike, circa 6.25 MHz       | 160     | 0.00000016 | 6250000  | 6.25
[3] outer_pulse, circa 1.25 MHz | 800     | 0.0000008  | 1250000  | 1.25
[4] pixel, circa 25.6 kHz       | 39062.5 | 3.9063E-05 | 25600    | 0.0256

2nd example hierarchy:

[1] partial_spike, circa 25000 MHz | 0.04 | 4E-11      | 25000000000 | 25000
[2] spike, circa 6250 MHz          | 0.16 | 1.6E-10    | 6250000000  | 6250
[3] outer_pulse, circa 1250 MHz    | 0.8  | 8E-10      | 1250000000  | 1250
[4] pixel, circa 25 MHz            | 40   | 0.00000004 | 25000000    | 25

3rd example hierarchy:

[1] partial_spike, circa 100 MHz | 10       | 0.00000001 | 100000000 | 100
[2] spike, circa 25 MHz          | 40       | 0.00000004 | 25000000  | 25
[3] outer_pulse, circa 5 MHz     | 200      | 0.0000002  | 5000000   | 5
[4] pixel, circa 102.4 kHz       | 9765.625 | 9.7656E-06 | 102400    | 0.1024

Table 7 Examples of clocking hierarchy

The following diagrams illustrate the component parts for construction of the sender chip: the first top-level block diagram in high-level programming terms, the second in Xilinx FPGA conceptual terms, and the third as a VHDL programming description [Appendix C].


[Figure: image (frame buffer) → derive pulse count from hue intensity → AER stream (incorporate addresses) → output AER stream.]

Figure 38 Forming AER stream

[Figure: sender (FPGA) block diagram. An MV-D640 series camera (Photonfocus) feeds a camera control and framegrabber block under control/supervision; RGB values are converted to `spikes' (with addresses) and buffered in SRAM under RAM control; the AER interface then emits AER packets, one packet per pixel's spikes, towards the node.]

Figure 39 Forming AER stream


[Figure: sender program overview and clocking hierarchy for a 32 by 32 image. The incoming FPGA clock (e.g. 100 MHz) is divided into the partial_spike_clock (e.g. 25 MHz), the spike_clock (25÷4 MHz) and the pixel_clock (25÷1000 MHz). The data-flow components are the stored image (frame), conversion of intensity to pulse_count, formation of the AER stream and output of the AER stream for the R, G and B planes. NB the partial_spike_clock frequency is determined by the image size.]

Figure 40 Sender program overview
Figure 40 sender program overview


4.2 Sender AER format<br />

Each AER packet, representing a pixel, contains address bits followed by a pulse count for each plane of colour within the pixel. An example of the first row of a 32 by 32 image is shown in the following table of values. The first ten bits are the address bits and the remaining 18 bits are the data containing the colour information in the form of a pulse count.

0000000001 110010000000000000<br />

0000000010 110010000000000000<br />

0000000011 110010000000000000<br />

0000000100 110010000000000000<br />

0000000101 000000110010000000<br />

0000000110 000000110010000000<br />

0000000111 000000110010000000<br />

0000001000 000000110010000000<br />

0000001001 000000000000110010<br />

0000001010 000000000000110010<br />

0000001011 000000000000110010<br />

0000001100 000000000000110010<br />

0000001101 000000110010110010<br />

0000001110 000000110010110010<br />

0000001111 000000110010110010<br />

0000010000 000000110010110010<br />

0000010001 110010000000110010<br />

0000010010 110010000000110010<br />

0000010011 110010000000110010<br />

0000010100 110010000000110010<br />

0000010101 110010110010000000<br />

0000010110 110010110010000000<br />

0000010111 110010110010000000<br />

0000011000 110010110010000000<br />

0000011001 110010011001000000<br />

0000011010 110010011001000000<br />

0000011011 110010011001000000<br />

0000011100 110010011001000000<br />

0000011101 011001000000011001<br />

0000011110 011001000000011001<br />

0000011111 011001000000011001<br />

0000100000 011001000000011001<br />

(The payload values above correspond, in groups of four pixels, to the first row of blocks in the test image: red, green, blue, cyan, magenta, yellow, orange and indigo.)

Table 8 Short AER format

In the example shown there are ten address bits and eighteen bits for the AER `payload'. This payload will always be eighteen bits regardless of image size, whereas the number of address bits will alter with image size. The number of address bits required for various image sizes can be calculated from the following formula, where the answer obtained is rounded up to the nearest whole number:

\[ \text{address bits} = \frac{\log_{10}(\text{pixel count})}{\log_{10} 2} \]

Side of square image | log10(pixel count) | pixel count | Address bits required | Total bit length
2    | 0.60206 | 4       | 2  | 20
4    | 1.20412 | 16      | 4  | 22
8    | 1.80618 | 64      | 6  | 24
16   | 2.40824 | 256     | 8  | 26
32   | 3.0103  | 1024    | 10 | 28
1024 | 6.0206  | 1048576 | 20 | 38

Table 9 Example correspondences
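As an illustration of the short AER format for the 32 by 32 image, the sketch below assembles one 28-bit packet from a 10-bit pixel address and three 6-bit pulse counts. It is a minimal sketch of the packet layout only; the entity and port names are hypothetical and are not taken from the Appendix C code.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity aer_pack is
  port (
    pixel_address : in  std_logic_vector(9 downto 0);  -- 10 address bits for a 32 x 32 image
    r_count       : in  unsigned(5 downto 0);          -- red pulse count, 0..50
    g_count       : in  unsigned(5 downto 0);          -- green pulse count, 0..50
    b_count       : in  unsigned(5 downto 0);          -- blue pulse count, 0..50
    aer_packet    : out std_logic_vector(27 downto 0)  -- address followed by the 18-bit payload
  );
end entity;

architecture rtl of aer_pack is
begin
  -- Address bits first, then six bits each for the red, green and blue pulse counts.
  aer_packet <= pixel_address &
                std_logic_vector(r_count) &
                std_logic_vector(g_count) &
                std_logic_vector(b_count);
end architecture;
```

For the first pixel of the test image (address 0000000001, a fully saturated red block) this produces 0000000001 110010 000000 000000, matching the first row of Table 8.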

4.2.1 Alternative (discounted) sender AER format<br />

An alternative, robust AER format can be constructed whereby the bitstream consists of ten bits of address (for a 32 by 32 image) followed by a payload of 150 bits, of which the first 50 bits encode red plane information, the second 50 bits encode green plane information and the final 50 bits encode blue plane information. Such a format would be more error tolerant to the loss of bits within the payload (e.g. a two bit error would be negligible) but could only be adopted if a longer bit stream could be tolerated. As this format will be integral to the receiver chip design, it can justifiably be used as part of the test setup to demonstrate that such a format can be output to a monitor to display the picture expected to be formed. Programming components (or extracts thereof) are included in Appendix B of this thesis.



4.3 Test setup<br />

The structural program for a test setup to demonstrate the display of an AER<br />

formatted picture consists of the following components:<br />

[Figure: image in AER format → convert AER stream to pulse count → convert pulse count to hue intensity → VGA module → DVI interface → display monitor.]

Figure 41 Display of AER test image

The purpose of the test setup is to display on a monitor the expected image to be<br />

presented by the retinal implants and perceived by the patient.<br />

4.4 Sender chip reports<br />

The Xilinx software used for FPGA behavioural simulations and structural<br />

implementation has the facility to produce detailed reports at various stages of the<br />

design. The following detailed reports can be obtained: (1) Synthesis report, (2)<br />



Translation report, (3) Map report, (4) Place and route report, (5) Post-PAR Static<br />

Timing Report, (6) Power Report and (7) Bitgen report [Appendix E]<br />

4.4.1 Power Analysis<br />

Xilinx FPGAs have a power analyser facility; example screenprints are shown here for the power analysis after implementation of the sender chip within the FPGA fabric. The quiescent power is 0.60008 W and the dynamic power is 0.03844 W (38.44 mW).

Figure 42 sender chip power analyser screenprint<br />

4.5 Sender schematic<br />

The Register Transfer Level (RTL) schematic (Figure 43) illustrates the derivation of the clocks required for the programming blocks from the Digital Clock Multiplier (DCM) supplied as intellectual property (IP) within the FPGA fabric. From the DCM output frequency the partial_spike_clock frequency (the "out_clock" signal) is derived, and from it the spike (spike_clock) and pixel (pixel_clock) frequencies. At 25 fps, for a 16 pixel image held in memory (mem_of_16_stream), the partial_spike_clock frequency is 400 kHz, i.e. 1000 times the required pixel frequency of 400 Hz. The spike frequency (spike_clock) is 250 times the required pixel frequency of 400 Hz, i.e. 100 kHz. At 5 fps for a 16 pixel image, using the same logic, the clock frequencies would reduce to a fifth of these values.



[Figure: RTL schematic of the 16 pixel sender. A pixelclock_dcm instance generates the clocks; get_partial_clock_from_incoming_clock, get_spike_on_from_partial_spike_clock and get_pixel_clock_from_partial_spike_clock derive the partial_spike, spike and pixel clocks; mem_of_16_stream holds the stored image; three pulsar instances (red, green and blue) convert intensity values to pulse counts; a counter generates the four-bit pixel address; and pulse_count_to_pc_stream assembles the outgoing AER stream (pc_stream(21:0)).]

Figure 43 sender_rtl_16
Figure 43 sender_rtl_16


In the Xilinx Virtex 5 the DCM has the following possible divisions from either 400 MHz or 100 MHz: 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 9, 10, 11, 12, 13, 14, 15 and 16. So one possible combination to obtain 400 kHz as the `out_clock' value would be to derive 40 MHz as the output from the DCM and divide this by 100 in the programming block labelled "get_partial_clock_from_incoming_clock". At 5 fps for the same image, to obtain 80 kHz as the out_clock, derive 20 MHz as the output from the DCM and divide this by 250.

In the block "get_spike_on_from_partial_spike_clock" divide by four to get the spike_clock frequency for all fps; e.g. at 25 fps this would be 100 kHz.

In the block "get_pixel_clock_from_partial_spike_clock" divide by one thousand to get the pixel frequency for all fps; e.g. at 25 fps this would be 400 Hz.
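The divide-by-N blocks just described can be sketched generically as a counter that toggles its output every N/2 input cycles. This is a minimal illustration assuming an even divisor; the entity name and generic are hypothetical and not the Appendix C code:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity clock_divider is
  generic (
    DIVIDE_BY : positive := 4  -- e.g. 4 for the spike_clock, 1000 for the pixel_clock
  );
  port (
    clk_in  : in  std_logic;
    clk_out : out std_logic
  );
end entity;

architecture rtl of clock_divider is
  signal count   : natural range 0 to DIVIDE_BY - 1 := 0;
  signal out_reg : std_logic := '0';
begin
  -- Toggling every DIVIDE_BY/2 rising edges gives an output period of
  -- DIVIDE_BY input cycles, i.e. clk_out = clk_in / DIVIDE_BY.
  process (clk_in)
  begin
    if rising_edge(clk_in) then
      if count = DIVIDE_BY / 2 - 1 then
        out_reg <= not out_reg;
        count   <= 0;
      else
        count <= count + 1;
      end if;
    end if;
  end process;

  clk_out <= out_reg;
end architecture;
```

With DIVIDE_BY set to 100, 4 and 1000 respectively, this reproduces the 400 kHz, 100 kHz and 400 Hz clocks quoted above for the 16 pixel image at 25 fps.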

The three instances of the "pulsar" block convert the conventional 24 bit colour measure to a pulse count (described earlier), over time, for each of the three colour planes: red, green and blue. The counter counts and outputs the address bits for each pixel as it occurs. Finally, "pulse_count_to_pc_stream" time-multiplexes and outputs the AER stream consisting of AER packets with a payload of 18 bits; the first six bits of the payload represent the colour intensity for the red plane, followed by six bits each for the green and blue planes respectively.
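One plausible intensity-to-pulse-count conversion for a pulsar-style block is a rounded linear scaling of the 8-bit hue intensity onto 0..50 pulses, which reproduces the values seen in Table 8 (11111111 → 50 pulses, 01111111 → 25 pulses). This is a sketch of such a mapping, not the Appendix C implementation:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity intensity_to_pulse_count is
  port (
    intensity   : in  unsigned(7 downto 0);  -- one colour plane of the 24-bit pixel
    pulse_count : out unsigned(5 downto 0)   -- 0..50 pulses over one second
  );
end entity;

architecture rtl of intensity_to_pulse_count is
begin
  process (intensity)
    variable scaled : natural range 0 to 50;
  begin
    -- rounded linear scaling: (intensity * 50 + 127) / 255
    scaled := (to_integer(intensity) * 50 + 127) / 255;
    pulse_count <= to_unsigned(scaled, pulse_count'length);
  end process;
end architecture;
```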

4.6 Chapter summary<br />

The size of the image (in pixel count) to be transmitted will be limited by the number of electrodes possible at the retinal implant, and presently (2010) this is expected to rise to over 1000 within several years. For the sender chip, programmes were completed both behaviourally and structurally, i.e. synthesised and implemented, for image sizes ranging from 16 pixels (4 by 4) up to 1024 pixels (32 by 32). To incorporate colour planes for each pixel, the electrode requirement when received by the receiver chip will increase for each of those image sizes (see Table 10).

In essence the sender chip converts the colour image obtained from a commercial off-the-shelf (COTS) camera to an AER stream. The AER packet bit length will increase with image size, as shown in the charts following Table 9 (source: aer_compare.xls).

pix_count | electrode requirement (pix_count × 3)

4 12<br />

8 24<br />

16 48<br />

32 96<br />

64 192<br />

128 384<br />

256 768<br />

512 1536<br />

1024 3072<br />

Table 10 electrode requirement<br />

[Chart: total AER packet bit length (20 to 28 bits) against pixel count, for image sizes up to 1024 pixels.]

Figure 44 Up to 1024 pixel image, i.e. 32 by 32



[Chart: total AER packet bit length (20 to 38 bits) against pixel count, up to the 1048576 pixel image limit.]

Figure 45 Image limit of 1048576 pixels, i.e. 1024 by 1024



Chapter 5 Receiver chip<br />

The following block diagram indicates the components of the receiver chip; programming code extracts are included in Appendix D of this thesis.

[Figure: incoming (`short') AER stream → produce colour data streams → convert to `long' format → stimulus generator → produce output streams → form biphasic pulses and stimulate implant.]

Figure 46 Receiver block diagram

5.1 Clocking calculations (sender)<br />

For one `frame' at 25 fps of a 32 by 32 image, an entire AER stream of 1024 packets must occur for delivery at the implant. As each `frame' takes 0.04 s, each packet must take at most 0.04/1024 = 0.0000390625 s, i.e. 39062.5 ns, so the packet frequency (f_packet) is 25.6 kHz. Within each packet there are potentially 250 spikes, although only 50 are expected; this being the case, the periodic time allowed for each spike to occur will be 39062.5 ns / 250 = 156.25 ns, and hence f_spike (the frequency of the spike clock) will be the reciprocal of 156.25 ns, i.e. 6.4 MHz. Finally, the periodic time for T_partial_spike, nominally the quarter component of a biphasic pulse, is 156.25 ns / 4 = 39.0625 ns (a thousandth of T_packet), giving a frequency of 25.6 MHz.

fps | T_frame (s) | row | col | pix_count | T_packet (ns) | f_packet (Hz) | T_spike / T_biphasic (ns) | f_spike / f_biphasic (MHz)

25 0.04 32 32 1024 39062.5 25600 156.25 6.4<br />

25 0.04 16 16 256 156250 6400 625 1.6<br />

25 0.04 8 8 64 625000 1600 2500 0.4<br />

25 0.04 4 4 16 2500000 400 10000 0.1<br />

25 0.04 2 2 4 10000000 100 40000 0.025<br />

50 0.02 32 32 1024 19531.25 51200 78.125 12.8<br />

50 0.02 16 16 256 78125 12800 312.5 3.2<br />

50 0.02 8 8 64 312500 3200 1250 0.8<br />

50 0.02 4 4 16 1250000 800 5000 0.2<br />

50 0.02 2 2 4 5000000 200 20000 0.05<br />

100 0.01 32 32 1024 9765.625 102400 39.0625 25.6<br />

100 0.01 16 16 256 39062.5 25600 156.25 6.4<br />

100 0.01 8 8 64 156250 6400 625 1.6<br />

100 0.01 4 4 16 625000 1600 2500 0.4<br />

100 0.01 2 2 4 2500000 400 10000 0.1<br />

Table 11 clocking calculations<br />

The following table (taken from “fps_calcs.xls”) illustrates these calculations:<br />

Circa 25 MHz | f (Hz) | f (kHz) | f (MHz) | T_periodic (s) | T_periodic (ns) | T_periodic (µs)
T_aer_stream                | 25       | 0.025 | 0.000025 | 0.04        | 40000000 | 40000
T_pixel                     | 25600    | 25.6  | 0.0256   | 3.90625E-05 | 39062.5  | 39.0625
T_spike                     | 6400000  | 6400  | 6.4      | 1.5625E-07  | 156.25   | 0.15625
T_partial_spike, circa 40ns | 25600000 | 25600 | 25.6     | 3.90625E-08 | 39.0625  | 0.0390625

The following sub-table either reads each frame four times or allows for 100 fps:

Circa 100 MHz | f (Hz) | f (kHz) | f (MHz) | T_periodic (s) | T_periodic (ns) | T_periodic (µs)
T_aer_stream                | 100       | 0.1    | 0.0001 | 0.01        | 10000000 | 10000
T_pixel                     | 102400    | 102.4  | 0.1024 | 9.76563E-06 | 9765.625 | 9.765625
T_spike                     | 25600000  | 25600  | 25.6   | 3.90625E-08 | 39.0625  | 0.0390625
T_partial_spike, circa 10ns | 102400000 | 102400 | 102.4  | 9.76563E-09 | 9.765625 | 0.009765625

Using a 100 MHz clock and showing the effect on T_partial_spike:

Exactly 100 MHz | f (Hz) | f (kHz) | f (MHz) | T_periodic (s) | T_periodic (ns) | T_periodic (µs)
T_aer_stream                  | 97.65625  | 0.0976563 | 9.77E-05 | 0.01024    | 10240000 | 10240
T_pixel                       | 100000    | 100       | 0.1      | 0.00001    | 10000    | 10
T_spike                       | 25000000  | 25000     | 25       | 0.00000004 | 40       | 0.04
T_partial_spike, exactly 10ns | 100000000 | 100000    | 100      | 0.00000001 | 10       | 0.01

Table 12 Simulation clocking



5.2 Power Analysis<br />

Xilinx FPGAs have a power analyser facility; an example screenprint is shown here for the power analysis after implementation of the 16 pixel (colour) receiver chip within the FPGA fabric. The quiescent power is 0.60006 W and the dynamic power is 0.03772 W (37.72 mW).

Figure 47 receiver power analysis<br />



5.3 Receiver schematics<br />

In Figure 48 the first block, "pulse_count_to_AER_stream", receives from the sender chip the AER stream formatted as address bits followed by a payload of 18 bits of colour data. The 16 pixel image to be dealt with has 4 address bits associated with the payload. This block converts the payload from a pulse count to a string of 50 bits for each colour plane, retains the pixel address and outputs a 154 bit stream.

The second block, "preform_for_electrodes", separates the planes of colour into three separate streams, with each plane of colour retaining the pixel address with which it is associated, giving the potential for each plane of colour to be addressed separately.

The third block (produce_colour_data_streams) strips out the address bits and continues the processing sequentially on each plane of colour data.

Block four (fully_extend_data_streams) packs the colour data into pixel-length streams to be fed into the routing stage. Block five, the routing stage (produce_wire_outputs), has a twofold function: (1) it routes the pixel streams to the relevant inputs of the electrode driving circuitry and (2) it allows routing adjustments to be made where necessary within the software environment.
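The core of the first block's conversion can be sketched as a thermometer-style expansion of each 6-bit pulse count into a 50-bit stream, one bit per potential pulse. This is a minimal illustration with hypothetical names, not the Appendix D code:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Expand one 6-bit pulse count (0..50) into 50 bits in which the first
-- pulse_count bits are '1' (active pulses) and the remainder are '0'.
entity pulse_count_to_bits is
  port (
    pulse_count : in  unsigned(5 downto 0);
    bit_stream  : out std_logic_vector(49 downto 0)
  );
end entity;

architecture rtl of pulse_count_to_bits is
begin
  process (pulse_count)
  begin
    for i in 0 to 49 loop
      if i < to_integer(pulse_count) then
        bit_stream(i) <= '1';
      else
        bit_stream(i) <= '0';
      end if;
    end loop;
  end process;
end architecture;
```

Instantiated once per colour plane, three such expansions plus the retained 4-bit pixel address account for the 154-bit stream described above.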



[Figure: RTL schematic of the 16 pixel receiver. A pixelclock_dcm instance, together with get_partial_clock_from_incoming_clock, get_spike_on_from_partial_spike_clock and get_pixel_clock_from_partial_spike_clock, generates the clocks; pulse_count_to_AER_stream, preform_for_electrodes, produce_colour_data_streams, fully_extend_data_streams and produce_wire_outputs process the incoming 154-bit stream into the red, green and blue outputs (red_1 to red_16, green_1 to green_16 and blue_1 to blue_16, each 4 bits wide) mapped onto the FPGA I/O pins.]

Figure 48 receiver rtl_16


5.4 Clocking calculations (receiver)<br />

The following table (Table 13) indicates the relationships between the frequencies when the assumption is made that the camera frame rate is 1 fps. The frequency at which the system would then be set to run is denoted by f_part, corresponding to each component part of the biphasic pulse that forms the representation of the biological spike. In real time each spike lasts 4 ms and therefore each component of the biphasic spike lasts 1 ms; that is, 1 ms of the biphasic pulse represents an `on' pulse of negative polarity, 2 ms represents zero polarity and 1 ms represents an `on' pulse of positive polarity. Derived from f_part is f_spike, i.e. the spiking rate, and finally f_packet, which is the frequency of the AER packet, i.e. the pixel frequency. For completeness the periodic times for these frequencies are also shown in the table.

fps | T_frame (s) | pix_count | T_packet (s) | T_part (s) | T_spike / T_biphasic (s) | f_spike / f_biphasic (Hz) | f_part (Hz) | f_packet (Hz)

1 1 1024 0.00097656 9.76563E-07 3.90625E-06 256000 1024000 1024<br />

1 1 256 0.00390625 3.90625E-06 0.000015625 64000 256000 256<br />

1 1 64 0.015625 0.000015625 0.0000625 16000 64000 64<br />

1 1 16 0.0625 0.0000625 0.00025 4000 16000 16<br />

1 1 4 0.25 0.00025 0.001 1000 4000 4<br />

Table 13 receiver calculations<br />

5.5 Programming components descriptions<br />

These subsections describe the following component parts: `Incoming(short) AER<br />

stream’, `Production of colour data streams’, `Convert to `long’ format’, `Production<br />

of output streams’ and finally `Formation of biphasic pulses prior to delivery to the<br />

retinal implant’.<br />



5.5.1 Incoming (short) AER stream<br />

The `short' AER stream for a 16 pixel image consists of 8 address bits and 18 bits of colour plane information. This initial component (preform_for_electrodes) simply produces 8 address bits and 6 bits of colour information for each plane of colour; this encoding of the integer range 0..63 holds the pulse count of up to 50 pulses at maximum intensity.

5.5.2 Production of colour data streams<br />

The pulse count for each plane is now converted (produce_colour_data_streams) to a 50 bit representation in which each bit of value one is an active pulse and each bit of value zero is inactive, i.e. no pulse. At this stage of the processing, within the FPGA fabric, a value of one means that the next stage of the processing will recognise that an innerpulse (aka biphasic pulse), representing a spike, or action potential in biological terminology, occurs within an outerpulse.

5.5.3 Convert to long format<br />

For the red, green and blue planes the colour data stream held in fifty bits is now converted so that each outerpulse is represented by twenty bits, where four bits represent the `spike', aka the innerpulse, and the remaining 16 bits represent the off time of the outerpulse. The information for a full pixel for each plane is therefore held in 1000 bits.
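The expansion from the 50-bit stream to the 1000-bit `long' format can be sketched as below. It assumes the active innerpulse uses the "1001" pattern described in the next subsection and places it in the low four bits of each 20-bit outerpulse; the names and bit ordering are illustrative rather than taken from the Appendix D code:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Expand a 50-bit colour data stream into the 1000-bit long format:
-- each outerpulse becomes a 4-bit innerpulse followed by a 16-bit off time.
entity extend_to_long_format is
  port (
    short_stream : in  std_logic_vector(49 downto 0);
    long_stream  : out std_logic_vector(999 downto 0)
  );
end entity;

architecture rtl of extend_to_long_format is
  constant ACTIVE_PULSE : std_logic_vector(3 downto 0)  := "1001";
  constant NO_PULSE     : std_logic_vector(3 downto 0)  := "0000";
  constant OFF_TIME     : std_logic_vector(15 downto 0) := (others => '0');
begin
  process (short_stream)
  begin
    for i in 0 to 49 loop
      if short_stream(i) = '1' then
        long_stream(20*i + 19 downto 20*i) <= OFF_TIME & ACTIVE_PULSE;
      else
        long_stream(20*i + 19 downto 20*i) <= OFF_TIME & NO_PULSE;
      end if;
    end loop;
  end process;
end architecture;
```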

5.5.4 Production of output streams<br />

Using a case statement, 250 blocks of four bits are sent to each of the outgoing wires (sixteen for each colour), equally spaced over a second, where those four bits represent a spike. The active spike, aka biphasic pulse, is represented by 1001, i.e. the first bit will produce a negative-going pulse and the fourth a positive-going pulse; no pulse occurring is 0000. As only 50 active spikes are allowed to occur within a second, due to an `off time' being associated with each one, any change in colour intensity will only be seen after this off time.
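In place of the full case statement, the stepping through of the 250 four-bit blocks for one wire can be sketched with a counter that slices the 1000-bit long-format stream, clocked so that the blocks are spread evenly over a second. The entity and signal names are hypothetical:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity output_stream_mux is
  port (
    spike_clk   : in  std_logic;                       -- 250 ticks per second
    long_stream : in  std_logic_vector(999 downto 0);  -- one plane of one pixel
    wire_out    : out std_logic_vector(3 downto 0)     -- 4-bit block towards the DAC
  );
end entity;

architecture rtl of output_stream_mux is
  signal block_index : natural range 0 to 249 := 0;
begin
  process (spike_clk)
  begin
    if rising_edge(spike_clk) then
      -- present the next 4-bit block ("1001" for a spike, "0000" otherwise)
      wire_out <= long_stream(4*block_index + 3 downto 4*block_index);
      if block_index = 249 then
        block_index <= 0;
      else
        block_index <= block_index + 1;
      end if;
    end if;
  end process;
end architecture;
```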

5.6 Summary of FPGA resource utilisation<br />

Resource | Used | Available | Utilisation
Number of Slice Registers         | 102 | 28800 | 0%
Number of Slice LUTs              | 119 | 28800 | 0%
Number of fully used LUT-FF pairs | 100 | 121   | 82%
Number of bonded IOBs             | 193 | 480   | 40%
Number of BUFG/BUFGCTRLs          | 3   | 32    | 9%
Number of DCM_ADVs                | 1   | 12    | 8%

Table 14 Receiver chip FPGA resource data

5.7 Post processing for electrodes<br />

The biphasic pulse, represented in the output stream by "1001", requires some post processing so that the first bit produces a negative-going current pulse to the implant and then, after two biphasic clock pulses, a positive-going current pulse is delivered to the implant. Although this is a parallel bus topology, at this point the address bits have been expended within the receiver chip and hence each output wire can deliver stimulation data directly to the digital-to-analogue converter circuitry, which in turn feeds each stimulation site of the implant. The following block diagram illustrates this process.



[Figure: each of the receiver's red plane outputs r1 to r16 feeds a 4-bit DAC, which drives the corresponding electrode where a pulse exists and zero otherwise.]

Figure 49 Outputting to electrodes



5.8 Stimulator Retinal Interface

In the sub_retinal approach the electrode array is positioned, replacing damaged<br />

photoreceptors, such that the electrodes impinge into the INL and stimulate bipolar<br />

cells. In the epiretinal approach the electrode array is tacked onto the innermost layer<br />

of the retina and the electrodes stimulate the axons or soma of the ganglion cells; the<br />

stimulator circuitry sits atop the electrode array. The distinctions between these two

approaches were being formed in the 1990’s when several major projects in the<br />

USA, Germany and elsewhere were on-going.<br />

Early electrode array formulations [120, 206] were housed on thin flexible polyimide strips (e.g. 4 µm thick), with the stimulator circuitry housed at one end and, at the other, the electrodes to be held to the retina (≈250 µm thick). In 1999 Humayun et al [207], in their investigation of planar disc electrodes of > 125 µm, noted a dramatic increase in current requirement when the distance between a stimulating electrode and the retina was more than 0.5 mm (500 µm), thus highlighting the importance of this particular parameter.

5.8.1 Factors affecting current requirement for stimulation<br />

The current required for stimulation is affected by electrode positioning, in terms of the distance [52] from the electrode to the point of stimulation (e.g. 30 µm [49]); by electrode geometry, in terms of size and shape; and by electrode material, in terms of biocompatibility and charge delivery capability. Electrode array design will also take into consideration the number of electrodes, the electrode spacing and the number of electrodes actually to be, or being, used. In the case of a subretinal array [122, 174] the number of electrodes will typically match the number of photodiodes, as each photodiode will be connected to an electrode designed ostensibly to stimulate bipolar cells. In the case of an epiretinal array it was realised a decade ago (2002) [174] that to achieve a safe current density with smaller electrodes it would be necessary to use penetrating electrodes rather than planar electrodes, as this geometry allows a reduced distance to target cells that are embedded in ganglion cell layers of 20-40 µm within nerve fibre layers of 20-200 µm. Electrode materials used for stimulation are titanium nitride, platinum and iridium oxide [208], which have good charge carrying capacity.



5.8.2 Electrode size and positioning in current retinal approaches

The diameter of implanted electrodes in retinal approaches previous to 2007 [88, 103, 209, 210] has been of the order of > 50 µm. During 2007 it was determined that a change in electrode-retina distance of 100 µm can result in a ten times change in threshold stimulation, and early results in the Second Sight device (epiretinal) in humans showed an inverse square relationship between impedance and retina-electrode distance [210]. Clinical trials also began at this time on the Second Sight second-generation implant (www.2-sight.com), on an earlier (2006) subretinal device [115] by Retina Implant GmbH, and, in a second clinical trial, on an epiretinal device by Intelligent Medical Implants AG. In this reference it was pointed out that, as an individual with 20/20 vision can resolve differences of 1/60 of one degree of visual angle, which translates to 5 µm on the retina, electrodes of 5 µm would be needed for vision at this level; an estimate was also given that a 100 µm electrode could provide vision equivalent to 20/400 acuity. In 2008 [176] the EPI-RET-3 prosthesis subsequently had 3D electrodes with a diameter of 100 µm and a height of 25 µm.

5.8.3 Recent developments<br />

The period from 2010 to now (2012) has seen a resurgence of work in this area. Subretinally [23], the use of a parylene-based 3-D microelectrode array (MEA) demonstrated in vitro and in vivo how the distance between the stimulating electrode (Ti/Au/Pt) and targeted cells could be decreased by the use of tip-shaped electrodes as opposed to nearly flat planar electrodes. Epiretinally [102], an in vivo study indicates smaller current thresholds (≈100 µA) than previous thresholds (500 µA); there is also [97] the possibility of a lower stimulus voltage for 3-D electrodes as opposed to flat planar electrodes. The latest penetrating array for an epiretinal prosthesis [29] makes possible high density arrays (≈100 electrodes/mm²) with electrode diameters of 2 µm, 5 µm, 10 µm, 20 µm and 30 µm, heights of 60-100 µm and pitches of 50-400 µm.



Chapter 6 Concluding chapter<br />

This chapter presents the conclusions and results of this research, together with recommendations for future work to be done both in this school and elsewhere.

6.1 Conclusions and results<br />

The paper by Professor M. Humayun et al [171] describes the great promise of a greyscale retinal implant of 16 pixels and 16 electrodes. The implementation described in this work is a structural colour version involving 16 pixels and 48 electrodes, wherein 16 electrodes are used to represent each of the three colour planes of red, green and blue. Behavioural programming has also been done for up to 1024 pixels, i.e. a 32 by 32 image, which would imply 3072 electrodes; for a 16 by 16 image, i.e. 256 pixels, implying 768 electrodes; and similarly for an 8 by 8 image, i.e. 64 pixels, implying 192 electrodes. As the present (2012) limit for the wiring to those electrodes is 256 wires through a 5 mm incision made in the eyeball, use of an 8 by 8 image would leave 256 - 192 = 64 wires `spare'.

The reasoning for the use of three electrodes for each pixel is down to retinotopic mapping [160], an established principle in neuroscience. In practice this implies that the topography of the mapping is directly related to geometric principles; in other words, there is typically a consistent grouping of ganglion nerve fibres such that each group of three correctly spaced axons can be expected to propagate red (R), green (G) and blue (B) colour signals. So in the case of a fully functioning retina there is a one-to-one correspondence between a red cone and a `red' ganglion cell, a green cone and a `green' ganglion cell, and a blue cone and a `blue' ganglion cell. This being the case, by correctly forming the retinal implant to adhere to the retinotopic mapping, the sequence of wiring from the receiver chip to the retinal implant should be red, green, blue and so on.

Prior to actual connection to an implant, all the behavioural programmes created can be run to produce files of written output to confirm that the signals are as expected. These written files can be read with MATLAB software to display the picture expected to be produced. The fully implemented 16 pixel image outputs its signals on the dedicated input/output pins of the Virtex 5 FPGA 64 pin header.

6.2 Implementation of design<br />

When the receiver discussed in the previous chapter has produced the pulse frequency modulated (PFM) digital signal, it goes to the digital-to-analogue converter (DAC) within the electrode driver (stimulator) circuitry atop the retinal array. In this proposed design the incoming signal feeds into a shift register composed of four flip-flops. This circuit (see Figure 50) allows data which appears on the data bus for several milliseconds to be held at the output indefinitely.

[Figure: a four-stage shift register built from D-type flip-flops 1 to 4. Serial data enters at the D input of flip-flop 1, is clocked through by the FPGA clock input, and the four Q outputs (outputs 1 to 4) provide the data in parallel.]

Figure 50 Shift (reference) register



In this application of the D-type flip-flop as a shift register, the register is used to store a number of bits which are entered into the register (or group of flip-flops) in a serial fashion. Data is presented a bit at a time to input D of flip-flop 1. After a clock pulse the data at D2 appears at output 2, whilst the new data which had been at D1 is now at Q1 (output 1). This process is repeated until the register is full, and the data is then transferred out in parallel from outputs 1 to 4 for DAC operation. The forming of the control signal, by masking, to enable operation of the DAC is shown in Figure 51 (Enable DAC) and the Boolean table in Table 15.

[Figure: a logic network with inputs b1, b2, b3 and b4 forming the DAC enable signal.]

Figure 51 Enable DAC

b1 b2 b3 b4 Output<br />

0 0 0 0 0<br />

1 0 0 1 1<br />

0 1 0 0 0<br />

0 0 1 0 0<br />

0 0 0 1 0<br />

Table 15 Control signals to enable DAC<br />
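Putting the shift register of Figure 50 together with the enable logic of Figure 51 and Table 15, a minimal sketch is shown below; the entity and signal names are hypothetical, and the enable is simply a detector for the biphasic codeword 1001:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Serial-in shift register with parallel outputs (Figure 50) plus an
-- enable signal asserted only for the codeword 1001 (Table 15).
entity shift_and_enable is
  port (
    clk        : in  std_logic;                     -- FPGA clock input
    data_in    : in  std_logic;                     -- serial data from one receiver output wire
    outputs    : out std_logic_vector(4 downto 1);  -- outputs 1 to 4 (parallel to the DAC)
    enable_dac : out std_logic
  );
end entity;

architecture rtl of shift_and_enable is
  signal reg : std_logic_vector(4 downto 1) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- the new bit enters at position 1 and shifts towards position 4
      reg <= reg(3 downto 1) & data_in;
    end if;
  end process;

  outputs    <= reg;
  -- Table 15: the DAC is enabled only when the register holds 1001
  enable_dac <= '1' when reg = "1001" else '0';
end architecture;
```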

6.2.1 DAC operation<br />

To operate a unipolar converter (figure 52) as a bipolar converter (figure 53) it is necessary to produce an offset voltage; the circuit used here does this by injecting an equivalent offset current into the summing junction of the op-amp, as shown in figure 53.

[Figure: an R-2R ladder network D/A converter. V_ref drives a ladder of series R and shunt 2R resistors, producing binary-weighted branch currents I/2, I/4, I/8 and I/16 that are switched by the 4-bit register (MSB to LSB, gated by ENABLE) into the feedback resistor R_F of an op-amp to give the output voltage V_O.]

Figure 52 An R-2R ladder network D/A converter

For an R-2R ladder network the input resistance of the network is R. As the maximum current (I) delivered from the receiver (FPGA) circuit is to be 100 µA and retinal tissue resistance is at least 10 kΩ, R is chosen to be 10 kΩ; V_ref can then be calculated as 1 V.

Typically the maximum output voltage is obtained for an input codeword consisting entirely of logic 1s, for which the current entering the feedback loop, and hence the output voltage, are as follows.
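For the all-ones codeword the binary-weighted branch currents of figure 52 sum in the standard R-2R fashion (the magnitudes below are a reconstruction of that standard result; the sign of V_O depends on the inverting op-amp configuration):

\[ I_{fb} = \frac{I}{2} + \frac{I}{4} + \frac{I}{8} + \frac{I}{16} = \frac{15I}{16} \]

\[ |V_O| = I_{fb}\,R_F = \frac{15I}{16}\,R_F \]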

[Figure: a bipolar D/A converter. The R-2R ladder and op-amp of figure 52 are augmented with an offset current I_OFF, derived from -V_ref, injected into the summing junction so that the output V_O can swing both positive and negative.]

Figure 53 A bipolar D/A converter

However, the input codeword for this design is 1001, so in this case only the I/2 and I/16 branches contribute to the total current entering the summing junction of the amplifier. By making R_F equal to 4R (in this case 40 kΩ), I_off (the offset current) will equal I/4 and therefore I_s (the summing junction current) will equal 5I/16, giving the output voltage below.
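Reconstructing the arithmetic (assuming the MSB and LSB branches are the active ones for the codeword 1001, and using I = 100 µA, R = 10 kΩ and R_F = 4R = 40 kΩ from above; the sign again depends on the op-amp configuration):

\[ I_{in} = \frac{I}{2} + \frac{I}{16} = \frac{9I}{16}, \qquad I_s = I_{in} - I_{off} = \frac{9I}{16} - \frac{I}{4} = \frac{5I}{16} \]

\[ |V_O| = I_s\,R_F = \frac{5I}{16} \times 4R = \frac{5}{4}\,I\,R = \frac{5}{4} \times 100\ \mu\text{A} \times 10\ \text{k}\Omega = 1.25\ \text{V} \]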



6.2.2 Envisaged retinal array<br />

This section considers the nature of the connection between the stimulator and the retinal array. The optic disc is an oval of 1.76 mm by 1.92 mm, giving an area of 2.66 mm² [211]. For a colour-orientated prosthesis there needs to be an assurance that only one axon will be targeted. Given that 50% of the ganglion cells within the optic disc are concerned with colour signals originating from the fovea [212], this implies that 1.33 mm² housing 600,000 fibres will have fibre areas of ≈2.2 µm² and diameters of 1.68 µm. The overlap between a 2 µm diameter electrode and a 1.68 µm fibre is 0.32 µm, i.e. the encroachment upon neighbouring fibres will be 0.16 µm; however, as supporting tissue (0.34 µm) is present before the axon, there is a 0.17 µm `safe' zone before the neighbouring axon is targeted (Figure 54).

Figure 54 Axon clearance<br />

The stimulation site of a 2µm electrode is therefore well able to stimulate the 1µm axon of the RGC. As each electrode has an associated pitch of 50µm (0.05mm), the obtainable electrode density can be estimated by producing a square covering the same area (1.33mm²), which has a side of ≈1.15mm. The number of pitches along each side would then be 1.15 ÷ 0.05 = 23, giving potentially 23 × 23 = 529 electrodes (in practice, due to pragmatic limitations, 170, i.e. 2 × 85). However, several factors need to be borne in mind. Firstly, 256 wires through a 5mm incision is the present (2012) limitation on electrode connections. Secondly, for structural implementation of a four by four picture, a third of this wiring capability (≈85) needs to be available outside of the foveal specific area of the optic disc for the `blue' signals to the small bistratified RGC's [213, 214]. Recall that the majority of cones fed from the fovea are `red' and `green', whereas the `blue' cone signals directed to the parafoveal retina can use 30µm diameter electrodes to generate larger stimulation sites which resonate [215] with the small bistratified RGC's responsible for handling blue cone signals. During configuration each plane of the test image can be configured separately to ensure correct placement of the phosphenes. Figure 55 shows the perimeter of 30µm electrodes associated with blue cone signals; the expanded view of the internal portion of the array, with 2µm electrodes focused over the foveal pit, targets the RGC's carrying red and green cone signals. Effectively there is a one-to-one correspondence between stimulation site and electrode.
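As a rough cross-check of the geometry and electrode-density figures quoted above, the following MATLAB fragment (illustrative only; every input value is taken directly from the text) reproduces the ≈1.68µm fibre diameter, the 0.32µm overlap and the 529-electrode upper bound.

discArea = 1.33;                       % mm^2 devoted to foveal colour fibres
nFibres = 600e3;                       % fibres within that area
fibreArea = discArea * 1e6 / nFibres;  % ~2.2 um^2 per fibre
fibreDia = 2 * sqrt (fibreArea / pi);  % ~1.68 um diameter
electrodeDia = 2;                      % um
overlap = electrodeDia - fibreDia;     % ~0.32 um in total
encroach = overlap / 2;                % ~0.16 um per side
pitch = 0.05;                          % electrode pitch, mm
side = sqrt (discArea);                % ~1.15 mm square of equal area
nSide = floor (side / pitch);          % 23 pitches per side
nMax = nSide^2;                        % 529 electrodes in principle
fprintf ('fibre %.2f um, overlap %.2f um, encroachment %.2f um\n', fibreDia, overlap, encroach);
fprintf ('square side %.2f mm giving %d electrodes\n', side, nMax);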

6.2.3 Connecting the detailed engineering to the overall<br />

concept<br />

Chapter four describes the design and implementation of the sender chip, which translates the camera image to an AER format for transmission to the receiver chip described in chapter five. Chapter five describes the routing of the timed signals to the driver circuitry housed atop the retinal implant. Future work could avoid telemetry once a colour camera becomes available that can be housed in the eyeball (a monochrome camera in this space is already possible); this would resolve the issues of variable inductive coupling which presently exist with current retinal implants.

6.2.4 Differences between proposed technique and current<br />

retinal implants<br />

Current retinal prostheses are asynchronous and achromatic, requiring an arbiter in the first case and amplitude variation to attain brightness variation of the greyscale image in the second case. Any amplitude variation in the approach proposed here would be purely for reasons of electrode positioning and patient sensitivity. Current retinal prostheses also use comparatively large electrodes, relying on pulse duration to resonate correctly with RGC's. The present approach is synchronous for the inner portion of the retinal implant whilst maintaining a frame rate for possible hybrid prostheses, although, being in principle pixel based rather than frame based, this is not strictly necessary.

Figure 55 Retinal implant (major axis 1920µm, minor axis 1760µm; marked internal dimensions 500µm and 880µm; pitch of the 85 outer 30µm diameter electrodes: 68µm)

6.3 Lower power FPGA<br />

A low power FPGA with a smaller footprint, such as Actel's “IGLOO PLUS”, could be utilised to lower the power requirement of this initial prototype (circa 632mW) and thereby extend battery life considerably. Once this is done, any future work is likely to necessitate a post processing interface from this FPGA to connect to the electrodes of the retinal implant, together with surgical expertise. The power requirement for the Virtex 5 used to implement the 4 by 4 retinal prosthesis described in this document is summarised below.

Power Supply Summary

                        Total      Dynamic    Quiescent
Supply Power (mW)       631.29     36.59      594.70

Power Supply Currents

Supply Source   Supply Voltage   Total Current (mA)   Dynamic Current (mA)   Quiescent Current (mA)
Vccint          1.00             417.80               16.59                  401.20
Vccaux          2.50             83.40                8.00                   75.40
Vcco25          2.50             2.00                 0.00                   2.00
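The report above can be cross-checked by multiplying each supply voltage by its current. The following MATLAB fragment (illustrative only; the figures are copied straight from the report) recovers the 631.29mW total and 36.59mW dynamic power.

V = [1.00 2.50 2.50];                  % Vccint, Vccaux, Vcco25 supply voltages (V)
Itot = [417.80 83.40 2.00];            % total currents (mA)
Idyn = [16.59 8.00 0.00];              % dynamic currents (mA)
totalPower = sum (V .* Itot);          % ~631.3 mW
dynamicPower = sum (V .* Idyn);        % ~36.6 mW
quiescentPower = totalPower - dynamicPower;   % ~594.7 mW
fprintf ('total %.2f mW, dynamic %.2f mW, quiescent %.2f mW\n', totalPower, dynamicPower, quiescentPower);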

For up to 193 I/Os the AGL600 meets the replacement requirements; the product table is shown in Table 16.



Table 16 IGLOO product table<br />

6.4 AER between sender and receiver chip<br />

Recall the relationship between image size and bit length of the AER transmission<br />

packet as shown in the following table.<br />

side of square image   pixel count   address bits required   total bit length
2                      4             2                       20
4                      16            4                       22
8                      64            6                       24
16                     256           8                       26
32                     1024          10                      28
1024                   1048576       20                      38

Table 17 Image size versus bit length
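The pattern in Table 17 follows from using equal-width x and y addresses plus a fixed stimulus field. The following MATLAB fragment is a sketch which assumes the eighteen stimulus bits of the 32 by 32 structural case apply to every image size, and simply regenerates the table.

sides = [2 4 8 16 32 1024];            % side of the square image
pixels = sides .^ 2;                   % pixel count
addressBits = 2 * log2 (sides);        % x and y addresses of equal width
stimulusBits = 18;                     % stimulus field, as in the 32 by 32 case
totalBits = addressBits + stimulusBits;
disp ([sides' pixels' addressBits' totalBits']);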



For the implemented/structural case of the 32 by 32 image, the bit length of the transmitted AER packet is composed of ten address bits and eighteen stimulus bits. This corresponds to a bit rate determined by the AER packet frequency, which in turn is derived from the partial_spike clock frequency. In the case of the test setup described earlier, using an FPGA clock of 25MHz as the partial_spike frequency, the spike frequency would be a quarter of this, i.e. 6.25MHz. As an outerpulse (at maximum intensity) has a periodic time of five times the spike time, the outerpulse frequency is 1.25MHz, and as there are fifty outerpulses in a pixel, the pixel (AER) frequency is 25kHz.
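These divider ratios can be written out compactly as follows (a sketch only; the resulting link bit rate is an inference which assumes one 28-bit packet per pixel period, it is not a figure stated in the text).

fClk = 25e6;                           % FPGA / partial_spike clock, Hz
fSpike = fClk / 4;                     % 6.25 MHz
fOuterpulse = fSpike / 5;              % 1.25 MHz at maximum intensity
fPixel = fOuterpulse / 50;             % 25 kHz AER packet (pixel) rate
bitsPerPacket = 10 + 18;               % address bits plus stimulus bits
bitRate = fPixel * bitsPerPacket;      % ~700 kbit/s if one packet per pixel period
fprintf ('pixel (AER) rate %.0f kHz, link bit rate %.0f kbit/s\n', fPixel/1e3, bitRate/1e3);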

6.5 Post Processing Interface<br />

In medical experiments the equivalent impedance of retinal tissue has been estimated at ≈10kΩ [53]. The post processing required to convert the signals from the FPGA into those suitable for driving electrodes of 10kΩ impedance [54] must deliver sufficient charge to stimulate the biological action potential (AP) within the RGC of the optic nerve fibre. As there is only one interconnect lead per stimulation site (to reduce the wiring through the surgical incision), the interface will require two supply voltages to produce the biphasic current pulse. For a minimum supply voltage of 5v [51], i.e. +5v and -5v, the maximum current per electrode would be 500µA. The instantaneous power would therefore be 2.5mW, resulting in 2.5W if all electrodes were continuously active. However, for 50 biphasic pulses within a second the electrode is only active for 2ms of each pulse (1ms for each phase, a 10% duty cycle), and that only at maximum intensity, so the stimulation power requirement reduces to 0.25W. Presuming an activity factor of about one third (a scene is rarely fully white, so fewer than 50 pulses per second are typical), the power requirement falls to roughly 250/3, i.e. circa 83mW.
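This power budget can be summarised as follows (a sketch; the 1000-electrode count is not stated explicitly in the text but is implied by the 2.5W all-electrodes figure above).

Vsupply = 5;                           % volts per rail (+5v / -5v)
Imax = 500e-6;                         % maximum current per electrode, A
nElectrodes = 1000;                    % electrode count implied by the 2.5W figure
pInstant = Vsupply * Imax;             % 2.5 mW per active electrode
pAllOn = pInstant * nElectrodes;       % 2.5 W if everything were continuously on
duty = 2e-3 * 50;                      % 2 ms active in each of 50 pulses per second (10%)
pMax = pAllOn * duty;                  % 0.25 W at maximum intensity
activity = 1/3;                        % scenes are rarely fully white
pTypical = pMax * activity;            % ~83 mW typical stimulation power
fprintf ('maximum %.2f W, typical %.0f mW\n', pMax, pTypical*1e3);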

Figure 56 AP propagating through the optic nerve (membrane potential in mV against time in ms: resting at -70mV and peaking at +30mV within the 0-5ms window; peak to peak 100mV × 50 pulses equals 5v)

6.6 Pre processing<br />

Presently the incoming image for this prototype system is held in ROM; prior to full implementation a camera would be interfaced to the FPGA, either an RGB camera or a YUV camera using an intellectual property (IP) core to convert to RGB signal format. Future work could encompass the possibility of converting a camera with a universal serial bus (USB) protocol to RGB format. When using an RGB or YUV camera, the frame rate to be set will be determined by the partial_spike_clock frequency derived from the FPGA clock frequency. The ratio is largely determined by the image size to be transmitted, e.g. for a circa 1000 pixel image a partial_spike_clock frequency of 25MHz would give a frame rate of 25fps, and similarly a partial_spike_clock frequency of 50MHz would give a frame rate of 50fps for the same size of image.
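The frame rate relationship can be expressed as follows (a sketch; the divide-by-1000 from clock rate to AER pixel rate follows from the ratios given in section 6.4, and the circa 1000 pixel image corresponds to the 32 by 32 case).

nPixels = 1000;                        % circa 1000 pixel (32 by 32) image
fClk = [25e6 50e6];                    % candidate partial_spike_clock frequencies, Hz
fPixel = fClk / (4 * 5 * 50);          % /4 spike, /5 outerpulse, /50 pixel
frameRate = fPixel / nPixels;          % 25 fps and 50 fps respectively
fprintf ('frame rate %.0f fps\n', frameRate);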

6.7 Future work<br />



The sender and receiver circuitry presently implemented in FPGA fabric needs to be produced in the form of two separate application specific integrated circuits (ASIC's) to reduce the physical footprint. The power pack supplying power to the system is worn at the waist of the implant patient, while the camera and receiver chip are housed on a pair of spectacles worn by the patient. Figure 57 illustrates this part of the setup.

Figure 57 Practical setup (camera and receiver chip mounted on the spectacles, with an interconnection from the receiver chip to the retinal implant; sender chip and battery pack worn on a belt, linked to the spectacles by power and data feeds)

6.7.1 Surgical implantation<br />

The microstimulator or driver circuitry (post processing interface) is implanted alongside the epiretinal electrode array. A 5mm surgical incision allows up to 256 wires to pass through at the time of writing (February 2012). So for the implementation of a 4 by 4 image, as has been implemented in this prototype, 48 wires plus the return wire would pass through this incision when placing the retinal implant.
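The wiring requirement can be checked against the incision limit as follows (a sketch; the three colour planes and 4 by 4 image size are those of the prototype described in this document).

imageSide = 4;                         % 4 by 4 prototype image
colourPlanes = 3;                      % red, green and blue electrode sets
signalWires = colourPlanes * imageSide^2;   % 48 stimulation wires
totalWires = signalWires + 1;               % plus the common return
incisionLimit = 256;                        % wires through a 5mm incision (2012)
fprintf ('%d wires used of the %d available through the incision\n', totalWires, incisionLimit);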

Can the commercially available Argus II retinal implant [110, 216] be utilized for this colour implementation? The Argus II retinal implant is of an achromatic design; therefore it will not be directly suitable for use in its present configuration, although the driver circuitry may be applicable. As the cost of the Argus II is presently $100,000, a bespoke colour vision design would be preferable if colour retinal implantation is ever to be commercially viable. The issue to be resolved is an alternative, cheaper coating to the diamond coating presently used to prevent biodegradation.

6.7.2 Inductive linking considerations<br />

The interconnection to the retinal implant of this prototype system could be refined from a physical connection through the skull wall, some 7mm, to a minimally invasive inductive link [217-225]. Issues which would need to be addressed, among others, are the extra power requirements, due to the inefficiency of power transfer through an inductive link, and the potential for additional electromagnetic heating effects [226-228].

6.7.3 Initial configuration<br />

Within the FPGA programming, the control outputs feeding the post processing are rerouted to differing electrodes of the implanted array as necessary. This enables each plane of colour to be set up separately, utilising configuration test images to allow for anomalies in electrode placement in terms of retinotopic diversity.



Bibliography<br />

1. Woodburn R. Murray A.F., Pulse-stream techniques and circuits. Circuits<br />

and Devices Magazine, IEEE, 1996. 12(4): p. 43-47.<br />

2. Culurciello, E., R. Etienne-Cummings, and K. Boahen, Arbitrated address<br />

event representation digital image sensor. Digest of Technical Papers.<br />

ISSCC. 2001 IEEE International Solid-State Circuits Conference, 2001.,<br />

2001: p. 92-93.<br />

3. Teixeira, T., et al., Address-event imagers for sensor networks: evaluation<br />

and modeling. IPSN 2006. The Fifth International Conference on Information<br />

Processing in Sensor Networks, 2006., 2006: p. 458-466.<br />

4. Serrano-Gotarredona, R., et al., A Neuromorphic Cortical-Layer Microchip<br />

for Spike-Based Event Processing Vision Systems. IEEE Transactions on<br />

Circuits and Systems I: Regular Papers,, 2006. 53(12): p. 2548-2566.<br />

5. Saude T., Ocular Anatomy and Physiology. 1993.<br />

6. Berne R. M. Levy M. N., Physiology (Fourth Edition). 1989.<br />

7. Malvin G. M. Malvin R. L. Johnson M. D., Concepts of Human Physiology.<br />

1997.<br />

8. Snell R. S. Lemp M. A., Clinical Anatomy of the Eye. 1998.<br />

9. Schmidt R. F. Thew G., Human Physiology. 2nd ed. 1983/1989.<br />

10. DeWitt W., Human Biology Form, Function and Adaptation. 1989.<br />

11. Widmair E. P. Raff H. Strang K. T., Vanders Human Physiology. 10th ed.<br />

2006.<br />

12. Tovec M. J., An Introduction to the Visual System. 1996.<br />

13. Davison H., Physiology of the Eye. Fifth ed. 1990.<br />

14. Forrester J. V. Dick A. B. McManamin P. G. Lee W. R., The Eye (basic<br />

sciences in practice). 2nd ed. 2002.<br />

15. Zeki S., A Vision of the Brain. 1993.<br />

16. Helgemo D. R., Digital Signal Processing at 1GHz in a Field-Programmable<br />

Object Array. MAPLD 2003 Conference, 2003.<br />

17. Hu, N., et al. Displacement simulation analysis of Microelectrode Array in Artificial Retina. in Complex Medical Engineering, 2007. CME 2007. IEEE/ICME International Conference on. 2007.

18. Weiland J.D. Anderson D.J. Humayun M.S., In vitro electrical properties for<br />

iridium oxide versus titanium nitride stimulating electrodes. IEEE<br />

Transactions on Biomedical Engineering, 2002. 49(12): p. 1574-1579.<br />

19. Chu, A.P., et al., Stimulus induced pH changes in retinal implants.<br />

Engineering in Medicine and Biology Society, 2004. IEMBS '04. 26th<br />

Annual International Conference of the IEEE, 2004. 2: p. 4160-4162.<br />

20. Gerhardt, M., J. Alderman, and A. Stett, Electric Field Stimulation of Bipolar Cells in a Degenerated Retina - A Theoretical Study. Neural Systems and Rehabilitation Engineering, IEEE Transactions on. 18(1): p. 1-10.
21. Nadeau, P. and M. Sawan. A flexible high voltage biphasic current-controlled stimulator. in Biomedical Circuits and Systems Conference, 2006. BioCAS 2006. IEEE. 2006.



22. Vidal, J. and M. Ghovanloo. Towards a Switched-Capacitor based<br />

Stimulator for efficient deep-brain stimulation. in Engineering in Medicine<br />

and Biology Society (EMBC), 2010 Annual International Conference of the<br />

IEEE.<br />

23. Wang, R., et al., Fabrication and Characterization of a Parylene-Based<br />

Three-Dimensional Microelectrode Array for Use in <strong>Retinal</strong> <strong>Prosthesis</strong>.<br />

Microelectromechanical Systems, Journal of. 19(2): p. 367-374.<br />

24. Lovell, N.H., et al., Biological-Machine Systems Integration: Engineering the Neural Interface. Proceedings of the IEEE. 98(3): p. 418-431.

25. Xiaohong, S., et al. Encapsulation and Evaluation of a MEMS-Based<br />

Flexible Microelectrode Array for Acute In-Vivo Experiment. in Biomedical<br />

Engineering and Informatics, 2009. BMEI '09. 2nd International Conference<br />

on. 2009.<br />

26. Rodger, D.C. and T. Yu-Chong, Microelectronic packaging for retinal<br />

prostheses. Engineering in Medicine and Biology Magazine, IEEE, 2005.<br />

24(5): p. 52-57.<br />

27. Sarje, A. and N. Thakor. Neural interfacing. in IEMBS '04. 26th Annual<br />

International Conference of the IEEE Engineering in Medicine and Biology<br />

Society, 2004. 2004.<br />

28. Rodger, D.C., et al. Flexible Parylene-based Microelectrode Technology for<br />

Intraocular <strong>Retinal</strong> Prostheses. in Nano/Micro Engineered and Molecular<br />

Systems, 2006. NEMS '06. 1st IEEE International Conference on. 2006.<br />

29. Ganesan, K., et al. Diamond penetrating electrode array for Epi-<strong>Retinal</strong><br />

<strong>Prosthesis</strong>. in Engineering in Medicine and Biology Society (EMBC), 2010<br />

Annual International Conference of the IEEE.<br />

30. Schröder, J.M., J. Bohl, and U. Bardeleben, Changes of the ratio between<br />

myelin thickness and axon diameter in human developing sural, femoral,<br />

ulnar, facial, and trochlear nerves. Acta Neuropathologica, 1988. 76(5): p.<br />

471-483.<br />

31. Asher, A., et al., Image Processing for a High-Resolution Optoelectronic<br />

<strong>Retinal</strong> <strong>Prosthesis</strong>. Biomedical Engineering, IEEE Transactions on, 2007.<br />

54(6): p. 993-1004.<br />

32. Carlidge E., Vision on a chip. Physics World, 2007.<br />

33. Degenaar P. LePioufle B. Griscom L. Tixier A. Akagi Y. Morita Y.<br />

Murakami Y. Yokoyama K. Fujita H. Tamiya E., A Method for Micrometer<br />

Resolution Patterning of Primary Culture Neurons for SPM Analysis. J<br />

Biochem, 2001. 130: p. 367-376.<br />

34. Banks, D., P. Degenaar, and C. Toumazou. A Bio-Inspired Adaptive <strong>Retinal</strong><br />

Processing Neuron with Multiplexed Spiking Outputs. in Circuits and<br />

Systems, 2007. ISCAS 2007. IEEE International Symposium on. 2007.<br />

35. Nikolic, K., et al. A Non-Invasive <strong>Retinal</strong> <strong>Prosthesis</strong> - Testing the Concept. in<br />

Engineering in Medicine and Biology Society, 2007. EMBS 2007. 29th<br />

Annual International Conference of the IEEE. 2007.<br />

36. Banks, D.J., P. Degenaar, and C. Toumazou, Low-power pulse-width-modulated neuromorphic spiking circuit allowing signed double byte data transfer along a single channel. Electronics Letters, 2007. 43(13): p. 704-706.

37. Degenaar, P., T.G. Constandinou, and C. Toumazou, Adaptive ON-OFF<br />

spiking photoreceptor. Electronics Letters, 2006. 42(4): p. 196-198.<br />



38. Banks, D.J., P. Degenaar, and C. Toumazou, Distributed current-mode image<br />

processing filters. Electronics Letters, 2005. 41(22): p. 1201-1202.<br />

39. Constandinou, T.G., et al. An on/off spiking photoreceptor for adaptive<br />

ultrafast/ultrawide dynamic range vision chips. in Biomedical Circuits and<br />

Systems, 2004 IEEE International Workshop on. 2004.<br />

40. Al-Atabany, W., et al., Designing and testing scene enhancement algorithms<br />

for patients with retina degenerative disorders. BioMedical Engineering<br />

OnLine. 9(1): p. 27.<br />

41. Abu-Faraj, Z.O., et al. A Prototype <strong>Retinal</strong> <strong>Prosthesis</strong> for Visual Stimulation.<br />

in Engineering in Medicine and Biology Society, 2007. EMBS 2007. 29th<br />

Annual International Conference of the IEEE. 2007.<br />

42. Hering, XI Hering's four-color theory Zone theories. Documenta<br />

Ophthalmologica, 1999. 96(1): p. 165-174.<br />

43. Rolando, C.A. and et al., Neuromorphic model of magnocellular and<br />

parvocellular visual paths: spatial resolution. Journal of Physics: Conference<br />

Series, 2007. 90(1): p. 012099.<br />

44. Avraham, T. and Y. Schechner, Ultrawide Foveated Video Extrapolation.<br />

Selected Topics in Signal Processing, IEEE Journal of. PP(99): p. 1-1.<br />

45. Kandel E. R. Schwartz J. H. Jessel M., Principles of Neural Science. 2000:<br />

McGraw Hill, ISBN 0-07-112000-9.<br />

46. Patton H. D. Fuchs A. F. Hille B. Scher A. M. Steiner R., Textbook of<br />

Physiology Excitable Cells and Neurophysiology. 21st ed. Vol. 1. 1989:<br />

Harcourt.<br />

47. Rolls E. T. Deco G., Computational Neuroscience of Vision. 2002: Oxford<br />

University Press.<br />

48. Sukkar M. Y. El-Munshid H. A. Ardaw M. S. M., Concise Human<br />

Physiology. 2nd ed. 1993, 2000.<br />

49. Greenberg, R.J., et al., A computational model of electrical stimulation of the<br />

retinal ganglion cell. Biomedical Engineering, IEEE Transactions on, 1999.<br />

46(5): p. 505-514.<br />

50. Shapley R., Specificity of Cone Connections in the Retina and Colour Vision.<br />

Focus on "Specificity of Cone Inputs to Macaque <strong>Retinal</strong> Ganglion Cells".<br />

2005.<br />

51. Theogarajan, L., et al. Visual prostheses: Current progress and challenges. in<br />

VLSI Design, Automation and Test, 2009. VLSI-DAT '09. International<br />

Symposium on. 2009.<br />

52. Weiland, J.D. and M.S. Humayun, Intraocular retinal prosthesis.<br />

Engineering in Medicine and Biology Magazine, IEEE, 2006. 25(5): p. 60-<br />

66.<br />

53. Liu W. Vichienchom K. Clements M. DeMarco S.C. Hughes C. McGucken<br />

E. Humayun M.S. De Juan E. Weiland J.D. Greenberg R., A neuro-stimulus<br />

chip with telemetry unit for retinal prosthetic device. IEEE Journal of Solid-<br />

State Circuits, 2000. 35(10): p. 1487-1497.<br />

54. Sivaprakasam, M., et al., Architecture tradeoffs in high-density<br />

microstimulators for retinal prosthesis. Circuits and Systems I: Regular<br />

Papers, IEEE Transactions on, 2005. 52(12): p. 2629-2641.<br />

55. Jeng-Shyong S. Maia M. Weiland J.D. O'Hearn T. Shih-Jen C. Margalit E.<br />

Suzuki S. Humayun, M.S., Electrical Stimulation in Isolated Rabbit Retina.<br />

IEEE Transactions on Neural Systems and Rehabilitation Engineering,, 2006.<br />

14(3): p. 290-298.<br />



56. Sivaprakasam, M., et al., A variable range bi-phasic current stimulus driver<br />

circuitry for an implantable retinal prosthetic device. Solid-State Circuits,<br />

IEEE Journal of, 2005. 40(3): p. 763-771.<br />

57. Sivaprakasam, M., et al. A Programmable Discharge Circuitry With Current<br />

Limiting Capability for a <strong>Retinal</strong> <strong>Prosthesis</strong>. in 27th Annual International<br />

Conference of the Engineering in Medicine and Biology Society. 2005.<br />

58. Mahadevappa, M., et al., Perceptual thresholds and electrode impedance in<br />

three retinal prosthesis subjects. Neural Systems and Rehabilitation<br />

Engineering, IEEE Transactions on, 2005. 13(2): p. 201-206.<br />

59. Linares-Barranco A. Jimenez-Moreno G. Linares-Barranco B. Civit-Balcells<br />

A., On algorithmic rate-coded AER generation. IEEE Transactions on Neural<br />

Networks, 2006. 17(3): p. 771-788.<br />

60. Zaghloul K.A. Boahen K., Optic nerve signals in a neuromorphic chip II:<br />

testing and results. IEEE Transactions on Biomedical Engineering, 2004.<br />

51(4): p. 667-675.<br />

61. Azadmehr M. Abrahamsen J. P. Hafliger P., A Foveated AER Imager Chip.<br />

IEEE International Symposium on Circuits and Systems, 2005. 3: p. 2751 -<br />

2754.<br />

62. Linares-Barranco A. Jimenez-Moreno G. Civit-Ballcels A. Linares-Barranco<br />

B., On synthetic AER Generation. Proceedings of the 2004 International<br />

Symposium on Circuits and Systems, 2004. 5.<br />

63. Abrahamsen J.P. Hafliger P. Lande T.S. A time domain winner-take-all<br />

network of integrate-and-fire neurons. in ISCAS '04. Proceedings of the 2004<br />

International Symposium on Circuits and Systems. 2004.<br />

64. Vogelstein R.J. Mallik U. Vogelstein J.T. Cauwenberghs G., Dynamically<br />

Reconfigurable Silicon Array of Spiking Neurons With Conductance-Based<br />

Synapses. Neural Networks, IEEE Transactions on, 2007. 18(1): p. 253-265.<br />

65. Matolin D. Schreiter J. Schuffny R. Heittmann A. Ramacher U., Simulation<br />

and implementation of an analog VLSI pulse-coupled neural network for<br />

image segmentation. MWSCAS '04. The 2004 47th Midwest Symposium on<br />

Circuits and Systems, 2004. 2: p. II-397-II-400 vol.2.<br />

66. Chicca E. Indiveri G. Douglas R.J., An event-based VLSI network of<br />

integrate-and-fire neurons. ISCAS '04. Proceedings of the 2004 International<br />

Symposium on Circuits and Systems, 2004. 5: p. V-357-60 Vol.5.<br />

67. Murray A.F. Del Corso D. Tarassenko L., Pulse-stream VLSI neural<br />

networks mixing analog and digital techniques. IEEE Transactions on Neural<br />

Networks, 1991. 2(2): p. 193-204.<br />

68. Rowcliffe, P., J. Feng, and H. Buxton, Spiking perceptrons. Neural<br />

Networks, IEEE Transactions on, 2006. 17(3): p. 803-807.<br />

69. John L. Johnson and Mary Lou Padgett, M., IEEE, PCNN Models and<br />

Applications. IEEE Transactions on Neural Networks, 1999. 10(3): p. 480-<br />

498.<br />

70. Morris, L.P., Dlay, S. S., DSFPN, a new neural network for optical character<br />

recognition. IEEE Transactions on Neural Networks, 1999. 10(6): p. 1465-<br />

1473.<br />

71. M., C., Neural Networks - A Tutorial. 1993.<br />

72. Haykin, S., Neural Networks - A Comprehensive Foundation. Second ed.<br />

1999: Prentice Hall.<br />

73. Duch, W., Uncertainty of data, fuzzy membership functions, and multilayer<br />

perceptrons. Neural Networks, IEEE Transactions on, 2005. 16(1): p. 10-23.<br />



74. Freeman, J.A., Skapura, D. M., , Neural Networks Algorithms, Applications<br />

and Programming Techniques. 1991.1992: Addison-Wesley<br />

75. Zamani, M., A. Sadeghian, and S. Chartier. A bidirectional associative<br />

memory based on cortical spiking neurons using temporal coding. in Neural<br />

Networks (IJCNN), The 2010 International Joint Conference on.<br />

76. Zometzer S. F., D.J.L., Lau C., McKenna T., ed. An Introduction to Neural<br />

and Electronic Networks (2nd Edition). Second ed. 1995, 1990, Academic<br />

Press.<br />

77. Jain L. C., V.V.R., Industrial Applications of Neural Networks. 1999: The<br />

CRC Press.<br />

78. Guoyin, W. and S. Hongbao, TMLNN: triple-valued or multiple-valued logic<br />

neural network. Neural Networks, IEEE Transactions on, 1998. 9(6): p.<br />

1099-1117.<br />

79. Cotofana, S. and S. Vassiliadis, Periodic symmetric functions, serial<br />

addition, and multiplication with neural networks. Neural Networks, IEEE<br />

Transactions on, 1998. 9(6): p. 1118-1128.<br />

80. Reinhard Eckhorn, A.M.G., Anreas Bruns, Andreas Gabriel, Basim Al-<br />

Shaikhli and Mirko Saam, Different types of Signal Coupling in the Visual<br />

Cortex Related to Neural Mechanisms of Associative Processing and<br />

Perception. IEEE Transactions on Neural Networks, 2004. 15(5): p. 1039-<br />

1052.<br />

81. Garrett T. Kenyon, B.J.T., James Theiler, John S. George, Gregory J.<br />

Stephens and David W. Marshak, Stimulus-Specific Oscillations in a <strong>Retinal</strong><br />

Model. IEEE Transactions on Neural Networks, 2004. 15(5): p. 1083-1091.<br />

82. Eckhorn, R., Neural Mechanisms of Scene Segmentation: Recordings from<br />

the Visual Cortex Suggest Basic Circuits for Linking Field Models. IEEE<br />

Transactions on Neural Networks, 1999. 10(3): p. 464-478.<br />

83. Kolman, E. and M. Margaliot, Are Artificial Neural Networks White Boxes? IEEE Transactions on Neural Networks, 2005. 16(4): p. 844-852.

84. Caudill M, B., C, Understanding neural networks. Vol. 1. 1992.<br />

85. Galan, R.C., et al., A bio-inspired two-layer mixed-signal flexible<br />

programmable chip for early vision. Neural Networks, IEEE Transactions<br />

on, 2003. 14(5): p. 1313-1336.<br />

86. Wohrer, A., P. Kornprobst, and T. Vieville. From Light to Spikes: a Large-Scale Retina Simulator. in Neural Networks, 2006. IJCNN '06. International Joint Conference on. 2006.

87. Ohta, J., et al. Si-LSI Based Stimulators for <strong>Retinal</strong> <strong>Prosthesis</strong>. in Neural<br />

Networks, 2007. IJCNN 2007. International Joint Conference on. 2007.<br />

88. Ryu, S.B., et al. Optimal linear filter based light intensity decoding from rabbit retinal ganglion cell spike trains. in Neural Engineering, 2007. CNE '07. 3rd International IEEE/EMBS Conference on. 2007.

89. Johnson J., P.P., Designing Intelligent Machines. Vol. Two. 1995:<br />

Butterworth Heinmann.<br />

90. Culurciello E. Andreou A.G., A comparative study of access topologies for<br />

chip-level address-event communication channels. IEEE Transactions on<br />

Neural Networks, 2003. 14(5): p. 1266-1277.<br />



91. Mark, F.B., Barry, W. Connors, Michael, A. Paradiso Neuroscience<br />

Exploring the Brain Third ed. 2006: Lippincott Williams and Wilkins<br />

92. Kandel E. R. Schwartz J. H. Jessel T. M., Essentials of Neural Science and<br />

Behaviour. 1995: Prentice Hall International.<br />

93. Buchsbaum, G., The retina as a two-dimensional detector array in the<br />

context of color vision theories and signal detection theory. Proceedings of<br />

the IEEE, 1981. 69(7): p. 772-786.<br />

94. Liu, W. Intraocular retinal prosthesis: microelectronics meets medicine. in<br />

Microprocesses and Nanotechnology Conference, 2001 International. 2001.<br />

95. Talukder, M.I., P. Siy, and G.W. Auner. Parallel Multiplexing - a Solution of<br />

Large Scale Stimulation Needed by the <strong>Retinal</strong> Prostheses to Maintain the<br />

Persistence of Vision. in Engineering in Medicine and Biology Society, 2006.<br />

EMBS '06. 28th Annual International Conference of the IEEE. 2006.<br />

96. Wentai, L. <strong>Retinal</strong> implant: bridging engineering and medicine. in Electron<br />

Devices Meeting, 2002. IEDM '02. Digest. International. 2002.<br />

97. Tran, N., et al. A Flexible Electrode Driver Using 65 nm CMOS Process for<br />

1024-Electrode Epi-<strong>Retinal</strong> <strong>Prosthesis</strong>. in Future Information Technology<br />

(FutureTech), 2010 5th International Conference on. 2010.<br />

98. Benav, H., et al. Restoration of useful vision up to letter recognition<br />

capabilities using subretinal microphotodiodes. in Engineering in Medicine<br />

and Biology Society (EMBC), 2010 Annual International Conference of the<br />

IEEE.<br />

99. Wyatt, J. Steps toward the development of a chronic retinal implant. in<br />

Wearable and Implantable Body Sensor Networks, 2006. BSN 2006.<br />

International Workshop on. 2006.<br />

100. Xu, Z., et al. Neuro-stimulus chip with photodiodes array for sub-retinal<br />

implants. in Solid-State and Integrated-Circuit Technology, 2008. ICSICT<br />

2008. 9th International Conference on. 2008.<br />

101. Fan-Gang, Z., et al., Cochlear Implants: System Design, Integration, and<br />

Evaluation. Biomedical Engineering, IEEE Reviews in, 2008. 1: p. 115-142.<br />

102. Kuanfu, C., et al., An Integrated 256-Channel Epiretinal <strong>Prosthesis</strong>. Solid-<br />

State Circuits, IEEE Journal of, 2010. 45(9): p. 1946-1956.<br />

103. Weiland, J.D. and M.S. Humayun, A biomimetic retinal stimulating array.<br />

Engineering in Medicine and Biology Magazine, IEEE, 2005. 24(5): p. 14-<br />

21.<br />

104. Yang, W.-C., et al. A low DC-level variation retinal chip based on the<br />

neuromorphic model of on sluggish sustained ganglion cell set of rabbits. in<br />

Cellular Nanoscale Networks and Their Applications (CNNA), 2010 12th<br />

International Workshop on. 2010.<br />

105. Greenberg, R.J., et al. Preliminary results from the argus II study: A 60<br />

electrode epiretinal prosthesis. in Bionic Health: Next Generation Implants,<br />

Prosthetics and Devices, 2009 IET. 2009.<br />

106. Schwiebert, L., et al. A biomedical smart sensor for the visually impaired. in<br />

Sensors, 2002. Proceedings of IEEE. 2002.<br />

107. Watanabe, T., et al. Novel <strong>Retinal</strong> <strong>Prosthesis</strong> System with Three<br />

Dimensionally Stacked LSI Chip. in Solid-State Device Research Conference,<br />

2006. ESSDERC 2006. Proceeding of the 36th European. 2006.<br />

108. Prives, L., An eye for detail. Women in Engineering Magazine, IEEE, 2009.<br />

3(2): p. 19-20.<br />



109. Weiland, J.D., et al. Progress Towards A High-Resolution <strong>Retinal</strong> <strong>Prosthesis</strong>.<br />

in 27th Annual International Conference of the Engineering in Medicine and<br />

Biology Society. 2005.<br />

110. Humayun, M.S., et al. Preliminary 6 month results from the Argus II epiretinal prosthesis feasibility study. in Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE. 2009.

111. Vurro, M., et al. Simulation and Assessment of Bioinspired Visual Processing<br />

System for Epi-retinal Prostheses. in Engineering in Medicine and Biology<br />

Society, 2006. EMBS '06. 28th Annual International Conference of the IEEE.<br />

2006.<br />

112. Weiland, J.D., et al. Systems design of a high resolution retinal prosthesis. in<br />

Electron Devices Meeting, 2008. IEDM 2008. IEEE International. 2008.<br />

113. Chader, G.J., et al., Artificial vision: needs, functioning, and testing of a<br />

retinal electronic prosthesis, in Progress in Brain Research. 2009, Elsevier.<br />

p. 317-332.<br />

114. Veraart C. Duret F. Brelen M. Delbeke J., Vision rehabilitation with the optic<br />

nerve visual prosthesis. IEMBS '04. 26th Annual International Conference of<br />

the IEEE Engineering in Medicine and Biology Society, 2004., 2004. 2: p.<br />

4163-4164.<br />

115. Weiland, J.D. and M.S. Humayun, Visual <strong>Prosthesis</strong>. Proceedings of the<br />

IEEE, 2008. 96(7): p. 1076-1084.<br />

116. Ahuja, A.K., et al., Blind subjects implanted with the Argus II retinal<br />

prosthesis are able to improve performance in a spatial-motor task. British<br />

Journal of Ophthalmology.<br />

117. Liu, W., M.S. Humayun, and M.A. Liker, Implantable Biomimetic<br />

Microelectronics Systems. Proceedings of the IEEE, 2008. 96(7): p. 1073-<br />

1075.<br />

118. Tanaka, T., et al. Fully Implantable Retinal Prosthesis Chip with Photodetector and Stimulus Current Generator. in Electron Devices Meeting, 2007. IEDM 2007. IEEE International. 2007.

119. Kuanfu, C. and L. Wentai. Highly programmable digital controller for high-density epi-retinal prosthesis. in Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE. 2009.

120. Wyatt J. Rizzo J., Ocular implants for the blind. Spectrum, IEEE, 1996.<br />

33(5): p. 47-53.<br />

121. Fornos, A.P., et al., Simulation of Artificial Vision, III: Do the Spatial or<br />

Temporal Characteristics of Stimulus Pixelization Really Matter?<br />

10.1167/iovs.04-1173. Invest. Ophthalmol. Vis. Sci., 2005. 46(10): p. 3906-3912.<br />

122. Zrenner, E., Will <strong>Retinal</strong> Implants Restore Vision?<br />

10.1126/science.1067996. Science, 2002. 295(5557): p. 1022-1025.<br />

123. Theogarajan L. Wyatt J. Rizzo J. Drohan B. Markova M. Kelly S. Swider G.<br />

Raj M.Shire D. Gingerich M. Lowenstein J. Yomtov B. Minimally Invasive<br />

<strong>Retinal</strong> <strong>Prosthesis</strong>. in ISSCC 2006. Digest of Technical Papers. IEEE<br />

International Solid-State Circuits Conference, 2006. 2006.<br />

124. Sivaprakasam, M., et al., Challenges in System and Circuit Design for High<br />

Density <strong>Retinal</strong> <strong>Prosthesis</strong>. Life Science Systems and Applications<br />

Workshop, 2006. IEEE/NLM, 2006: p. 1-2.<br />



125. Wei, H. and X. Guan. The Simulation of Early Vision in Biological Retina and Analysis of Its Performance. in Image and Signal Processing, 2008. CISP '08. Congress on. Vol. 4. 2008.

126. Weiland, J.D. and M.S. Humayun, Old idea, new technology. Engineering in<br />

Medicine and Biology Magazine, IEEE, 2005. 24(5): p. 12-13.<br />

127. Pramassing, F., et al. Intraocular vision aid (IOS): optical signal<br />

transmission and image generation. in Engineering in Medicine and Biology<br />

Society, 2000. Proceedings of the 22nd Annual International Conference of<br />

the IEEE. 2000.<br />

128. Scribner, D., et al., A <strong>Retinal</strong> <strong>Prosthesis</strong> Technology Based on CMOS<br />

Microelectronics and Microwire Glass Electrodes. Biomedical Circuits and<br />

Systems, IEEE Transactions on, 2007. 1(1): p. 73-84.<br />

129. Mokwa, W. <strong>Retinal</strong> implants to restore vision in blind people. in Solid-State<br />

Sensors, Actuators and Microsystems Conference (TRANSDUCERS), 2011<br />

16th International.<br />

130. Stitt, J.P., et al. An artificial neural network for neural spike classification. in<br />

Bioengineering Conference, 1997., Proceedings of the IEEE 1997 23rd<br />

Northeast. 1997.<br />

131. Kalayci, T. and O. Ozdamar, Wavelet preprocessing for automated neural<br />

network detection of EEG spikes. Engineering in Medicine and Biology<br />

Magazine, IEEE, 1995. 14(2): p. 160-166.<br />

132. Safwan, S., W.E. Faller, and M.W. Luttges. Fuzzy analyses of biological<br />

information processing. in Engineering in Medicine and Biology Society,<br />

1994. Engineering Advances: New Opportunities for Biomedical Engineers.<br />

Proceedings of the 16th Annual International Conference of the IEEE. 1994.<br />

133. Kutlu, Y., Y. Isler, and D. Kuntalp. Detection of Spikes with Multiple Layer<br />

Perceptron Network Structures. in Signal Processing and Communications<br />

Applications, 2006 IEEE 14th. 2006.<br />

134. Kasabov, N., L. Benuskova, and S.G. Wysoski. A computational<br />

neurogenetic model of a spiking neuron. in Neural Networks, 2005. IJCNN<br />

'05. Proceedings. 2005 IEEE International Joint Conference on. 2005.<br />

135. Sovierzoski, M.A., L. Schwarz, and F. Azevedo. Binary Neural Classifier of<br />

Raw EEG Data to Separate Spike and Sharp Wave of the Eye Blink Artifact.<br />

in Natural Computation, 2009. ICNC '09. Fifth International Conference on.<br />

2009.<br />

136. Bidarte U. Ezquerra J. A. Zuloaga A. Martin J. L., VHDL Modeling of an<br />

Adaptive Architecture<br />

for Real-Time Image Enhancement. Proceedings of the Fall VIUF Workshop, 1999.<br />

137. Serrano-Gotarredona T. Linares-Barranco B. Andreou A. G., Programmable<br />

Kernel Analogue VLSI Convolution Chip for Real Time Vision Processing.<br />

International Joint Conference on Neural Networks, 2000. 4.<br />

138. Serrano-Gotarredona T. Andreou A. G. Linares-Barranco B., A 2D Filtering<br />

Architecture for Real-Time Vision Processing Systems. IEEE Computer<br />

Society, 1999: p. 415.<br />

139. Indiveri G. Whatley A. M. Kramer J., A Reconfigurable Neuromorphic VLSI<br />

Multi-Chip System Applied to Visual Motion Computation. Microelectronics<br />

for Neural, Fuzzy and Bio-inspired Systems, 1999: p. 37 - 44.<br />

140. Kameda S. Yagi T., An analog silicon retina with multichip configuration.<br />

IEEE Transactions on Neural Networks, 2006. 17(1): p. 197-210.<br />



141. Indiveri G. Chicca E. Douglas R., A VLSI Array of Low-Power Spiking<br />

Neurons and Bistable Synapses with Spike Timing Dependent Plasticity.<br />

IEEE transactions on neural networks, 2006. 17.<br />

142. Hebb, D.O., The Organisation of Behaviour. 1949: Wiley.<br />

143. Linares-Barranco A. Senhadj-Navarro R. Garcia-Vargas I. Gomez-Rodriguez<br />

F. Jimenez G. Civit A., Synthetic Generation of Address-Events for Real-<br />

Time Image Processing. Emerging Technologies and Factory Automation<br />

Proceedings, 2003. 2: p. 462 - 467.<br />

144. Linares-Barranco A. Paz-Vicente R. Jimenez G. Pedreno-Molina J. L.<br />

Molina-Vilplana J., AER Neuro-Inspired Interface to Anthropomorphic<br />

Robotic Hand. International Joint Conference on Neural Networks, 2006: p.<br />

1497 - 1504.<br />

145. Paz-Vicente R. Linares-Barranco A. Jimenez G. Civit A., PCI to AER<br />

Hardware/Software Interface for Real-Time Vision Processing. World<br />

Automation Congress, 2004, Proceedings, 2004. 18: p. 55 - 62.<br />

146. Lazarro J. Wawrzynek J., A Multi-Sender Asynchronous Extension to the<br />

AER Protocol. Sixteenth Conference on Advanced Research in VLSI, 1995:<br />

p. 158 - 169.<br />

147. Serrano-Gotarredona R. Serrano-Gotarredona T. Acosta-Jimenez A. J.<br />

Linares-Barranco B., An Arbitrary Kernel Convolution AER-Transceiver<br />

Chip for Real-Time Image Filtering. IEEE Symposium on Circuits and<br />

Systems, 2006.<br />

148. Serrano-Gotarredona T. Andreou A. G. Linares-Barranco B., AER Image<br />

Filtering Architecture for Vision-Processing Systems. IEEE Transactions on<br />

Circuits and Systems 1: Fundamental Theory and Applications, 1999. 46(9):<br />

p. 1064 - 1071.<br />

149. Gomez-Rodriguez F. Paz R. Linares-Barranco A. Rivas M. Miro L. Vicente<br />

S. Jimenez G. Civit A. AER tools for communications and debugging. in<br />

IEEE International Symposium on Circuits and Systems. 2006.<br />

150. Choi T. Y. W. Shi B. E. Boahen K., An orientation selective 2D AER<br />

transceiver. ISCAS '03. Proceedings of the 2003 International Symposium on<br />

Circuits and Systems, 2003. 4: p. IV-800-IV-803 vol.4.<br />

151. Thiago T. Culurciello E. Andreou A.G., An Address-Event Image Sensor<br />

Network. 2006 IEEE International Symposium on Circuits and Systems,<br />

2006: p. 4467-4470.<br />

152. Linares-Barranco, A., et al., Poisson AER generator: inter-spike-intervals<br />

analysis. ISCAS 2006. Proceedings. 2006 IEEE International Symposium on<br />

Circuits and Systems, 2006: p. 4 pp.-3152.<br />

153. Patel G.N. Reid M.S. Schimmel D.E. DeWeerth S.P., An asynchronous<br />

architecture for modeling intersegmental neural communication. IEEE<br />

Transactions on Very Large Scale Integration (VLSI) Systems, 2006. 14(2):<br />

p. 97-110.<br />

154. Lichtsteiner P. Delbruck T., A 64x64 aer logarithmic temporal derivative<br />

silicon retina. 2005 PhD Research in Microelectronics and Electronics, 2005.<br />

2: p. 202-205.<br />

155. Boussaid F. Chen S. Bermak A., A scalable low power imager architecture<br />

for compound-eye vision sensors. Proceedings. Fifth International Workshop<br />

on System-on-Chip for Real-Time Applications, 2005: p. 203-206.<br />

156. Linares-Barranco B. Serrano-Gotarredona T. Serrano-Gotarredona R. Costas-Santos J., A new charge-packet driven mismatch-calibrated integrate-and-fire neuron for processing positive and negative signals in AER based systems. ISCAS '04. Proceedings of the 2004 International Symposium on Circuits and Systems, 2004. 5: p. V-744-V-747 Vol.5.

157. Paz-Vicente, R., et al. Synthetic retina for AER systems development. in<br />

Computer Systems and Applications, 2009. AICCSA 2009. IEEE/ACS<br />

International Conference on. 2009.<br />

158. Serrano-Gotarredona, T., A.G. Andreou, and B. Linares-Barranco, AER<br />

image filtering architecture for vision-processing systems. Circuits and<br />

Systems I: Fundamental Theory and Applications, IEEE Transactions on,<br />

1999. 46(9): p. 1064-1071.<br />

159. Dominguez-Morales, M.J., et al. Performance study of synthetic AER<br />

generation on CPUs for Real-Time Video based on Spikes. in Performance<br />

Evaluation of Computer & Telecommunication Systems, 2009. SPECTS<br />

2009. International Symposium on. 2009.<br />

160. Choi T. Y. W. Merolla P. A. Arthur J. V. Boahen K. A. Shi B. E., Neuromorphic Implementation of Orientation Hypercolumns. IEEE Transactions on Circuits and Systems I: Regular Papers, 2005. 52(6): p. 1049-1060.

161. Hafliger Ph., Asynchronous event redirecting in bio-inspired communication.<br />

ICECS 2001. The 8th IEEE International Conference on Electronics, Circuits<br />

and Systems, 2001. 1: p. 87-90 vol.1.<br />

162. Boahen K.A., A retinomorphic vision system. Micro, IEEE, 1996. 16(5): p.<br />

30-39.<br />

163. Zhang L. Fang Z. Parker M. Mathew B. K. Schaelicke L. Carter J. B. Hsieh<br />

W. C. McKee S. A., The Impulse memory controller. IEEE Transactions on<br />

Computers,, 2001. 50(11): p. 1117-1132.<br />

164. Lazzaro J. Wawrzynek J. Mahowald M. Sivilotti M. Gillespie D., Silicon<br />

auditory processors as computer peripherals. IEEE Transactions on Neural<br />

Networks, 1993. 4(3): p. 523-528.<br />

165. Boussaid F. Shoushun C. Bermak A., A scalable low power imager for<br />

compound-eye vision sensors. Proceedings of the International Database<br />

Engineering and Application Symposium(IDEAS '05), 2005.<br />

166. Choi T. Y. W. Shi B.E. Boahen K. A., An ON-OFF orientation selective<br />

address event representation image transceiver chip. IEEE Transactions on<br />

Circuits and Systems I: Regular Papers,, 2004. 51(2): p. 342-353.<br />

167. Rivas M. Gomez-Rodriguez F. Paz R. Linares-Barranco A. Vicente S.<br />

Cascado D., Tools for Address-Event Representation Communication<br />

Systems and Debugging. Lecture notes in computer science, 2005.<br />

168. Songnian, Z., et al., Neural computation of visual imaging based on<br />

Kronecker product in the primary visual cortex. BMC Neuroscience. 11(1):<br />

p. 43.<br />

169. Gosalia, K., G. Lazzi, and M. Humayun, Investigation of a microwave data<br />

telemetry link for a retinal prosthesis. Microwave Theory and Techniques,<br />

IEEE Transactions on, 2004. 52(8): p. 1925-1933.<br />

170. Trimberger S. Carberry D. Johnson A. Wong J., A time-multiplexed FPGA.<br />

FPGAs for Custom Computing Machines, 1997. Proceedings., The 5th<br />

Annual IEEE Symposium on, 1997: p. 22-28.<br />

171. Sivaprakasam, M., et al., Towards a Modular 32 x 32 Pixel Stimulator for<br />

<strong>Retinal</strong> <strong>Prosthesis</strong>. Life Science Systems and Applications Workshop, 2006.<br />

IEEE/NLM, 2006: p. 1-2.<br />



172. Singh, P.R., et al. A matched biphasic microstimulator for an implantable<br />

retinal prosthetic device. in Circuits and Systems, 2004. ISCAS '04.<br />

Proceedings of the 2004 International Symposium on. 2004.<br />

173. Theogarajan, L.S., A Low-Power Fully Implantable 15-Channel <strong>Retinal</strong><br />

Stimulator Chip. Solid-State Circuits, IEEE Journal of, 2008. 43(10): p.<br />

2322-2337.<br />

174. Margalit, E., et al., <strong>Retinal</strong> <strong>Prosthesis</strong> for the Blind. Survey of<br />

Ophthalmology, 2002. 47(4): p. 335-356.<br />

175. Dagnelie G., Visual Prosthetics Physiology, Bioengineering, Rehabilitation.<br />

2011.<br />

176. Mokwa, W., et al. Intraocular epiretinal prosthesis to restore vision in blind<br />

humans. in Engineering in Medicine and Biology Society, 2008. EMBS 2008.<br />

30th Annual International Conference of the IEEE. 2008.<br />

177. Keman Yu Jiang Li Jizheng Xu Shipeng Li, Very low bit rate watercolor<br />

video. Proceedings of the 2003 International Symposium on Circuits and<br />

Systems, 2003. 2: p. II-712-II-715 vol.2.<br />

178. Min Hye, C., et al. A comparison of text reading speed using square and<br />

rectangular arrays for visual prosthesis. in Information Technology and<br />

Applications in Biomedicine, 2009. ITAB 2009. 9th International Conference<br />

on. 2009.<br />

179. Flexible, bus-powered USB high-speed storage is a reality - Texas<br />

Instruments. IEE Review, 2005. 51(5): p. 5-5.<br />

180. Ferwerda, J.A., Elements of early vision for computer graphics. Computer<br />

Graphics and Applications, IEEE, 2001. 21(5): p. 22-33.<br />

181. Todd, M. and R. Wilson. An anisotropic multi-resolution image data<br />

compression algorithm. in Acoustics, Speech, and Signal Processing, 1989.<br />

ICASSP-89., 1989 International Conference on. 1989.<br />

182. Es, A. and V. Isler. GPU based real time stereoscopic ray tracing. in<br />

Computer and information sciences, 2007. iscis 2007. 22nd international<br />

symposium on. 2007.<br />

183. Shi R. Z. Horiuchi T. K., A VLSI Model of the Bat Dorsal Nucleus of the<br />

Lateral Lemniscus for Azimuthal Echolocation. IEEE Trans. Circuits Syst. II,<br />

2005. 47(5).<br />

184. Chicca E. Lichtsteiner P. Delbruck T. Indiveri G. Douglas R.J., Modeling<br />

orientation selectivity using a neuromorphic multi-chip system. ISCAS 2006.<br />

Proceedings. 2006 IEEE International Symposium on Circuits and Systems,<br />

2006., 2006: p. 4 pp.<br />

185. Cooper, J., et al., Determination of safety distance limits for a human near a<br />

cellular base station antenna, adopting the IEEE standard or ICNIRP<br />

guidelines. Bioelectromagnetics, 2002. 23(6): p. 429-443.<br />

186. Gye-Hwan J. Jang Hee Y. Tae Soo L. Yong Sook G., Electrical Stimulation<br />

of Isolated Rabbit Retina. 27th Annual International Conference of the<br />

Engineering in Medicine and Biology Society, 2005: p. 5967-5970.<br />

187. Finn W.E. LoPresti P.G., Wavelength and intensity dependence of retinal<br />

evoked responses using in vivo optic nerve recording. IEEE Transactions on<br />

Neural Systems and Rehabilitation Engineering, 2003. 11(4): p. 372-376.<br />

188. Ohta J Tokuda T. Kagawa K. Furumiya T. Uehara A. Terasawa Y. Ozawa M.<br />

Fujikado T. Tano Y., Silicon LSI-based smart stimulators for retinal<br />

prosthesis. Engineering in Medicine and Biology Magazine, IEEE, 2006.<br />

25(5): p. 47-59.<br />



189. Ogmen, H. and M.H. Herzog, The Geometry of Visual Perception:<br />

Retinotopic and Nonretinotopic Representations in the Human Visual System.<br />

Proceedings of the IEEE. 98(3): p. 479-492.<br />

190. Kyuel, T., W. Geisler, and J. Ghosh, <strong>Retinal</strong>ly reconstructed images: digital<br />

images having a resolution match with the human eye. Systems, Man and<br />

Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 1999.<br />

29(2): p. 235-243.<br />

191. Tianyi, Y., J. Fengzhe, and W. Jinglong. Central versus peripheral<br />

retinotopic and temporal frequency sensitivities of human visual areas<br />

measured using fMRI. in Mechatronics and Automation, 2009. ICMA 2009.<br />

International Conference on. 2009.<br />

192. Deutsch, S. and A. Deutsch, An Engineering Perspective. Understanding the<br />

Nervous System. 1993.<br />

193. Patton H. Fuchs A. F., Textbook of Physiology: Excitable Cells and<br />

Neurophysiology. 21st ed. Vol. 1. 1989.<br />

194. Chow, A. First trials and future technologies for artificial retinas. in Lasers<br />

and Electro-Optics Society, 2001. LEOS 2001. The 14th Annual Meeting of<br />

the IEEE. 2001.<br />

195. Benav, H., et al. Restoration of useful vision up to letter recognition<br />

capabilities using subretinal microphotodiodes. in Engineering in Medicine<br />

and Biology Society (EMBC), 2010 Annual International Conference of the<br />

IEEE.<br />

196. Zrenner, E., et al., Subretinal electronic chips allow blind patients to read<br />

letters and combine them to words. Proceedings of the Royal Society B:<br />

Biological Sciences, 2012. 278(1711): p. 1489-1497.<br />

197. Uehara, A., et al. System implementation of a CMOS vision chip for visual<br />

recovery. in Sensors and Camera Systems for Scientific, Industrial, and<br />

Digital Photography Applications IV. 2003. Santa Clara, CA, USA: SPIE.<br />

198. Rolando, C.A., J.F. Carmelo, and M.C. Elisa, Neuromorphic model of<br />

magnocellular and parvocellular visual paths: spatial resolution. Journal of<br />

Physics: Conference Series, 2007. 90(1): p. 012099.<br />

199. Croner, L.J. and E. Kaplan, Receptive fields of P and M ganglion cells across<br />

the primate retina. Vision Research, 1995. 35(1): p. 7-24.<br />

200. Mead, C., Neuromorphic electronic systems. Proceedings of the IEEE, 1990.<br />

78(10): p. 1629-1636.<br />

201. Mahowald, M., VLSI analogs of neuronal visual processing : a synthesis of<br />

form and function. 1992, California Institute of Technology, Computer<br />

Science Dept.: Pasadena, Calif.<br />

202. Chen, J.S., A. Huertas, and G. Medioni, Fast Convolution with Laplacian-of-<br />

Gaussian Masks. Pattern Analysis and Machine Intelligence, IEEE<br />

Transactions on, 1987. PAMI-9(4): p. 584-590.<br />

203. Young, R.A., The Gaussian derivative model for spatial vision: I. <strong>Retinal</strong><br />

mechanisms. Spatial Vision, 1987. 2(4): p. 273-293.<br />

204. Zukal, M., P. Cika, and R. Burget. Evaluation of interest point detectors for<br />

scenes with changing lightening conditions. in Telecommunications and<br />

Signal Processing (TSP), 2011 34th International Conference on.<br />

205. Rodriguez-Vazquez, A., et al., ACE16k: the third generation of mixed-signal<br />

SIMD-CNN ACE chips toward VSoCs. IEEE Transactions on Circuits and<br />

Systems I: Regular Papers,, 2004. 51(5): p. 851-863.<br />



206. Stieglitz, T., et al. Development of flexible stimulation devices for a retina<br />

implant system. in Engineering in Medicine and Biology Society, 1997.<br />

Proceedings of the 19th Annual International Conference of the IEEE. 1997.<br />

207. Humayun, M.S., et al., Pattern electrical stimulation of the human retina.<br />

Vision Research, 1999. 39(15): p. 2569-2576.<br />

208. Cogan, S.F., Neural Stimulation and Recording Electrodes. Annual Review<br />

of Biomedical Engineering, 2008. 10(1): p. 275-309.<br />

209. Humayun, M.S., et al., Visual perception in a blind subject with a chronic<br />

microelectronic retinal prosthesis. Vision Research, 2003. 43(24): p. 2573-<br />

2581.<br />

210. Johnson, L., et al., Impedance-based retinal contact imaging as an aid for the<br />

placement of high resolution epiretinal prostheses. Journal of Neural<br />

Engineering, 2007. 4(1): p. S17-23.<br />

211. Tasman, W. and E.A. Jaeger, Duane's Ophthalmology. 2009, Lippincott<br />

Williams & Wilkins/Ovid.<br />

212. Encyclopaedia Britannica, human eye. http://www.britannica.com/EBchecked/topic/1688997/human-eye, 2012.

213. Dacey, D.M. and O.S. Packer, Colour coding in the primate retina: diverse<br />

cell types and cone-specific circuitry. Current opinion in neurobiology, 2003.<br />

13(4): p. 421-7.<br />

214. Szmajda, B.A., U. Grunert, and P.R. Martin, <strong>Retinal</strong> ganglion cell inputs to<br />

the koniocellular pathway. The Journal of comparative neurology, 2008.<br />

510(3): p. 251-68.<br />

215. Paulo E. Stanga, F.H., Jose A. Sahel, Lyndon daCruz, Francesco Merlini, Brian Coley, Robert J. Greenberg, Argus II Study Group., Patients Blinded By Outer Retinal Dystrophies Are Able To Perceive Color Using The Argus II Retinal Prosthesis System.

216. Kandagor, V., et al. Spatial characterization of electric potentials generated<br />

by pulsed microelectrode arrays. in Engineering in Medicine and Biology<br />

Society (EMBC), 2010 Annual International Conference of the IEEE.<br />

217. Kendir, G.A., et al., An optimal design methodology for inductive power link<br />

with class-E amplifier. Circuits and Systems I: Regular Papers, IEEE<br />

Transactions on, 2005. 52(5): p. 857-866.<br />

218. Chen, P.-J., et al. Implantable Flexible-Coiled Wireless Intraocular Pressure<br />

Sensor. in Micro Electro Mechanical Systems, 2009. MEMS 2009. IEEE<br />

22nd International Conference on. 2009.<br />

219. Singh, V., et al. Bioelectromagnetics for a Retinal Prosthesis to Restore Partial Vision to the Blind. in Electromagnetics in Advanced Applications, 2007. ICEAA 2007. International Conference on. 2007.

220. Troyk, P.R. and A.D. Rush. Inductive link design for miniature implants. in<br />

Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual<br />

International Conference of the IEEE. 2009.<br />

221. Mingui, S., et al., Passing data and supplying power to neural implants.<br />

Engineering in Medicine and Biology Magazine, IEEE, 2006. 25(5): p. 39-<br />

46.<br />

222. Guoxing, W., et al. A Wireless Phase Shift Keying Transmitter with Q-<br />

Independent Phase Transition Time. in 27th Annual International Conference<br />

of the Engineering in Medicine and Biology Society. 2005.<br />



223. Mingcui, Z., et al. A Transcutaneous Data Telemetry System Tolerant to<br />

Power Telemetry Interference. in Engineering in Medicine and Biology<br />

Society, 2006. EMBS '06. 28th Annual International Conference of the IEEE.<br />

2006.<br />

224. Guoxing, W., et al. A Dual Band Wireless Power and Data Telemetry for<br />

<strong>Retinal</strong> <strong>Prosthesis</strong>. in Engineering in Medicine and Biology Society, 2006.<br />

EMBS '06. 28th Annual International Conference of the IEEE. 2006.<br />

225. Singh, V., et al. SAR in the human body by a wireless telemetry system for a retinal prosthesis. in Antennas and Propagation Society International Symposium, 2007 IEEE. 2007.

226. Lazzi G. DeMarco S.C. Liu W. Weiland J.D. Humayun M.S., Computed SAR<br />

and thermal elevation in a 0.25-mm 2-D model of the human eye and head in<br />

response to an implanted retinal stimulator - part II: results. Antennas and<br />

Propagation, IEEE Transactions on, 2003. 51(9): p. 2286-2295.<br />

227. Singh, V., et al., On the Thermal Elevation of a 60-Electrode Epiretinal<br />

<strong>Prosthesis</strong> for the Blind. Biomedical Circuits and Systems, IEEE<br />

Transactions on, 2008. 2(4): p. 289-300.<br />

228. Opie, N.L., et al. Thermal heating of a retinal prosthesis: Thermal model and<br />

in-vitro study. in Engineering in Medicine and Biology Society (EMBC),<br />

2010 Annual International Conference of the IEEE. 2010.<br />

229. Jeng-Shyong, S., et al., Electrical Stimulation in Isolated Rabbit Retina.<br />

Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 2006.<br />

14(3): p. 290-298.<br />

230. Kendir, G.A., et al., An optimal design methodology for inductive power link<br />

with class-E amplifier. Circuits and Systems I: Regular Papers, IEEE<br />

Transactions on, 2005. 52(5): p. 857-866.<br />

231. Zaghloul, K.A. and K. Boahen, Optic nerve signals in a neuromorphic chip<br />

II: testing and results. Biomedical Engineering, IEEE Transactions on, 2004.<br />

51(4): p. 667-675.<br />

232. Bratkova, M., S. Boulos, and P. Shirley, oRGB: A Practical Opponent Color<br />

Space for Computer Graphics. Computer Graphics and Applications, IEEE,<br />

2009. 29(1): p. 42-55.<br />

%Appendix A MATLAB
%ycut.m is this main file
%this program runs the test frames, transmits the events and then shows the
%received events; the function file "pwork.m" is at the heart of this operation
%
%required files: mixedsource.m (test frames) and the following function files:
%pwork.m, scan.m, subi.m and, to show the neuromorphics, ysend.m.
%
%tidy;
set (0,'DefaultFigureWindowStyle','docked')
clear, close all, imtool close all
clc
%mixedsource;
xmixedsource;
numberOfFrames = 8;
for t = 1: numberOfFrames
    %Load image
    priortime = t-1;
    gfr = imread (sprintf ('mixed%d.bmp', priortime));
    %gfr = imresize (gfr, 16);
    currentTime = t;
    fhr = imread (sprintf ('mixed%d.bmp', currentTime));
    %fhr = imresize (fhr, 16);
    changes = pwork (gfr, fhr);
    if t ~= numberOfFrames
        delayUser = input ('Next frame? (enter): ');
    end
end
set (0,'DefaultFigureWindowStyle','normal')

%Showing neuromorphics (ysend.m)
function [x] = ysend (x)
tic
rgb = double (x)/255;
[nx, ny, nz] = size (rgb);
r = reshape (rgb, nx*ny*nz, 1);
%Let's calculate the row value of this reshaped vector
rvalue = nx*ny*nz;
samplenum = 50;
fprintf ('You have chosen %4.f samples\n', samplenum);
%modulate the pixels as spike trains of "samplenum" samples; a spike is
%registered on each phase wrap (the comparison below was lost in the source
%transcription and is reconstructed as "< 0")
spike = (diff (mod (kron (r/10, 0: samplenum)', 1)) < 0);
%fprintf ('rgb2 is: %3.f rows, %3.f columns, colour planes: %0.f \n', t1, t2, t3);
%Next line gives the last showing of the scene
toc
timeinminutes = toc/60
%set (0,'DefaultFigureWindowStyle','normal')
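The modulation above encodes each normalised channel value as the number of phase wraps accumulated over samplenum steps, so a brighter hue yields a higher spike count. The fragment below is a minimal illustration, not part of the original listing (the helper names spikes_for and approx are hypothetical): it shows the spike counts this construction produces for a few 8-bit intensities, together with the counting step a receiver would use to recover an intensity estimate.

%Illustration only: spike counts produced by the ysend.m modulation.
samplenum = 50;
spikes_for = @(v) sum (diff (mod (kron ((double (v)/255)/10, 0: samplenum)', 1)) < 0);
for v = [0 64 127 255]
    fprintf ('intensity %3d -> %d spikes in %d samples\n', v, spikes_for (v), samplenum);
end
%Counting spikes (as the pulse counters of Appendices B and D do in hardware)
%recovers a coarse intensity estimate:
approx = @(n) round (n * 10 * 255 / samplenum);
fprintf ('5 spikes decode to an intensity of about %d\n', approx (5));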

function changes = pwork (grandfather, father)
square = 1; %Change this as required, 16 for 128 square from 8 square.
grandfatherscene = imresize (grandfather, square);
[d1, d2, d3] = size (grandfatherscene);
%The supporting function file called here concerns the control word
grandfatherscene = scan (grandfatherscene, d1, d2);
[a1, a2, a3] = size (grandfatherscene);
subplot (2, 3, 1)
imshow (grandfatherscene (:,:, 1:3)), title ('Ref. scene (t-1)');
%figure, imshow (grandfatherscene (:,:, 1:3)), title ('Ref. scene (t-1)');
%Load up the cell array with the pixels of the test scene
row = 1; %Setting dissection
col = 1; %Setting dissection
k = 0;
for rx = 1: row: d1
    % b = num2str (rx);
    for cy = 1: col: d2
        % J = num2str (cy);
        k = k + 1;
        check {k} = subi (grandfatherscene, row, col, rx, cy);
        %check {k}
        %check {k} = imread (['D:\abc\subimage (' b ' th row) (' J ' th column).bmp']);
    end
end
%CHANGE THE REFERENCE SCENE BY A CERTAIN AMOUNT
%chooseanimage
fatherscene = imresize (father, square);
[b1, b2, b3] = size (fatherscene);
subplot (2, 3, 2)
imshow (fatherscene (:,:, 1:3)), title ('The scene changed');
%The supporting function file called here concerns the control word
fatherscene = scan (fatherscene, b1, b2);
[h1, h2, h3] = size (fatherscene);
%AT TIME - t i.e. the second reference
%figure, imshow (fatherscene (:,:, 1:3)), title ('At time "t"');
rowf = 1; %Setting dissection
colf = 1; %Setting dissection
kf = 0;
for rxf = 1: rowf: h1
    for cyf = 1: colf: h2
        kf = kf + 1;
        %Load up the cell array with the pixels of the test scene
        checkf {kf} = subi (fatherscene, rowf, colf, rxf, cyf);
        %check {k} = imread (['D:\abc\subimage (' b ' th row) (' J ' th column).bmp']);
    end
end
fprintf ('Therefore number of subimages (kf) will be %4.f for the test scene\n', kf);
fprintf ('and the square root of kf will equal: %4.f\n', sqrt (kf));
%diary on
%diary worklog.txt
%COMPARE THIS CHANGED SCENE AT t TO THE REFERENCE SCENE AT (t-1)
%fid = fopen ('exp.txt', 'w');
%I.E. fatherscene compared to grandfatherscene.
%imshow (check {1} (:,:, 1:3))
rowf = 1; %Initialising rowf
colf = 1; %Initialising colf
k = 0; %Initialising k
kf = 0; %Initialising kf
ka = 0;
for rxf = 1: rowf: h1
    for cyf = 1: colf: h2
        k = k + 1;
        kf = kf + 1;
        if (check{k}(:,:,1)) ~= (checkf{kf}(:,:,1)) | (check{k}(:,:,2)) ~= ...
                (checkf{kf}(:,:,2)) | (check{k}(:,:,3)) ~= (checkf{kf}(:,:,3)) ...
                | (check{k}(:,:,4)) ~= (checkf{kf}(:,:,4)) | (check{k}(:,:,5)) ~= ...
                (checkf{kf}(:,:,5)) | (check{k}(:,:,6)) ~= (checkf{kf}(:,:,6))
            ka = ka + 1;
            checka {ka} (:,:, 1) = checkf {kf} (:,:, 1);
            checka {ka} (:,:, 2) = checkf {kf} (:,:, 2);
            checka {ka} (:,:, 3) = checkf {kf} (:,:, 3);
            checka {ka} (:,:, 4) = checkf {kf} (:,:, 4);
            checka {ka} (:,:, 5) = checkf {kf} (:,:, 5);
            checka {ka} (:,:, 6) = checkf {kf} (:,:, 6);
            %sonscene(:,:,:) = [checka{ka}(:,:,1), checka{ka}(:,:,2), checka{ka}(:,:,3)]
            %sonscene = checka {ka};
            % address = [checka {ka} (:,:, 5); checka {ka} (:,:, 4)];
            % fprintf (fid, '%2.f %2.f\n', address);
        end
    end
    %dlmwrite ('test.txt', kf, 'delimiter', '\t', 'newline', 'pc');
end
%fclose (fid);
for count = 1: ka
    changes (:,:,:) = cell2mat (checka);
end
%Transmit changes i.e. from the send device to the receive device
%Note that the send function deals with the neuromorphic waveform
receive = ysend (changes);
%ASSIGN THE SCENE PRESENTLY AT THE RECEIVER AS THE RECEIVED REFERENCE SCENE,
%WHICH WILL INCORPORATE THE CHANGES TO BECOME THE NEW SCENE.
tprior = uint8 (grandfatherscene);
%Essentially tprior was the (t-1) scene
% [w1, w2, w3] = size (tprior);
%fprintf ('tprior is %3.f rows, %3.f columns, %0.f colour planes. \n', w1, w2, w3);
% [z1, z2, z3] = size (receive);
%fprintf ('receive is %3.f rows, %3.f columns, %0.f colour planes\n', z1, z2, z3);
%From "load up the cell array" to here for the scene presently at the receiver
%ALTER THIS SCENE RECEIVED AT (t-1) {tprior} BY THE CHANGES
NumOfChanges = ka;
for ka = 1: NumOfChanges %Where "NumOfChanges" refers to the changed pixel count
    %Setting variables "row" and "col" from the address planes
    row = checka {ka} (:,:, 5);
    col = checka {ka} (:,:, 4);
    tprior (row, col, 1:6) = checka {ka};
    %checks {ks} = checka {ka};
    %sonscene(:,:,:) = [checka{ka}(:,:,1), checka{ka}(:,:,2), checka{ka}(:,:,3)]
    %sonscene = checka {ka};
    % address = [checka {ka} (:,:, 5); checka {ka} (:,:, 4)];
    % fprintf (fid, '%2.f %2.f\n', address);
end
subplot (2, 3, 5)
imshow (tprior (:,:, 1:3)), title ('Tprior with changes');
%figure, imshow (tprior (:,:, 1:3)), title ('Tprior with changes');
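pwork.m above walks the scene pixel by pixel and treats every pixel whose planes differ between the reference scene and the changed scene as an event carrying its address and colour values. The fragment below is a vectorised sketch of the same change-detection idea, given only as an illustration rather than a drop-in replacement for pwork.m; it assumes the test frames written by xmixedsource.m are present in the current folder.

%Illustration: list the changed pixels of two test frames as events of the
%form [row, column, R, G, B].
f_prev = imread ('mixed0.bmp');        %reference scene at (t-1)
f_curr = imread ('mixed1.bmp');        %changed scene at time t
changed = any (f_prev ~= f_curr, 3);   %true wherever any colour plane differs
[rows, cols] = find (changed);
events = zeros (numel (rows), 5);
for e = 1: numel (rows)
    events (e, :) = [rows(e), cols(e), squeeze (double (f_curr (rows(e), cols(e), :)))'];
end
fprintf ('%d of %d pixels changed between the two frames\n', size (events, 1), numel (changed));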

%Name of this file is "scan.m", which adds two further planes to an image.
%These planes represent the "Control word" of addressing for a 128
%by 128 test scene for use by AER routing.
function [f] = scan (f, m, n)
%SCAN adds further planes
pixelcount = m*n;
%Setting up the yellow plane
%fR = grandfatherscene (:,:, 1);
%fG = grandfatherscene (:,:, 2);
%c = 255 - fR;
%m = 255 - fG;
%grandfatherscene (:,:, 4) = c;
%grandfatherscene (:,:, 5) = m;
for count = 1: (m*n)
    fB = f (:,:, 3);
    y = (255 - fB);
    f (:,:, 6) = y;
    count = count + 1;
end
%Getting the size of the inputted scene
% [d1, d2, d3] = size (f);
%fprintf ('\nThe rows of this reference scene are %4.f pixels\n', d1);
%fprintf ('The columns of this reference scene are %4.f pixels\n', d2);
%fprintf ('%0.f colour planes exist in this viewed scene\n', d3);
rx = 1;
cy = 1;
rowhigh = rx + m - 1;
colhigh = cy + n - 1;
%Two for loops: the first (outer) loop assigns the row number for every
%column of the image.
%The second (inner) loop assigns column numbers for every row of the outer
%loop. Note that the row numbers are stored on plane 5 and the columns on plane 4.
for r = rx: rowhigh
    for c = cy: colhigh
        %Assigning column numbers
        f(r, c, 4) = c;
        %Assigning row numbers (this statement and the closing "end"s were missing
        %from the source listing and are reconstructed from the comment above)
        f(r, c, 5) = r;
    end
end
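scan.m therefore leaves each pixel with six planes: the R, G and B values, the column address on plane 4, the row address on plane 5 (as the comments above and the use of these planes in pwork.m indicate) and a yellow (255 - B) plane on plane 6. As a small, stand-alone illustration of the control word, the address planes of a 4-by-4 scene simply contain:

%Illustration: the control-word address planes of a 4-by-4 test scene.
[colplane, rowplane] = meshgrid (1: 4, 1: 4);  %plane 4 and plane 5 respectively
disp ('Plane 4 (column addresses):'), disp (colplane)
disp ('Plane 5 (row addresses):'), disp (rowplane)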

%Name of this file is "subim.m", which accepts input(s)<br />

function [s] = subi (f, m, n, rx, cy);<br />

%SUBIM Extracts a subimage, s, from a given image, f.<br />

%The subimage is of size m-by-n and the coordinates of<br />

%its top, left corner are (rx, cy).<br />

%maxsize = imread ('D:\testimages\maxsize.bmp');<br />

%for count = 1 :( m*n)<br />

%fB = f (: 3);<br />

%y = (255-fB);<br />

%f (: 6) = y;<br />

%count = count+1;<br />

%end<br />

%rowsofinputimage = 1000;<br />

%columnsofinputimage = 1000;<br />

%f = zeros (rowsofinputimage, columnsofinputimage,'double');<br />

%f (: 1) =0;<br />

%f (: 2) =0;<br />

%f (: 3) =0;<br />

%s = zeros (m, n,'uint8');<br />

%s (: 1) =0;<br />

%s (: 2) =0;<br />

%s (: 3) =0;<br />

%showzero = imresize(s, 10);<br />

%figure, imshow (showzero)<br />

rowhigh = rx + m -1;<br />

colhigh = cy + n -1;<br />

%xcount = 0;<br />

%for r = rx: rowhigh<br />

% xcount = xcount + 1;<br />

% ycount = 0;<br />

% for c = cy: colhigh<br />

% ycount = ycount + 1;<br />

% s (xcount, ycount, 1) = f(r, c, 1);<br />

% s (xcount, ycount, 2) = f(r, c, 2);<br />

% s (xcount, ycount, 3) = f(r, c, 3);<br />

% end<br />

%end<br />

r = rx: rowhigh;
c = cy: colhigh;
s (:,:, 1) = f(r, c, 1);
s (:,:, 2) = f(r, c, 2);
s (:,:, 3) = f(r, c, 3);
s (:,:, 4) = f(r, c, 4);
s (:,:, 5) = f(r, c, 5);
s (:,:, 6) = f(r, c, 6);
%Show the resultant image
%n = numel(s)
%showsubimage = imresize(s, 0.1);
%figure, imshow (showsubimage), title ('Final figure shown here at a tenth of actual size');


%This next test image is named as mixed8<br />

%Resetting for this new image size of eight by eight<br />

im=zeros (8, 8, 3,'uint8');<br />

rr1c1=255; rr1c2=255; rr1c3=255; rr1c4=000; rr1c5=000; rr1c6=127; rr1c7=96;<br />

rr1c8=127;<br />

gr1c1=000; gr1c2=127; gr1c3=255; gr1c4=255; gr1c5=000; gr1c6=000; gr1c7=000;<br />

gr1c8=127;<br />

br1c1=000; br1c2=000; br1c3=000; br1c4=000; br1c5=255; br1c6=127; br1c7=96;<br />

br1c8=127;<br />

%Row break<br />

rr2c1=127; rr2c2=255; rr2c3=255; rr2c4=255; rr2c5=000; rr2c6=000; rr2c7=127;<br />

rr2c8=96;<br />

gr2c1=127; gr2c2=000; gr2c3=127; gr2c4=255; gr2c5=255; gr2c6=000; gr2c7=000;<br />

gr2c8=000;<br />

br2c1=127; br2c2=000; br2c3=000; br2c4=000; br2c5=000; br2c6=255; br2c7=127;<br />

br2c8=96;<br />

%Row break<br />

rr3c1=96; rr3c2=127; rr3c3=255; rr3c4=255; rr3c5=255; rr3c6=000; rr3c7=000;<br />

rr3c8=127;<br />

gr3c1=000; gr3c2=127; gr3c3=000; gr3c4=127; gr3c5=255; gr3c6=255; gr3c7=000;<br />

gr3c8=000;<br />

br3c1=96; br3c2=127; br3c3=000; br3c4=000; br3c5=000; br3c6=000; br3c7=255;<br />

br3c8=127;<br />

%Row break<br />

rr4c1=127; rr4c2=96; rr4c3=127; rr4c4=255; rr4c5=255; rr4c6=255; rr4c7=000;<br />

rr4c8=000;<br />

gr4c1=000; gr4c2=000; gr4c3=127; gr4c4=000; gr4c5=127; gr4c6=255; gr4c7=255;<br />

gr4c8=000;<br />

br4c1=127; br4c2=96; br4c3=127; br4c4=000; br4c5=000; br4c6=000; br4c7=000;<br />

br4c8=255;<br />

%Row break<br />

rr5c1=000; rr5c2=127; rr5c3=96; rr5c4=127; rr5c5=255; rr5c6=255; rr5c7=255;<br />

rr5c8=000;<br />

gr5c1=000; gr5c2=000; gr5c3=000; gr5c4=127; gr5c5=000; gr5c6=127; gr5c7=255;<br />

gr5c8=255;<br />

br5c1=255; br5c2=127; br5c3=96; br5c4=127; br5c5=000; br5c6=000; br5c7=000;<br />

br5c8=000;<br />



%Row break<br />

rr6c1=000; rr6c2=000; rr6c3=127; rr6c4=96; rr6c5=127; rr6c6=255; rr6c7=255;<br />

rr6c8=255;<br />

gr6c1=255; gr6c2=000; gr6c3=000; gr6c4=000; gr6c5=127; gr6c6=000; gr6c7=127;<br />

gr6c8=255;<br />

br6c1=000; br6c2=255; br6c3=127; br6c4=96; br6c5=127; br6c6=000; br6c7=000;<br />

br6c8=000;<br />

%Row break<br />

rr7c1=255; rr7c2=000; rr7c3=000; rr7c4=127; rr7c5=96; rr7c6=127; rr7c7=255;<br />

rr7c8=255;<br />

gr7c1=255; gr7c2=255; gr7c3=000; gr7c4=000; gr7c5=000; gr7c6=127; gr7c7=000;<br />

gr7c8=127;<br />

br7c1=000; br7c2=000; br7c3=255; br7c4=127; br7c5=96; br7c6=127; br7c7=000;<br />

br7c8=000;<br />

%Row break<br />

rr8c1=255; rr8c2=255; rr8c3=255; rr8c4=255; rr8c5=255; rr8c6=255; rr8c7=255;<br />

rr8c8=255;<br />

gr8c1=255; gr8c2=255; gr8c3=255; gr8c4=255; gr8c5=255; gr8c6=255; gr8c7=255;<br />

gr8c8=255;<br />

br8c1=255; br8c2=255; br8c3=255; br8c4=255; br8c5=255; br8c6=255; br8c7=255;<br />

br8c8=255;<br />

%RED PLANE COORDINATES<br />

im(1,1,1) = rr1c1; im(1,2,1) = rr1c2; im(1,3,1) = rr1c3; im(1,4,1) = rr1c4;<br />

im(1,5,1) = rr1c5; im(1,6,1) = rr1c6; im(1,7,1) = rr1c7; im(1,8,1) = rr1c8;<br />

im(2,1,1) = rr2c1; im(2,2,1) = rr2c2; im(2,3,1) = rr2c3; im(2,4,1) = rr2c4;<br />

im(2,5,1) = rr2c5; im(2,6,1) = rr2c6; im(2,7,1) = rr2c7; im(2,8,1) = rr2c8;<br />

im(3,1,1) = rr3c1; im(3,2,1) = rr3c2; im(3,3,1) = rr3c3; im(3,4,1) = rr3c4;<br />

im(3,5,1) = rr3c5; im(3,6,1) = rr3c6; im(3,7,1) = rr3c7; im(3,8,1) = rr3c8;<br />

im(4,1,1) = rr4c1; im(4,2,1) = rr4c2; im(4,3,1) = rr4c3; im(4,4,1) = rr4c4;<br />

im(4,5,1) = rr4c5; im(4,6,1) = rr4c6; im(4,7,1) = rr4c7; im(4,8,1) = rr4c8;<br />

im(5,1,1) = rr5c1; im(5,2,1) = rr5c2; im(5,3,1) = rr5c3; im(5,4,1) = rr5c4;<br />

im(5,5,1) = rr5c5; im(5,6,1) = rr5c6; im(5,7,1) = rr5c7; im(5,8,1) = rr5c8;<br />

im(6,1,1) = rr6c1; im(6,2,1) = rr6c2; im(6,3,1) = rr6c3; im(6,4,1) = rr6c4;<br />

im(6,5,1) = rr6c5; im(6,6,1) = rr6c6; im(6,7,1) = rr6c7; im(6,8,1) = rr6c8;<br />

im(7,1,1) = rr7c1; im(7,2,1) = rr7c2; im(7,3,1) = rr7c3; im(7,4,1) = rr7c4;<br />

im(7,5,1) = rr7c5; im(7,6,1) = rr7c6; im(7,7,1) = rr7c7; im(7,8,1) = rr7c8;<br />

im(8,1,1) = rr8c1; im(8,2,1) = rr8c2; im(8,3,1) = rr8c3; im(8,4,1) = rr8c4;<br />

im(8,5,1) = rr8c5; im(8,6,1) = rr8c6; im(8,7,1) = rr8c7; im(8,8,1) = rr8c8;<br />

%GREEN PLANE COORDINATES<br />

im(1,1,2) = gr1c1; im(1,2,2) = gr1c2; im(1,3,2) = gr1c3; im(1,4,2) = gr1c4;<br />

im(1,5,2) = gr1c5; im(1,6,2) = gr1c6; im(1,7,2) = gr1c7; im(1,8,2) = gr1c8;<br />

im(2,1,2) = gr2c1; im(2,2,2) = gr2c2; im(2,3,2) = gr2c3; im(2,4,2) = gr2c4;<br />

im(2,5,2) = gr2c5; im(2,6,2) = gr2c6; im(2,7,2) = gr2c7; im(2,8,2) = gr2c8;<br />

im(3,1,2) = gr3c1; im(3,2,2) = gr3c2; im(3,3,2) = gr3c3; im(3,4,2) = gr3c4;<br />

im(3,5,2) = gr3c5; im(3,6,2) = gr3c6; im(3,7,2) = gr3c7; im(3,8,2) = gr3c8;<br />

im(4,1,2) = gr4c1; im(4,2,2) = gr4c2; im(4,3,2) = gr4c3; im(4,4,2) = gr4c4;<br />

im(4,5,2) = gr4c5; im(4,6,2) = gr4c6; im(4,7,2) = gr4c7; im(4,8,2) = gr4c8;<br />

im(5,1,2) = gr5c1; im(5,2,2) = gr5c2; im(5,3,2) = gr5c3; im(5,4,2) = gr5c4;<br />

im(5,5,2) = gr5c5; im(5,6,2) = gr5c6; im(5,7,2) = gr5c7; im(5,8,2) = gr5c8;<br />

im(6,1,2) = gr6c1; im(6,2,2) = gr6c2; im(6,3,2) = gr6c3; im(6,4,2) = gr6c4;<br />

im(6,5,2) = gr6c5; im(6,6,2) = gr6c6; im(6,7,2) = gr6c7; im(6,8,2) = gr6c8;<br />



im(7,1,2) = gr7c1; im(7,2,2) = gr7c2; im(7,3,2) = gr7c3; im(7,4,2) = gr7c4;<br />

im(7,5,2) = gr7c5; im(7,6,2) = gr7c6; im(7,7,2) = gr7c7; im(7,8,2) = gr7c8;<br />

im(8,1,2) = gr8c1; im(8,2,2) = gr8c2; im(8,3,2) = gr8c3; im(8,4,2) = gr8c4;<br />

im(8,5,2) = gr8c5; im(8,6,2) = gr8c6; im(8,7,2) = gr8c7; im(8,8,2) = gr8c8;<br />

%BLUE PLANE COORDINATES<br />

im(1,1,3) = br1c1; im(1,2,3) = br1c2; im(1,3,3) = br1c3; im(1,4,3) = br1c4;<br />

im(1,5,3) = br1c5; im(1,6,3) = br1c6; im(1,7,3) = br1c7; im(1,8,3) = br1c8;<br />

im(2,1,3) = br2c1; im(2,2,3) = br2c2; im(2,3,3) = br2c3; im(2,4,3) = br2c4;<br />

im(2,5,3) = br2c5; im(2,6,3) = br2c6; im(2,7,3) = br2c7; im(2,8,3) = br2c8;<br />

im(3,1,3) = br3c1; im(3,2,3) = br3c2; im(3,3,3) = br3c3; im(3,4,3) = br3c4;<br />

im(3,5,3) = br3c5; im(3,6,3) = br3c6; im(3,7,3) = br3c7; im(3,8,3) = br3c8;<br />

im(4,1,3) = br4c1; im(4,2,3) = br4c2; im(4,3,3) = br4c3; im(4,4,3) = br4c4;<br />

im(4,5,3) = br4c5; im(4,6,3) = br4c6; im(4,7,3) = br4c7; im(4,8,3) = br4c8;<br />

im(5,1,3) = br5c1; im(5,2,3) = br5c2; im(5,3,3) = br5c3; im(5,4,3) = br5c4;<br />

im(5,5,3) = br5c5; im(5,6,3) = br5c6; im(5,7,3) = br5c7; im(5,8,3) = br5c8;<br />

im(6,1,3) = br6c1; im(6,2,3) = br6c2; im(6,3,3) = br6c3; im(6,4,3) = br6c4;<br />

im(6,5,3) = br6c5; im(6,6,3) = br6c6; im(6,7,3) = br6c7; im(6,8,3) = br6c8;<br />

im(7,1,3) = br7c1; im(7,2,3) = br7c2; im(7,3,3) = br7c3; im(7,4,3) = br7c4;<br />

im(7,5,3) = br7c5; im(7,6,3) = br7c6; im(7,7,3) = br7c7; im(7,8,3) = br7c8;<br />

im(8,1,3) = br8c1; im(8,2,3) = br8c2; im(8,3,3) = br8c3; im(8,4,3) = br8c4;<br />

im(8,5,3) = br8c5; im(8,6,3) = br8c6; im(8,7,3) = br8c7; im(8,8,3) = br8c8;<br />

mixed {8} = im;<br />

%The preceding image planes represent an 8-by-8 image.<br />

numberOfFrames = 9;<br />

for t = 1: numberOfFrames<br />

%Load image<br />

priortime = t-1;<br />

imwrite (mixed {t}, (sprintf ('mixed%d.bmp', priortime)));<br />

end<br />



--Appendix B FPGA Test setup<br />

library IEEE;<br />

use IEEE.STD_LOGIC_1164.ALL;<br />

use IEEE.STD_LOGIC_ARITH.ALL;<br />

use IEEE.STD_LOGIC_UNSIGNED.ALL;<br />

Library XilinxCoreLib;<br />

Library UNISIM;<br />

use UNISIM.vcomponents.all;<br />

---- Uncomment the following library declaration if instantiating<br />

---- any Xilinx primitives in this code.<br />

--library UNISIM;<br />

--use UNISIM.VComponents.all;<br />

entity top is<br />

Port (USER_CLK: in STD_LOGIC;<br />

DVI_RESET_B: in STD_LOGIC;<br />

--xps_tft_0_TFT_DVI_CLK_N_pin: STD_LOGIC;<br />

DVI_D0: out STD_LOGIC;<br />

DVI_D1: out STD_LOGIC;<br />

DVI_D2: out STD_LOGIC;<br />

DVI_D3: out STD_LOGIC;<br />

DVI_D4: out STD_LOGIC;<br />

DVI_D5: out STD_LOGIC;<br />

DVI_D6: out STD_LOGIC;<br />

DVI_D7: out STD_LOGIC;<br />

DVI_D8: out STD_LOGIC;<br />

DVI_D9: out STD_LOGIC;<br />

DVI_D10: out STD_LOGIC;<br />

DVI_D11: out STD_LOGIC;<br />

DVI_H: out STD_LOGIC;<br />

DVI_V: out STD_LOGIC;<br />

-- DVI_XCLK_N: out STD_LOGIC;<br />

DVI_DE: out STD_LOGIC;<br />

DVI_XCLK_P: out STD_LOGIC<br />

-- VREF: IN std_logic<br />

);<br />

end top;<br />


architecture Structural of top is<br />

--COMPONENT mem_of_pix_1024_f1x is<br />

----generic (width: integer; addr_width: integer);<br />

-- Port (clka: in STD_LOGIC;<br />

-- addra: in STD_LOGIC_VECTOR (9 downto 0);<br />

-- douta: out STD_LOGIC_VECTOR (23 downto 0));<br />

COMPONENT mem_of_aer_stream IS<br />

port (<br />

clka: IN std_logic;<br />



addra: IN std_logic_VECTOR (9 downto 0);<br />

douta: OUT std_logic_VECTOR (159 downto 0));<br />

END COMPONENT; --Representation of "mem_of_pix_1024_f1x" in AER format<br />

COMPONENT pixelclock_dcm<br />

Port (<br />

CLKIN_IN: IN std_logic;<br />

RST_IN: IN std_logic;<br />

CLKDV_OUT: OUT std_logic;<br />

CLKIN_IBUFG_OUT: OUT std_logic;<br />

CLK0_OUT: OUT std_logic;<br />

LOCKED_OUT: OUT std_logic<br />

);<br />

END COMPONENT;<br />

COMPONENT get_spike_on_from_clock_25MHz is<br />

Port (clock_25MHz: in STD_LOGIC;<br />

spike_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT get_pixel_from_spike_on_clock is<br />

Port (spike_clock: in STD_LOGIC;<br />

sim_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT avga<br />

PORT (<br />

clock_25MHz: IN std_logic;<br />

red: IN std_logic_vector (7 downto 0);<br />

green: IN std_logic_vector (7 downto 0);<br />

blue: IN std_logic_vector (7 downto 0);<br />

red_out: OUT std_logic_vector (7 downto 0);<br />

green_out: OUT std_logic_vector (7 downto 0);<br />

blue_out: OUT std_logic_vector (7 downto 0);<br />

horiz_sync_out: OUT std_logic;<br />

vert_sync_out: OUT std_logic;<br />

address: out std_logic_vector (9 downto 0); --address for ROM<br />

video_on: out std_logic<br />

--confirm viewable data<br />

);<br />

END COMPONENT;<br />

--Single<br />

COMPONENT AER_test_stream_to_pulse_count is<br />

Port ( ts_clock: in STD_LOGIC;<br />

AER_RGB_incoming_stream:<br />

in<br />

STD_LOGIC_VECTOR (159 downto 0);<br />

stream_red_pulse_count: out std_logic_vector (7<br />

downto 0); --Originally nine<br />

stream_green_pulse_count: out std_logic_vector (7<br />

downto 0); --Originally nine<br />



stream_blue_pulse_count: out std_logic_vector (7<br />

downto 0) --Originally nine<br />

);<br />

END COMPONENT;<br />

COMPONENT convert_hue_pulses_to_intensity is<br />

Port ( ts_clock: in STD_LOGIC;<br />

stream_red_pc: in std_logic_vector (7 downto 0); --<br />

Originally nine<br />

stream_green_pc: in std_logic_vector (7 downto 0); --<br />

Originally nine<br />

stream_blue_pc: in std_logic_vector (7 downto 0); --<br />

Originally nine<br />

large_red_counter: out std_logic_vector (7 downto 0);<br />

large_green_counter: out std_logic_vector (7 downto<br />

0);<br />

large_blue_counter: out std_logic_vector (7 downto 0)<br />

);<br />

END COMPONENT;<br />

COMPONENT dvi_clk_from_vga_clock is<br />

Port (user_clock: in STD_LOGIC;<br />

data_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT dvimux is<br />

Port (dvi_clk: IN STD_LOGIC;<br />

r_out: in STD_LOGIC_VECTOR (7 downto 0);<br />

g_out: in STD_LOGIC_VECTOR (7 downto 0);<br />

b_out: in STD_LOGIC_VECTOR (7 downto 0);<br />

--TFT_DVI_DATA: out STD_LOGIC_VECTOR (11<br />

downto 0)<br />

TFT_DVI_DATA_0: out std_logic;<br />

TFT_DVI_DATA_1: out std_logic;<br />

TFT_DVI_DATA_2: out std_logic;<br />

TFT_DVI_DATA_3: out std_logic;<br />

TFT_DVI_DATA_4: out std_logic;<br />

TFT_DVI_DATA_5: out std_logic;<br />

TFT_DVI_DATA_6: out std_logic;<br />

TFT_DVI_DATA_7: out std_logic;<br />

TFT_DVI_DATA_8: out std_logic;<br />

TFT_DVI_DATA_9: out std_logic;<br />

TFT_DVI_DATA_10: out std_logic;<br />

TFT_DVI_DATA_11: out std_logic<br />

);<br />

END COMPONENT;<br />

signal clock_25MHz: std_logic;<br />

signal red_out: std_logic_vector (7 downto 0);<br />

signal green_out: std_logic_vector (7 downto 0);<br />

signal blue_out: std_logic_vector (7 downto 0);<br />

signal red: std_logic_vector (7 downto 0);<br />

signal green: std_logic_vector (7 downto 0);<br />




signal blue: std_logic_vector (7 downto 0);<br />

signal addra: std_logic_vector (9 downto 0); -- (11 downto 0)<br />

signal address: std_logic_vector (9 downto 0);<br />

signal douta: std_logic_vector (23 downto 0);<br />

signal pixel_data: std_logic_vector (23 downto 0);<br />

signal video_on: std_logic;<br />

signal short_add: std_logic_vector (9 downto 0):= "0000000000";<br />

signal spike_clock: std_logic;<br />

signal sim_clock: std_logic;<br />

signal AER_RGB_stream: std_logic_vector (159 downto 0);<br />

signal red_hue_out: std_logic_vector (7 downto 0);<br />

signal green_hue_out: std_logic_vector (7 downto 0);<br />

signal blue_hue_out: std_logic_vector (7 downto 0);<br />

signal red_intensity: std_logic_vector (7 downto 0);<br />

signal green_intensity: std_logic_vector (7 downto 0);<br />

signal blue_intensity: std_logic_vector (7 downto 0);<br />

signal DVI_DATA: std_logic_vector (11 downto 0);<br />

signal to_dvi_clk: std_logic;<br />

--------------------------------------------------<br />

constant COLUMNLENGTH: std_logic_vector (5 downto 0):= "100000";<br />

--VGA standard: 640 displayable pixels (800 actual)<br />

constant offset: std_logic_vector (5 downto 0):= "001000"; -- (11 downto 0)<br />

constant cue_dot: std_logic_vector (5 downto 0):= "001000";<br />

constant pixcount: std_logic_vector (9 downto 0):= "1111111111";<br />

signal count: integer := 0;
begin --architecture body (the keyword was displaced in the source transcription)

Inst_pixelclock_dcm: pixelclock_dcm<br />

port map (<br />

CLKIN_IN => USER_CLK,<br />

RST_IN => not DVI_RESET_B,<br />

CLKDV_OUT => clock_25MHz<br />

);<br />

Inst_avga: avga PORT MAP (<br />

clock_25MHz => clock_25MHz,<br />

red => red,<br />

green => green,<br />

blue => blue,<br />

red_out => red_out,<br />

green_out => green_out,<br />

blue_out => blue_out,<br />

horiz_sync_out => DVI_H,<br />

vert_sync_out => DVI_V,<br />

address => address,<br />

video_on => video_on<br />

);<br />

-- DVI_XCLK_N
DVI_XCLK_P <= clock_25MHz;
--(A span was truncated from the source listing here; the port map below is
--reconstructed from the get_spike_on_from_clock_25MHz component declaration.)
Inst_get_spike_on_from_clock_25MHz: get_spike_on_from_clock_25MHz
Port map (clock_25MHz => clock_25MHz,
spike_clock => spike_clock
);

Inst_get_pixel_from_spike_on_clock: get_pixel_from_spike_on_clock<br />

Port map (spike_clock => spike_clock,<br />

sim_clock => sim_clock<br />

);<br />

--Inst_mem_of_pix_1024_f1x: mem_of_pix_1024_f1x PORT MAP (<br />

----generic (width: integer; addr_width: integer);<br />

--clka => clock_25MHz,<br />

--addra => addra,<br />

--douta => pixel_data<br />

--);<br />

Inst_mem_of_aer_stream: mem_of_aer_stream PORT MAP (<br />

clka => clock_25MHz,<br />

addra => address,<br />

douta => AER_RGB_stream<br />

);<br />

--architecture behave of my_block is<br />

--begin<br />

Mem_read: process (clock_25MHz) is
Begin
-- if (DVI_RESET_B = '0') then
-- addra
End process;
--(The body of Mem_read and the port map below were truncated in the source
--listing; the instantiation is reconstructed from the AER_test_stream_to_pulse_count
--component declaration and the signal names used further down.)
Inst_AER_test_stream_to_pulse_count: AER_test_stream_to_pulse_count
Port map (ts_clock => spike_clock,
AER_RGB_incoming_stream => AER_RGB_stream,
stream_red_pulse_count => red_hue_out,
stream_green_pulse_count => green_hue_out,
stream_blue_pulse_count => blue_hue_out --stream_blue_pc
);



Inst_convert_hue_pulses_to_intensity: convert_hue_pulses_to_intensity<br />

Port map ( ts_clock => spike_clock,<br />

--spike_clock<br />

stream_red_pc => red_hue_out,<br />

stream_green_pc => green_hue_out,<br />

stream_blue_pc => blue_hue_out,<br />

large_red_counter => red_intensity,<br />

large_green_counter => green_intensity,<br />

large_blue_counter => blue_intensity<br />

);<br />

--(A span was truncated from the source listing here; the dvimux instantiation
--header is reconstructed from the component declaration, with r_out, g_out and
--b_out assumed to be driven by the recovered intensity signals.)
Inst_dvimux: dvimux
Port map (dvi_clk => to_dvi_clk,
r_out => red_intensity,
g_out => green_intensity,
b_out => blue_intensity,
--TFT_DVI_DATA => DVI_DATA

TFT_DVI_DATA_0 => DVI_D0,<br />

TFT_DVI_DATA_1 => DVI_D1,<br />

TFT_DVI_DATA_2 => DVI_D2,<br />

TFT_DVI_DATA_3 => DVI_D3,<br />

TFT_DVI_DATA_4 => DVI_D4,<br />

TFT_DVI_DATA_5 => DVI_D5,<br />

TFT_DVI_DATA_6 => DVI_D6,<br />

TFT_DVI_DATA_7 => DVI_D7,<br />

TFT_DVI_DATA_8 => DVI_D8,<br />

TFT_DVI_DATA_9 => DVI_D9,<br />

TFT_DVI_DATA_10 => DVI_D10,<br />

TFT_DVI_DATA_11 => DVI_D11<br />

);<br />

end Structural;<br />

----------------------------------------------------------------------------------<br />

-- Create Date: 15:04:17 11/25/2008<br />

-- Design Name:<br />

-- Module Name: dvimux - Behavioral<br />

library IEEE;<br />

use IEEE.STD_LOGIC_1164.ALL;<br />

use IEEE.STD_LOGIC_ARITH.ALL;<br />



use IEEE.STD_LOGIC_UNSIGNED.ALL;<br />

---- Uncomment the following library declaration if instantiating<br />

---- any Xilinx primitives in this code.<br />

--library UNISIM;<br />

--use UNISIM.VComponents.all;<br />

entity dvimux is<br />

Port (dvi_clk: IN STD_LOGIC;<br />

r_out: in STD_LOGIC_VECTOR (7 downto 0);<br />

g_out: in STD_LOGIC_VECTOR (7 downto 0);<br />

b_out: in STD_LOGIC_VECTOR (7 downto 0);<br />

--TFT_DVI_DATA: out STD_LOGIC_VECTOR (11<br />

downto 0)<br />

TFT_DVI_DATA_0: out std_logic;<br />

TFT_DVI_DATA_1: out std_logic;<br />

TFT_DVI_DATA_2: out std_logic;<br />

TFT_DVI_DATA_3: out std_logic;<br />

TFT_DVI_DATA_4: out std_logic;<br />

TFT_DVI_DATA_5: out std_logic;<br />

TFT_DVI_DATA_6: out std_logic;<br />

TFT_DVI_DATA_7: out std_logic;<br />

TFT_DVI_DATA_8: out std_logic;<br />

TFT_DVI_DATA_9: out std_logic;<br />

TFT_DVI_DATA_10: out std_logic;<br />

TFT_DVI_DATA_11: out std_logic<br />

);<br />

end dvimux;<br />

architecture Behavioral of dvimux is<br />

begin<br />

produce_12_bit_word_twice: process (dvi_clk, r_out, g_out, b_out)<br />

variable TFT_DVI_DATA: std_logic_vector (11 downto 0);<br />

begin<br />

if dvi_clk = '1' then<br />

--if (dvi_clk'event) AND (dvi_clk = '1') then<br />

--if rising_edge (dvi_clk) then<br />

--TFT_DVI_DATA:= r_out (7 downto 0) & g_out (7 downto 4);<br />

TFT_DVI_DATA:= r_out (7 downto 6) & g_out (7 downto 6) & b_out (7 downto 6)<br />

& r_out (5 downto 4) & g_out (5 downto 4) & b_out (5 downto 4);<br />

--elsif (dvi_clk'event) AND (dvi_clk = '0') then<br />

elsif dvi_clk = '0' then<br />

--elsif falling_edge (dvi_clk) then --not supported in the current software release!<br />

--TFT_DVI_DATA:= g_out (3 downto 0) & b_out (7 downto 0);<br />

TFT_DVI_DATA:= r_out (3 downto 2) & g_out (3 downto 2) & b_out (3 downto 2)<br />

& r_out (1 downto 0) & g_out (1 downto 0) & b_out (1 downto 0);<br />

--(The end of this process was truncated in the source listing; the assignments of
--the multiplexed word onto the twelve DVI data pins and the closing statements are
--reconstructed.)
end if;
TFT_DVI_DATA_0 <= TFT_DVI_DATA (0);
TFT_DVI_DATA_1 <= TFT_DVI_DATA (1);
TFT_DVI_DATA_2 <= TFT_DVI_DATA (2);
TFT_DVI_DATA_3 <= TFT_DVI_DATA (3);
TFT_DVI_DATA_4 <= TFT_DVI_DATA (4);
TFT_DVI_DATA_5 <= TFT_DVI_DATA (5);
TFT_DVI_DATA_6 <= TFT_DVI_DATA (6);
TFT_DVI_DATA_7 <= TFT_DVI_DATA (7);
TFT_DVI_DATA_8 <= TFT_DVI_DATA (8);
TFT_DVI_DATA_9 <= TFT_DVI_DATA (9);
TFT_DVI_DATA_10 <= TFT_DVI_DATA (10);
TFT_DVI_DATA_11 <= TFT_DVI_DATA (11);
end process;
end Behavioral;

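To make the multiplexing in dvimux easier to follow, the MATLAB fragment below is an illustrative check only (not part of the FPGA project, with arbitrary example channel values): it packs one 24-bit pixel into the same two 12-bit words, the bit-pairs 7:6 and 5:4 of R, G and B in the first word and the pairs 3:2 and 1:0 in the second, and confirms that the pairs reassemble into the original channels.

%Illustration only: the dvimux 12-bit DDR packing checked in MATLAB.
r = 201; g = 63; b = 142;                       %example 8-bit channel values
pair = @(v, lo) bitand (bitshift (v, -lo), 3);  %two bits, positions lo+1 downto lo
word1 = [pair(r, 6) pair(g, 6) pair(b, 6) pair(r, 4) pair(g, 4) pair(b, 4)];
word2 = [pair(r, 2) pair(g, 2) pair(b, 2) pair(r, 0) pair(g, 0) pair(b, 0)];
rebuild = @(c) word1(c)*64 + word1(c + 3)*16 + word2(c)*4 + word2(c + 3);
assert (isequal ([rebuild(1) rebuild(2) rebuild(3)], [r g b]))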

--Appendix C FPGA Sender chip<br />

library IEEE;<br />

use IEEE.STD_LOGIC_1164.ALL;<br />

use IEEE.STD_LOGIC_ARITH.ALL;<br />

use IEEE.STD_LOGIC_UNSIGNED.ALL;<br />

---- Uncomment the following library declaration if instantiating<br />

---- any Xilinx primitives in this code.<br />

--library UNISIM;<br />

--use UNISIM.VComponents.all;<br />

entity top_wrapper is<br />

Port (PB_ENTER: in STD_LOGIC;<br />

-- SYSTEM_CLOCK: in STD_LOGIC;<br />

USER_CLK: in STD_LOGIC;<br />

-- IO_L13P_11: out STD_LOGIC_VECTOR (11 downto 0);--"L33"(1) [1]<br />

--IO_L13N_11: out STD_LOGIC_VECTOR (11 downto 0);--"M32"(2) [2]<br />

--IO_L14P_11: out STD_LOGIC_VECTOR (11 downto 0)--"P34"(3) [3]<br />

IO_L14P_11: out STD_LOGIC_VECTOR (21 downto 0)--"P34"(3) [3]<br />

);<br />

end top_wrapper;<br />

architecture Behavioral of top_wrapper is<br />

COMPONENT mem_of_16_stream is<br />

Port (clka: in STD_LOGIC;<br />

addra: in STD_LOGIC_VECTOR (3 downto 0);<br />

douta: out STD_LOGIC_VECTOR (23 downto 0));<br />

END COMPONENT;<br />

COMPONENT pixelclock_dcm is<br />

port (CLKIN_IN : in std_logic;<br />

RST_IN : in std_logic;<br />

CLKDV_OUT : out std_logic;<br />

CLKIN_IBUFG_OUT: out std_logic;<br />

CLK0_OUT : out std_logic;<br />

LOCKED_OUT : out std_logic);<br />

END COMPONENT;<br />

COMPONENT get_partial_spike_clock_from_incoming_clock is<br />

Port (received_clock: in STD_LOGIC;<br />

out_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT get_spike_on_from_partial_spike_clock is<br />

Port (partial_spike_clock: in STD_LOGIC;<br />

spike_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT get_pixel_clock_from_partial_spike_clock is<br />

Port (partial_spike_clock: in STD_LOGIC;<br />

pixel_clock: out STD_LOGIC);<br />



END COMPONENT;<br />

COMPONENT pulsar --Intensity values to<br />

pulse count.<br />

Port (pc_clk: in std_logic; --pc_clk maps to "spike_clock"<br />

value: in STD_LOGIC_VECTOR (7 downto 0); --8 bits<br />

ZZ: out std_logic_vector (7 downto 0)); --8 bits<br />

END COMPONENT;<br />

--The following component: "pulse_count_to_pc_stream" is the OUTGOING signal<br />

COMPONENT pulse_count_to_pc_stream is<br />

Port (pc_clk: in STD_LOGIC;<br />

four_bit_address: in STD_LOGIC_VECTOR (3 downto 0);<br />

R_count: in STD_LOGIC_VECTOR (7 downto 0);<br />

G_count: in STD_LOGIC_VECTOR (7 downto 0);<br />

B_count: in STD_LOGIC_VECTOR (7 downto 0);<br />

pc_stream: out STD_LOGIC_VECTOR (21 downto 0) --formerly (33/23 downto 0)<br />
); --Use "file_support_conventional_stream.vhd" to write to "colour_stream.csv"<br />

END COMPONENT;<br />

--dout: out std_logic_vector ((3*data_width) - 1 downto 0));<br />

constant clock_cycle: time: = 39.0625 ns; --25MHz has the periodic time 40ns<br />

--Simulation time will be: 2500000ns<br />

--signal show_p_data: std_logic_vector (23 downto 0);<br />

signal pixel_data: std_logic_vector (23 downto 0);<br />

signal incoming_clock: std_logic;<br />

signal incoming_clock_done: boolean: = not TRUE;<br />

signal partial_spike_clock: std_logic;<br />

signal spike_clock: std_logic;<br />

signal pixel_clock: std_logic;<br />

signal resits: std_logic;<br />

signal partial_clock_done: boolean: = not TRUE;<br />

signal address: std_logic_vector (3 downto 0):= "0000";<br />

signal addra: std_logic_vector (3 downto 0):= "0000";<br />

constant reset_time: time: = 1*clock_cycle;<br />

--Declaring the raw colour signals<br />

signal red: std_logic_vector (7 downto 0);<br />

signal green: std_logic_vector (7 downto 0);<br />

signal blue: std_logic_vector (7 downto 0);<br />

--Declaring the colour signals in terms of their pulse count<br />

signal red_pulse_count:std_logic_vector (7 downto 0);<br />

signal green_pulse_count:std_logic_vector (7 downto 0);<br />

signal blue_pulse_count:std_logic_vector (7 downto 0);<br />

signal pulse_count_stream: std_logic_vector (21 downto 0); --formerly (33/23 downto 0)<br />

begin<br />

--tb_clk: process is<br />

--begin<br />

-- incoming_clock
-- else
-- resit '1',
--(The pixelclock_dcm instantiation header was truncated in the source listing and
--is reconstructed here from the component declaration; the RST_IN mapping is an
--assumption.)
Inst_pixelclock_dcm: pixelclock_dcm
Port map (CLKIN_IN => USER_CLK,
RST_IN => not PB_ENTER,
CLKDV_OUT => incoming_clock

-- CLKIN_IBUFG_OUT =>,<br />

-- CLK0_OUT =>,<br />

-- LOCKED_OUT =><br />

);<br />

Inst_get_partial_spike_clock_from_incoming_clock:<br />

get_partial_spike_clock_from_incoming_clock<br />

Port map (received_clock => incoming_clock,<br />

out_clock => partial_spike_clock<br />

);<br />

Inst_get_spike_on_from_partial_spike_clock:<br />

get_spike_on_from_partial_spike_clock<br />

Port map (partial_spike_clock => partial_spike_clock,<br />

spike_clock => spike_clock<br />

);<br />

Inst_get_pixel_clock_from_partial_spike_clock:<br />

get_pixel_clock_from_partial_spike_clock<br />

Port map (partial_spike_clock => partial_spike_clock,<br />

pixel_clock => pixel_clock<br />

);<br />

Mem_read: process (pixel_clock, address) is<br />

Begin<br />

-- if (clock'event) AND (clock = '1') then<br />

if rising_edge (pixel_clock) then --As each pixel relates to an addressed `packet'.
--(The body of this process and the memory port map below were truncated in the
--source listing; they are reconstructed from the surrounding declarations, and the
--clka mapping is an assumption.)
address <= address + 1;
end if;
End process;
Inst_mem_of_16_stream: mem_of_16_stream PORT MAP (
clka => pixel_clock,
addra => address,
douta => pixel_data
);

Inst_pulsar_red: pulsar PORT MAP (<br />

pc_clk => spike_clock,<br />

value => pixel_data (23 downto 16), --Red<br />



ZZ => red_pulse_count<br />

);<br />

Inst_pulsar_green: pulsar PORT MAP (<br />

pc_clk => spike_clock,<br />

value => pixel_data (15 downto 8), --Green<br />

ZZ => green_pulse_count<br />

);<br />

Inst_pulsar_blue: pulsar PORT MAP (<br />

pc_clk => spike_clock,<br />

value => pixel_data (7 downto 0), --Blue<br />

ZZ => blue_pulse_count<br />

);<br />

--The following component: "pulse_count_to_pc_stream" is the OUTGOING signal<br />

Inst_pulse_count_to_pc_stream: pulse_count_to_pc_stream<br />

Port map (pc_clk => spike_clock,<br />

four_bit_address => address,<br />

R_count => red_pulse_count,<br />

G_count => green_pulse_count,<br />

B_count => blue_pulse_count,<br />

pc_stream => pulse_count_stream -- Last eighteen bits are colour information<br />

--That is after the address information<br />

);<br />
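The pc_stream word assembled above therefore carries a pixel address followed by the red, green and blue pulse counts. A hedged MATLAB sketch of that packing is given below; the field widths (a 4-bit address and three 6-bit counts, 22 bits in total) are taken from the comments on pulse_count_to_pc_stream and are assumptions rather than a definitive bit map of the VHDL, and the field values are arbitrary examples.

%Illustration only: assumed layout of the 22-bit pc_stream word.
addr = 9; Rc = 50; Gc = 25; Bc = 12;                 %example address and pulse counts
word = bitshift (addr, 18) + bitshift (Rc, 12) + bitshift (Gc, 6) + Bc;
fprintf ('pc_stream = %s (22 bits)\n', dec2bin (word, 22));
%Unpacking recovers the original fields
assert (bitshift (word, -18) == addr && bitand (bitshift (word, -12), 63) == Rc)
assert (bitand (bitshift (word, -6), 63) == Gc && bitand (word, 63) == Bc)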

-- red
--(The end of this architecture was truncated in the source listing; the output
--assignment and closing statement are reconstructed from the entity port list.)
IO_L14P_11 <= pulse_count_stream;
end Behavioral;



--Appendix D FPGA Receiver chip<br />

library IEEE;<br />

use IEEE.STD_LOGIC_1164.ALL;<br />

--use IEEE.STD_LOGIC_ARITH.ALL;<br />

--use IEEE.STD_LOGIC_UNSIGNED.ALL;<br />

use IEEE.STD_LOGIC_SIGNED.ALL;<br />

-- Uncomment the following library declaration if instantiating<br />

---- any Xilinx primitives in this code.<br />

--library UNISIM;<br />

--use UNISIM.VComponents.all;<br />

--Replacing 146 occurrences of (19 downto 0) with (3 downto 0)!<br />

--"short_AER_RGB_stream"<br />

use IEEE.numeric_std.ALL; --to_signed is used with this library<br />

entity top_wrapper is<br />

Port (USER_CLK: in STD_LOGIC;<br />

IO_L13P_11: out STD_LOGIC_VECTOR (3 downto 0);--"L33"(1) [1]<br />

IO_L13N_11: out STD_LOGIC_VECTOR (3 downto 0);--"M32"(2) [2]<br />

IO_L14P_11: out STD_LOGIC_VECTOR (3 downto 0);--"P34"(3) [3]<br />

IO_L13P_12: out STD_LOGIC_VECTOR (3 downto 0);--"H5"(1) [4]<br />

IO_L13N_12: out STD_LOGIC_VECTOR (3 downto 0);--"G5"(2) [5]<br />

IO_L14P_12: out STD_LOGIC_VECTOR (3 downto 0);--"R11"(3) [6]<br />

IO_L15P_12: out STD_LOGIC_VECTOR (3 downto 0);--"F5"(4) [7]<br />

IO_L15N_12: out STD_LOGIC_VECTOR (3 downto 0);--"F6"(5) [8]<br />

IO_L16P_12: out STD_LOGIC_VECTOR (3 downto 0);--"T10"(6) [9]<br />

IO_L16N_12: out STD_LOGIC_VECTOR (3 downto 0);--"T11"(7) [10]<br />

IO_L17P_12: out STD_LOGIC_VECTOR (3 downto 0);--"G6"(8) [11]<br />

IO_L17N_12: out STD_LOGIC_VECTOR (3 downto 0);--"G7"(9) [12]<br />

IO_L18P_12: out STD_LOGIC_VECTOR (3 downto 0);--"T9"(10) [13]<br />

IO_L18N_12: out STD_LOGIC_VECTOR (3 downto 0);--"U10"(11) [14]<br />

IO_L19P_12: out STD_LOGIC_VECTOR (3 downto 0);--"E6"(12) [15]<br />

IO_L19N_12: out STD_LOGIC_VECTOR (3 downto 0);--"E7"(13) [16]<br />

IO_L13P_13: out STD_LOGIC_VECTOR (3 downto 0);--“AK34” (1) [17]<br />

IO_L13N_13: out STD_LOGIC_VECTOR (3 downto 0);--“AK33” (2) [18]<br />

IO_L14P_13: out STD_LOGIC_VECTOR (3 downto 0);--“AG32” (3) [19]<br />

IO_L15P_13: out STD_LOGIC_VECTOR (3 downto 0);--“AJ32” (4) [20]<br />

IO_L15N_13: out STD_LOGIC_VECTOR (3 downto 0);--“AK32” (5) [21]<br />

IO_L16P_13: out STD_LOGIC_VECTOR (3 downto 0);--“AL34” (6) [22]<br />

IO_L16N_13: out STD_LOGIC_VECTOR (3 downto 0);--“AL33” (7) [23]<br />

IO_L17P_13: out STD_LOGIC_VECTOR (3 downto 0);--“AM33” (8) [24]<br />

IO_L17N_13: out STD_LOGIC_VECTOR (3 downto 0);--“AM32” (9) [25]<br />

IO_L18P_13: out STD_LOGIC_VECTOR (3 downto 0);--“AN34” (10) [26]<br />

IO_L18N_13: out STD_LOGIC_VECTOR (3 downto 0);--“AN33” (11) [27]<br />

IO_L19P_13: out STD_LOGIC_VECTOR (3 downto 0);--“AN32” (12) [28]<br />

IO_L19N_13: out STD_LOGIC_VECTOR (3 downto 0);--“AP32” (13) [29]<br />

IO_L13P_15: out STD_LOGIC_VECTOR (3 downto 0);--“T31” (1) [30]<br />

IO_L13N_15: out STD_LOGIC_VECTOR (3 downto 0);--“R31” (2) [31]<br />

IO_L14P_15: out STD_LOGIC_VECTOR (3 downto 0);--“U30” (3) [32]<br />

IO_L15P_15: out STD_LOGIC_VECTOR (3 downto 0);--“T28” (4) [33]<br />

IO_L15N_15: out STD_LOGIC_VECTOR (3 downto 0);--“T29” (5) [34]<br />

IO_L16P_15: out STD_LOGIC_VECTOR (3 downto 0);--“U27” (6) [35]<br />


IO_L16N_15: out STD_LOGIC_VECTOR (3 downto 0);--“U28” (7) [36]<br />

IO_L17P_15: out STD_LOGIC_VECTOR (3 downto 0);--“R26” (8) [37]<br />

IO_L17N_15: out STD_LOGIC_VECTOR (3 downto 0);--“R27” (9) [38]<br />

IO_L18P_15: out STD_LOGIC_VECTOR (3 downto 0);--“U26” (10) [39]<br />

IO_L18N_15: out STD_LOGIC_VECTOR (3 downto 0);--“T26” (11) [40]<br />

IO_L19P_15: out STD_LOGIC_VECTOR (3 downto 0);--“U25” (12) [41]<br />

IO_L19N_15: out STD_LOGIC_VECTOR (3 downto 0);--“T25” (13) [42]<br />

IO_L13P_17: out STD_LOGIC_VECTOR (3 downto 0);--“AD30” (1) [43]<br />

IO_L13N_17: out STD_LOGIC_VECTOR (3 downto 0);--"AC29}"(2) [44]<br />

IO_L14P_17: out STD_LOGIC_VECTOR (3 downto 0);--"AF31"(3) [45]<br />

IO_L15P_17: out STD_LOGIC_VECTOR (3 downto 0);--"AE29"(4) [46]<br />

IO_L15N_17: out STD_LOGIC_VECTOR (3 downto 0);--"AD29"(5) [47]<br />

IO_L16P_17: out STD_LOGIC_VECTOR (3 downto 0)--"AJ31"(6) [48]<br />

--IO_L13P_18: out STD_LOGIC_VECTOR (999 downto 0);--"Y11" {1}<br />

--IO_L13P_19: out STD_LOGIC_VECTOR (999 downto 0);--"K28" [215]<br />

--IO_L13N_19: out STD_LOGIC_VECTOR (999 downto 0);--"L28" {3}<br />

--IO_L14P_19: out STD_LOGIC_VECTOR (999 downto 0);--"K27" [215]<br />

--IO_L15P_19: out STD_LOGIC_VECTOR (999 downto 0);--"M28" [215]<br />

--IO_L17P_20: out STD_LOGIC_VECTOR (999 downto 0);--"E12" {6}<br />

--IO_L17N_20: out STD_LOGIC_VECTOR (999 downto 0);--"E13" {7}<br />

--IO_L18P_20: out STD_LOGIC_VECTOR (999 downto 0);--"N10" {8}<br />

--IO_L18N_20: out STD_LOGIC_VECTOR (999 downto 0); --"N9" {9}<br />

--IO_L19P_20: out STD_LOGIC_VECTOR (999 downto 0);--"F13" {10}<br />

--IO_L19N_20: out STD_LOGIC_VECTOR (999 downto 0);--"G13" {11}<br />

--IO_L13P_21: out STD_LOGIC_VECTOR (999 downto 0);--"AF24" {12}<br />

--IO_L13N_21: out STD_LOGIC_VECTOR (999 downto 0);--"AG25" {13}<br />

--IO_L14P_21: out STD_LOGIC_VECTOR (999 downto 0);--"AG27" {14}<br />

--IO_L15P_21: out STD_LOGIC_VECTOR (999 downto 0);--"AF25" {15}<br />

--IO_L15N_21: out STD_LOGIC_VECTOR (999 downto 0);--"AF26" {16}<br />

--IO_L16P_21: out STD_LOGIC_VECTOR (999 downto 0);--"AE27" {17}<br />

--IO_L16N_21: out STD_LOGIC_VECTOR (999 downto 0);--"AE26" {18}<br />

--IO_L17P_21: out STD_LOGIC_VECTOR (999 downto 0);--"AC25" {19}<br />

--IO_L17N_21: out STD_LOGIC_VECTOR (999 downto 0);--"AC24" {20}<br />

--IO_L18P_21: out STD_LOGIC_VECTOR (999 downto 0);--"AD26" {21}<br />

);<br />

end top_wrapper;<br />

--1024 x 3 = 3072 i.e. three planes describing a 64 pixelcount picture<br />

--As a pixel consists of three planes, 150 potential pulses are required per pixel<br />

--That is 50 potential pulses per plane per pixel<br />
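One consistent reading of these comments, together with the clock_cycle constant of 39.0625 ns declared later in this appendix, is checked in the MATLAB fragment below. It is an illustrative calculation only: the assumption that the three colour planes are serviced in parallel, and the resulting 25 frames per second for a 32-by-32 scene, are inferred from the comments rather than stated as measured figures.

%Illustrative timing budget for the receiver (assumed figures, see above).
Tclock = 39.0625e-9;        %the clock_cycle constant, in seconds
Tspike = 20 * Tclock;       %781.25 ns, matching the "20*Tclock" comment further down
Tpixel = 50 * Tspike;       %50 pulse slots per colour plane (planes assumed parallel)
Tframe = 32 * 32 * Tpixel;  %a 32-by-32 scene
fprintf ('Tspike = %.2f ns, Tpixel = %.1f ns, frame time = %.0f ms (%.0f fps)\n', ...
    Tspike*1e9, Tpixel*1e9, Tframe*1e3, 1/Tframe);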

architecture Behavioral of top_wrapper is<br />

--type current integer range -70 uA to +70 uA<br />

-- units<br />

-- uA;<br />

-- mA = 1000uA;<br />

-- end units;<br />

--COMPONENT mem_of_256_short_aer_stream is<br />

----generic (width: integer; addr_width: integer);<br />

-- Port (clka: in STD_LOGIC;<br />

-- addra: in STD_LOGIC_VECTOR (7 downto 0);<br />

-- douta: out STD_LOGIC_VECTOR (25 downto 0));<br />


--END COMPONENT;--Pulse count representation<br />

COMPONENT mem_of_256_short_aer_stream is<br />

--generic (width: integer; addr_width: integer);<br />

Port (clka: in STD_LOGIC;<br />

addra: in STD_LOGIC_VECTOR (1 downto 0);<br />

douta: out STD_LOGIC_VECTOR (21 downto 0));<br />

END COMPONENT;--Pulse count representation<br />

COMPONENT pixelclock_dcm is<br />

port (CLKIN_IN : in std_logic;<br />

RST_IN : in std_logic;<br />

CLKDV_OUT : out std_logic;<br />

CLKIN_IBUFG_OUT: out std_logic;<br />

CLK0_OUT : out std_logic;<br />

LOCKED_OUT : out std_logic);<br />

END COMPONENT;<br />

COMPONENT get_partial_spike_clock_from_incoming_clock is<br />

Port (received_clock: in STD_LOGIC;<br />

out_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT get_pixel_clock_from_partial_spike_clock is<br />

Port (partial_spike_clock: in STD_LOGIC;<br />

pixel_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT get_spike_on_from_partial_spike_clock is<br />

Port (partial_spike_clock: in STD_LOGIC;<br />

spike_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT get_long_spike is<br />

Port (partial_spike_clock: in STD_LOGIC;<br />

long_spike_clock: out STD_LOGIC);<br />

END COMPONENT;<br />

COMPONENT preform_for_electrodes<br />

port (preform_clock: std_logic;<br />

incoming_stream: in std_logic_vector (153 downto<br />

0);--Four by four!<br />

--THE ADDRESSES FOLLOWING MUST REACH 4 ELECTRODES!<br />

red_with_address: out std_logic_vector (57 downto 0);<br />

green_with_address: out std_logic_vector (57 downto<br />

0);<br />

blue_with_address: out std_logic_vector (57 downto 0)<br />

--58*3 equals 174 bits<br />

-- out_with_rgb_address: out std_logic_vector<br />

(173 downto 0)<br />

);--Goes to "produce_colour_data_streams"<br />

END COMPONENT;<br />

COMPONENT produce_colour_data_streams --feeds "extend_data_streams"<br />

port (spike_data_clock: std_logic;<br />

red: in std_logic_vector (57 downto 0);<br />

green: in std_logic_vector (57 downto 0);<br />


blue: in std_logic_vector (57 downto 0);<br />

red_data_out: out std_logic_vector (49 downto 0);<br />

green_data_out: out std_logic_vector (49 downto 0);<br />

blue_data_out: out std_logic_vector (49 downto 0)<br />

);--picking out separate data<br />

END COMPONENT;<br />

COMPONENT extend_data_streams<br />

port (this_spike_clock: in std_logic;<br />

red_data_incoming: in STD_LOGIC_VECTOR (49 downto<br />

0);<br />

green_data_incoming: in STD_LOGIC_VECTOR (49<br />

downto 0);<br />

blue_data_incoming: in STD_LOGIC_VECTOR (49 downto<br />

0);<br />

extended_red: out STD_LOGIC_VECTOR (199 downto 0);<br />

extended_green: out STD_LOGIC_VECTOR (199 downto<br />

0);<br />

extended_blue: out STD_LOGIC_VECTOR (199 downto 0)<br />

);<br />

END COMPONENT; --Prior to forming output stream<br />

COMPONENT fully_extend_data_streams is<br />

Port (this_spike_clock: in STD_LOGIC;<br />

red_data_incoming: in STD_LOGIC_VECTOR (49 downto 0);<br />

green_data_incoming: in STD_LOGIC_VECTOR (49 downto 0);<br />

blue_data_incoming: in STD_LOGIC_VECTOR (49 downto 0);<br />

extended_red: out STD_LOGIC_VECTOR (999 downto 0);<br />

extended_green: out STD_LOGIC_VECTOR (999 downto 0);<br />

extended_blue: out STD_LOGIC_VECTOR (999 downto 0));<br />

END COMPONENT;<br />

COMPONENT file_support_fully_extended_data_streams is<br />

Port (a_pixel_clock: in STD_LOGIC;--pixel_clock<br />

red_pulse_stream_incoming: in STD_LOGIC_VECTOR (999 downto 0);<br />

green_pulse_stream_incoming: in STD_LOGIC_VECTOR (999 downto 0);<br />

blue_pulse_stream_incoming: in STD_LOGIC_VECTOR (999 downto 0));<br />

END COMPONENT;<br />

COMPONENT file_support_squeeze is<br />

Port (a_pixel_clock: in STD_LOGIC;--pixel_clock<br />

red_pulse_stream_incoming: in STD_LOGIC_VECTOR (7999 downto 0);<br />

green_pulse_stream_incoming: in STD_LOGIC_VECTOR (7999 downto 0);<br />

blue_pulse_stream_incoming: in STD_LOGIC_VECTOR (7999 downto 0));<br />

END COMPONENT;<br />

COMPONENT produce_wire_outputs is<br />

Port (wire_clock: in STD_LOGIC;<br />

biphasic_clock: in std_logic;<br />

red: in STD_LOGIC_VECTOR (999 downto 0);--from<br />

"outer_red_pulse_stream"<br />

green: in STD_LOGIC_VECTOR (999 downto 0);--from<br />

"outer_green_pulse_stream"<br />

blue: in STD_LOGIC_VECTOR (999 downto 0);--from<br />

"outer_blue_pulse_stream"<br />


--red_1: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "red_wire_1"<br />

--red_2: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "red_wire_2"<br />

--red_3: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "red_wire_3"<br />

--red_4: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "red_wire_4"<br />

--green_1: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "green_wire_1"<br />

--green_2: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "green_wire_2"<br />

--green_3: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "green_wire_3"<br />

--green_4: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "green_wire_4"<br />

--blue_1: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "blue_wire_1"<br />

--blue_2: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "blue_wire_2"<br />

--blue_3: out STD_LOGIC_VECTOR (999 downto 0);--feeds to "blue_wire_3"<br />

--blue_4: out STD_LOGIC_VECTOR (999 downto 0)--feeds to "blue_wire_4"<br />

red_1: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_1"<br />

red_2: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_2"<br />

red_3: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_3"<br />

red_4: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_4"<br />

red_5: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_4"<br />

red_6: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_6"<br />

red_7: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_7"<br />

red_8: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_8"<br />

red_9: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_9"<br />

red_10: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_10"<br />

red_11: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_11"<br />

red_12: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_12"<br />

red_13: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_13"<br />

red_14: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_14"<br />

red_15: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_15"<br />

red_16: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "red_wire_16"<br />

green_1: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_1"<br />

green_2: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_2"<br />

green_3: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_3"<br />

green_4: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_4"<br />

green_5: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_5"<br />

green_6: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_6"<br />

green_7: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_7"<br />

green_8: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_8"<br />

green_9: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_9"<br />

green_10: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_10"<br />

green_11: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_11"<br />

green_12: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_12"<br />

green_13: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_13"<br />

green_14: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_14"<br />

green_15: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_15"<br />

green_16: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "green_wire_16"<br />

blue_1: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_1"<br />

blue_2: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_2"<br />

blue_3: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_3"<br />

blue_4: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_4"<br />

blue_5: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_5"<br />

blue_6: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_6"<br />

blue_7: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_7"<br />


blue_8: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_8"<br />

blue_9: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_9"<br />

blue_10: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_10"<br />

blue_11: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_11"<br />

blue_12: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_12"<br />

blue_13: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_13"<br />

blue_14: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_14"<br />

blue_15: out STD_LOGIC_VECTOR (3 downto 0);--feeds to "blue_wire_15"<br />

blue_16: out STD_LOGIC_VECTOR (3 downto 0)--feeds to "blue_wire_16"<br />

);<br />

END COMPONENT; --produce_wire_outputs<br />

COMPONENT produce_fast_wire_outputs is<br />

Port (wire_clock: in STD_LOGIC;<br />

red: in STD_LOGIC_VECTOR (199 downto 0);--from<br />

"outer_red_pulse_stream"<br />

green: in STD_LOGIC_VECTOR (199 downto 0);--from<br />

"outer_green_pulse_stream"<br />

blue: in STD_LOGIC_VECTOR (199 downto 0);--from<br />

"outer_blue_pulse_stream"<br />

red_1: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "red_wire_1"<br />

red_2: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "red_wire_2"<br />

red_3: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "red_wire_3"<br />

red_4: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "red_wire_4"<br />

green_1: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "green_wire_1"<br />

green_2: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "green_wire_2"<br />

green_3: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "green_wire_3"<br />

green_4: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "green_wire_4"<br />

blue_1: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "blue_wire_1"<br />

blue_2: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "blue_wire_2"<br />

blue_3: out STD_LOGIC_VECTOR (199 downto 0);--feeds to "blue_wire_3"<br />

blue_4: out STD_LOGIC_VECTOR (199 downto 0)--feeds to "blue_wire_4"<br />

);<br />

END COMPONENT; --produce_fast_wire_outputs<br />

COMPONENT produce_bit_wire_outputs is<br />

Port (wire_clock: in STD_LOGIC;<br />

red: in STD_LOGIC_VECTOR (49 downto 0);--from<br />

"outer_red_pulse_stream"<br />

green: in STD_LOGIC_VECTOR (49 downto 0);--from<br />

"outer_green_pulse_stream"<br />

blue: in STD_LOGIC_VECTOR (49 downto 0);--from<br />

"outer_blue_pulse_stream"<br />

red_1: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "red_wire_1"<br />

red_2: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "red_wire_2"<br />

red_3: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "red_wire_3"<br />

red_4: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "red_wire_4"<br />

green_1: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "green_wire_1"<br />

green_2: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "green_wire_2"<br />

green_3: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "green_wire_3"<br />

green_4: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "green_wire_4"<br />

blue_1: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "blue_wire_1"<br />

blue_2: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "blue_wire_2"<br />


blue_3: out STD_LOGIC_VECTOR (49 downto 0);--feeds to "blue_wire_3"<br />

blue_4: out STD_LOGIC_VECTOR (49 downto 0)--feeds to "blue_wire_4"<br />

);<br />

END COMPONENT; --produce_bit_wire_outputs<br />

COMPONENT AER_test_stream_to_pulse_count is<br />

--feeds "file_support_intense_pulses.vhd" & hence "intense_pulses.dat<br />

generic (bit_width: integer: = 11); --Initially six, then altered to eleven.<br />

Port ( ts_clock: in STD_LOGIC;<br />

AER_RGB_incoming_stream:<br />

in<br />

STD_LOGIC_VECTOR (155 downto 0);<br />

stream_red_pulse_count: out std_logic_vector (0<br />

downto 0);<br />

stream_green_pulse_count: out std_logic_vector (0<br />

downto 0);<br />

stream_blue_pulse_count: out std_logic_vector (0<br />

downto 0);<br />

out_red_counter: out integer;<br />

out_green_counter: out integer;<br />

out_blue_pulse_counter: out integer<br />

);<br />

END COMPONENT;<br />

COMPONENT mapping_to_outgoing_addresses is<br />

Port (implant_clk: in STD_LOGIC; --pixel clock<br />

red_in: in STD_LOGIC_VECTOR (55 downto 0);<br />

green_in: in STD_LOGIC_VECTOR (55 downto 0);<br />

blue_in: in STD_LOGIC_VECTOR (55 downto 0);<br />

red_outgoing_address_stream: in STD_LOGIC_VECTOR (5<br />

downto 0);<br />

green_outgoing_address_stream: in STD_LOGIC_VECTOR<br />

(5 downto 0);<br />

blue_outgoing_address_stream: in STD_LOGIC_VECTOR<br />

(5 downto 0);<br />

red_implant: out STD_LOGIC_VECTOR (55 downto 0);<br />

green_implant: out STD_LOGIC_VECTOR (55 downto 0);<br />

blue_implant: out STD_LOGIC_VECTOR (55 downto 0));<br />

END COMPONENT;<br />

COMPONENT generate_outgoing_address_streams is<br />

--feeds "file_support_AER_outgoing_addresses.vhd" & hence<br />

"outgoing_addresses.dat"<br />

Port ( out_clock: in STD_LOGIC; --spike_clock<br />

red_out_address: out std_logic_vector (5 downto 0);<br />

green_out_address: out std_logic_vector (5 downto 0);<br />

blue_out_address: out std_logic_vector (5 downto 0)<br />

);<br />

END COMPONENT;<br />

COMPONENT file_support_AER_outgoing_addresses is<br />

port (clk: in std_logic; --Sim_clock<br />


red_outgoing_address: IN std_logic_vector (5 downto 0);<br />
green_outgoing_address: IN std_logic_vector (5 downto 0);<br />
blue_outgoing_address: IN std_logic_vector (5 downto 0)<br />
);<br />
END COMPONENT;<br />

COMPONENT get_rc_addresses_with_data is --Starting at the ZERO'TH single_address.<br />

Port (address_with_data: in STD_LOGIC_VECTOR (61 downto 0) ;<br />

row_address: out STD_LOGIC_VECTOR (5 downto 0);<br />

column_address: out STD_LOGIC_VECTOR (5 downto 0)<br />

); --Fed from "AER_sep_streams_with_incoming_addresses"<br />

END COMPONENT;<br />

COMPONENT file_support_wire_outputs is<br />

Port (an_pixel_clock: in STD_LOGIC;--pixel_clock<br />

in_red_wire_1: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_red_wire_2: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_red_wire_3: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_red_wire_4: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_green_wire_1: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_green_wire_2: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_green_wire_3: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_green_wire_4: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_blue_wire_1: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_blue_wire_2: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_blue_wire_3: in STD_LOGIC_VECTOR (49 downto 0);<br />

in_blue_wire_4: in STD_LOGIC_VECTOR (49 downto 0)<br />

);<br />

END COMPONENT; --end file_support_wire_outputs;<br />

COMPONENT pulse_count_to_AER_stream is<br />

Port (<br />

pcs_clk: in STD_LOGIC;<br />

single_address: in STD_LOGIC_VECTOR (3 downto 0); --short_AER_RGB_stream (19 downto 18)

R_pulse_count: in STD_LOGIC_VECTOR (5 downto 0); --short_AER_RGB_stream (17 downto 12)

G_pulse_count: in STD_LOGIC_VECTOR (5 downto 0); --short_AER_RGB_stream (11 downto 6)

B_pulse_count: in STD_LOGIC_VECTOR (5 downto 0); --short_AER_RGB_stream (5 downto 0)

AER_RGB_pulse_stream: out STD_LOGIC_VECTOR (153 downto 0)

);<br />

END COMPONENT;<br />
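--For reference, the packed word feeding this component is assumed (from the
--port-map comments further down this listing) to be laid out as:
--  bits 21 downto 18 : single_address (four bits)
--  bits 17 downto 12 : R_pulse_count  (six bits)
--  bits 11 downto  6 : G_pulse_count  (six bits)
--  bits  5 downto  0 : B_pulse_count  (six bits)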

subtype limited_current is integer range -70 to +70;<br />

--signal num: limited_current;<br />

--pixel time equals 39062.5 ns (@25fps), note that 25MHz has the periodic time of 40ns approx.

constant clock_cycle: time := 39.0625 ns; --formerly := 39062.5 ns; 25MHz has the periodic time of 40ns.

--note that 20*Tclock = 781.25 ns (Tspike at 5fps for a 32 by 32 scene)

constant reset_time: time := 1*clock_cycle;
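--Worked example of the timing figures above, assuming the 32 by 32 (1024-pixel)
--scene used elsewhere in this design: one frame at 25 fps lasts 1/25 s = 40,000,000 ns,
--and 40,000,000 ns / 1024 pixels = 39,062.5 ns per pixel; a 25 MHz clock has a period
--of 1/(25 MHz) = 40 ns; and 20 * 39.0625 ns = 781.25 ns, the Tspike figure quoted.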

signal pixel_total: integer := 16;

signal height_in_rows: integer := 2;

signal width_in_columns: integer := 2;

signal row_length: integer := 2;

signal clka: std_logic;<br />

signal addra: std_logic_vector (1 downto 0):= "00";<br />

signal address: std_logic_vector (1 downto 0):= "00";<br />

signal resits: std_logic;<br />

signal pixel_clock: std_logic;<br />

signal pixel_done: boolean := not TRUE;

signal resit_row_cluster_done: boolean := not TRUE;

signal incoming_clock: std_logic;

signal incoming_clock_done: boolean := not TRUE;

signal partial_spike_clock: std_logic;

--signal partial_clock_done: boolean := not TRUE;

signal resit_part_spike: std_logic;<br />

signal spike_clock: std_logic;<br />

signal resit_spike: std_logic;<br />

signal outerpulse_clock: std_logic;<br />

signal row_cluster_clock: std_logic;<br />

signal resit_outerpulse_spike: std_logic;<br />

signal sel: std_logic:= '0';<br />

----signal pixel_row_address: std_logic_vector (5 downto 0);<br />

----signal pixel_column_address: std_logic_vector (5 downto 0);<br />

----signal combined_pulse_stream: std_logic_vector (649 downto 0);<br />

signal AER_RGB_stream: std_logic_vector (153 downto 0);<br />

signal out_red_pc: std_logic_vector (9 downto 0);<br />

signal out_green_pc: std_logic_vector (9 downto 0);<br />

signal out_blue_pc: std_logic_vector (9 downto 0);<br />

signal short_AER_RGB_stream: std_logic_vector (23 downto 0);<br />

signal douta: STD_LOGIC_VECTOR (21 downto 0);<br />

signal red_pulse_data: std_logic_vector (49 downto 0);<br />

signal green_pulse_data: std_logic_vector (49 downto 0);<br />

signal blue_pulse_data: std_logic_vector (49 downto 0);<br />

signal stream_red_pc: std_logic_vector (9 downto 0);<br />

signal stream_green_pc: std_logic_vector (9 downto 0);<br />

signal stream_blue_pc: std_logic_vector (9 downto 0);<br />

signal clk: std_logic;<br />

signal out_red_counter: integer := 0;

signal out_green_counter: integer := 0;

signal out_blue_counter: integer := 0;

signal hue_red_pc: integer;<br />

signal hue_green_pc: integer;<br />

signal hue_blue_pc: integer;<br />

signal red_counter: std_logic_vector (9 downto 0);<br />

signal green_counter: std_logic_vector (9 downto 0);<br />

signal blue_counter: std_logic_vector (9 downto 0);<br />

signal stream_red_pulse_count: std_logic_vector (9 downto 0);<br />

signal stream_green_pulse_count: std_logic_vector (9 downto 0);<br />

signal stream_blue_pulse_count: std_logic_vector (9 downto 0);<br />

--For a two by two image the address bits allowed are two bits<br />

signal red_incoming_address: std_logic_vector (51 downto 0);<br />

signal green_incoming_address: std_logic_vector (51 downto 0);<br />

signal blue_incoming_address: std_logic_vector (51 downto 0);<br />

signal red_feed: std_logic_vector (59 downto 0);<br />

signal green_feed: std_logic_vector (59 downto 0);<br />

signal blue_feed: std_logic_vector (59 downto 0);<br />

signal rc_red_feed: std_logic_vector (61 downto 0);<br />

signal rc_green_feed: std_logic_vector (61 downto 0);<br />

signal rc_blue_feed: std_logic_vector (61 downto 0);<br />

signal red_with_rc_address: std_logic_vector (61 downto 0);<br />

signal green_with_rc_address: std_logic_vector (61 downto 0);<br />

signal blue_with_rc_address: std_logic_vector (61 downto 0);<br />

signal red_between_address: std_logic_vector (9 downto 0);<br />

signal green_between_address: std_logic_vector (9 downto 0);<br />

signal blue_between_address: std_logic_vector (9 downto 0);<br />

--Signals to implant<br />

signal red_data_preform: std_logic_vector (49 downto 0);<br />

signal green_data_preform: std_logic_vector (49 downto 0);<br />

signal blue_data_preform: std_logic_vector (49 downto 0);<br />

--58*3 equals 174 {three in an rgb packet}<br />

signal composite_addressed_stream: std_logic_vector (173 downto 0) := (others => '0');

--THE ADDRESSES FOLLOWING MUST REACH 4 ELECTRODES!<br />

signal red_1_4_addressed: std_logic_vector (57 downto 0);<br />

signal green_2_5_addressed: std_logic_vector (57 downto 0);<br />

signal blue_3_6_addressed: std_logic_vector (57 downto 0);<br />

signal red_data: std_logic_vector (49 downto 0);<br />

signal green_data: std_logic_vector (49 downto 0);<br />

signal blue_data: std_logic_vector (49 downto 0);<br />

signal red_pulse_stream: std_logic_vector (199 downto 0);<br />

signal green_pulse_stream: std_logic_vector (199 downto 0);<br />

signal blue_pulse_stream: std_logic_vector (199 downto 0);<br />

signal pic_size: std_logic_vector (5 downto 0):= "000000";--Six bits;<br />

signal qpulse_size: std_logic_vector (15 downto 0) := "0000000000000000"; --Sixteen bits

signal red_x_out: std_logic_vector (999 downto 0);<br />

signal green_y_out: std_logic_vector (999 downto 0);<br />

signal blue_z_out: std_logic_vector (999 downto 0);<br />

signal number_out: std_logic_vector (0 downto 0):="0";<br />

signal outer_red_pulse_stream: std_logic_vector (999 downto 0);<br />

signal outer_green_pulse_stream: std_logic_vector (999 downto 0);<br />

signal outer_blue_pulse_stream: std_logic_vector (999 downto 0);<br />

signal send_num_out: std_logic_vector (0 downto 0);<br />

signal send_num_unsigned_out: std_logic_vector (0 downto 0);<br />

signal zero_out: std_logic_vector (0 downto 0);<br />

--signal red_wire_1: std_logic_vector (999 downto 0);<br />

--signal red_wire_2: std_logic_vector (999 downto 0);<br />

--signal red_wire_3: std_logic_vector (999 downto 0);<br />

--signal red_wire_4: std_logic_vector (999 downto 0);<br />

--signal green_wire_1: std_logic_vector (999 downto 0);<br />

--signal green_wire_2: std_logic_vector (999 downto 0);<br />

--signal green_wire_3: std_logic_vector (999 downto 0);<br />

--signal green_wire_4: std_logic_vector (999 downto 0);<br />

--signal blue_wire_1: std_logic_vector (999 downto 0);<br />

--signal blue_wire_2: std_logic_vector (999 downto 0);<br />

--signal blue_wire_3: std_logic_vector (999 downto 0);<br />

--signal blue_wire_4: std_logic_vector (999 downto 0);<br />

--signal red_wire_1: std_logic_vector (199 downto 0);<br />

--signal red_wire_2: std_logic_vector (199 downto 0);<br />

--signal red_wire_3: std_logic_vector (199 downto 0);<br />

--signal red_wire_4: std_logic_vector (199 downto 0);<br />

--signal green_wire_1: std_logic_vector (199 downto 0);<br />

--signal green_wire_2: std_logic_vector (199 downto 0);<br />

--signal green_wire_3: std_logic_vector (199 downto 0);<br />

--signal green_wire_4: std_logic_vector (199 downto 0);<br />

--signal blue_wire_1: std_logic_vector (199 downto 0);<br />

--signal blue_wire_2: std_logic_vector (199 downto 0);<br />

--signal blue_wire_3: std_logic_vector (199 downto 0);<br />

--signal blue_wire_4: std_logic_vector (199 downto 0);<br />

--signal red_wire_1: std_logic_vector (3 downto 0);<br />

--signal red_wire_2: std_logic_vector (3 downto 0);<br />

--signal red_wire_3: std_logic_vector (3 downto 0);<br />

--signal red_wire_4: std_logic_vector (3 downto 0);<br />

--signal green_wire_1: std_logic_vector (3 downto 0);<br />

--signal green_wire_2: std_logic_vector (3 downto 0);<br />

--signal green_wire_3: std_logic_vector (3 downto 0);<br />

--signal green_wire_4: std_logic_vector (3 downto 0);<br />

--signal blue_wire_1: std_logic_vector (3 downto 0);<br />

--signal blue_wire_2: std_logic_vector (3 downto 0);<br />

--signal blue_wire_3: std_logic_vector (3 downto 0);<br />

--signal blue_wire_4: std_logic_vector (3 downto 0);<br />

signal red_wire_1: std_logic_vector (3 downto 0);<br />

signal red_wire_2: std_logic_vector (3 downto 0);<br />

signal red_wire_3: std_logic_vector (3 downto 0);<br />

signal red_wire_4: std_logic_vector (3 downto 0);<br />

signal red_wire_5: std_logic_vector (3 downto 0);<br />

signal red_wire_6: std_logic_vector (3 downto 0);<br />

signal red_wire_7: std_logic_vector (3 downto 0);<br />

signal red_wire_8: std_logic_vector (3 downto 0);<br />

signal red_wire_9: std_logic_vector (3 downto 0);<br />

signal red_wire_10: std_logic_vector (3 downto 0);<br />

signal red_wire_11: std_logic_vector (3 downto 0);<br />

signal red_wire_12: std_logic_vector (3 downto 0);<br />

signal red_wire_13: std_logic_vector (3 downto 0);<br />

signal red_wire_14: std_logic_vector (3 downto 0);<br />

signal red_wire_15: std_logic_vector (3 downto 0);<br />

signal red_wire_16: std_logic_vector (3 downto 0);<br />

signal green_wire_1: std_logic_vector (3 downto 0);<br />

signal green_wire_2: std_logic_vector (3 downto 0);<br />

signal green_wire_3: std_logic_vector (3 downto 0);<br />

signal green_wire_4: std_logic_vector (3 downto 0);<br />

signal green_wire_5: std_logic_vector (3 downto 0);<br />

signal green_wire_6: std_logic_vector (3 downto 0);<br />

signal green_wire_7: std_logic_vector (3 downto 0);<br />

signal green_wire_8: std_logic_vector (3 downto 0);<br />

signal green_wire_9: std_logic_vector (3 downto 0);<br />

signal green_wire_10: std_logic_vector (3 downto 0);<br />

signal green_wire_11: std_logic_vector (3 downto 0);<br />

signal green_wire_12: std_logic_vector (3 downto 0);<br />

signal green_wire_13: std_logic_vector (3 downto 0);<br />

signal green_wire_14: std_logic_vector (3 downto 0);<br />

signal green_wire_15: std_logic_vector (3 downto 0);<br />

signal green_wire_16: std_logic_vector (3 downto 0);<br />

signal blue_wire_1: std_logic_vector (3 downto 0);<br />

signal blue_wire_2: std_logic_vector (3 downto 0);<br />

signal blue_wire_3: std_logic_vector (3 downto 0);<br />

signal blue_wire_4: std_logic_vector (3 downto 0);<br />

signal blue_wire_5: std_logic_vector (3 downto 0);<br />

signal blue_wire_6: std_logic_vector (3 downto 0);<br />

signal blue_wire_7: std_logic_vector (3 downto 0);<br />

signal blue_wire_8: std_logic_vector (3 downto 0);<br />

signal blue_wire_9: std_logic_vector (3 downto 0);<br />

signal blue_wire_10: std_logic_vector (3 downto 0);<br />

signal blue_wire_11: std_logic_vector (3 downto 0);<br />

signal blue_wire_12: std_logic_vector (3 downto 0);<br />

signal blue_wire_13: std_logic_vector (3 downto 0);<br />

signal blue_wire_14: std_logic_vector (3 downto 0);<br />

signal blue_wire_15: std_logic_vector (3 downto 0);<br />

signal blue_wire_16: std_logic_vector (3 downto 0);<br />

begin<br />

--tb_clk: process is<br />

--begin<br />

-- incoming_clock


-- CLKIN_IBUFG_OUT =>,<br />

-- CLK0_OUT =>,<br />

-- LOCKED_OUT =><br />

);<br />

Inst_get_partial_spike_clock_from_incoming_clock:<br />

get_partial_spike_clock_from_incoming_clock<br />

Port map (received_clock => incoming_clock,<br />

out_clock => partial_spike_clock<br />

);<br />

Inst_get_spike_on_from_partial_spike_clock:<br />

get_spike_on_from_partial_spike_clock<br />

Port map (partial_spike_clock => partial_spike_clock,<br />

spike_clock => spike_clock<br />

);<br />

Inst_get_pixel_clock_from_partial_spike_clock:<br />

get_pixel_clock_from_partial_spike_clock<br />

Port map (partial_spike_clock => partial_spike_clock,<br />

pixel_clock => pixel_clock<br />

);<br />

Inst_get_long_spike: get_long_spike<br />

Port map (partial_spike_clock => partial_spike_clock,<br />

long_spike_clock => outerpulse_clock<br />

);<br />
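--Clock derivation, as instantiated above: the received incoming_clock is first
--divided down to partial_spike_clock, and spike_clock, pixel_clock and the long
--outerpulse_clock are then each derived from that partial_spike_clock.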

--Inst_mem_of_256_short_aer_stream: mem_of_256_short_aer_stream PORT MAP<br />

(<br />

-- clka => pixel_clock,<br />

-- addra => address,<br />

-- douta => short_AER_RGB_stream --pulse count format<br />

-- );<br />

Mem_read: process (pixel_clock, address) is<br />

Begin<br />

-- if (pixel_clock'event) AND (pixel_clock = '1') then<br />

if rising_edge (pixel_clock) then<br />

address short_AER_RGB_stream (21 downto 18),--four bits

R_pulse_count => short_AER_RGB_stream (17 downto 12),--six bits

G_pulse_count => short_AER_RGB_stream (11 downto 6),--six bits

B_pulse_count => short_AER_RGB_stream (5 downto 0),--six bits

AER_RGB_pulse_stream => AER_RGB_stream --RHS feeds out of component

);--Instance (Instantiation) of pulse_count_to_AER_stream<br />

Inst_preform_for_electrodes: preform_for_electrodes<br />

--Eight bits will represent 192(24*8) addresses OK<br />

--The spike_clock MUST be used here.<br />

--fed from "pulse_count_to_AER_stream"<br />

port map (preform_clock => spike_clock,<br />

incoming_stream => AER_RGB_stream,<br />

red_with_address => red_1_4_addressed,<br />

green_with_address => green_2_5_addressed,<br />

blue_with_address => blue_3_6_addressed<br />

--out_with_rgb_address =><br />

composite_addressed_stream -- {174_bits}<br />

);--Goes to "produce_colour_data_streams"<br />

Inst_produce_colour_data_streams: produce_colour_data_streams<br />

port map (spike_data_clock => spike_clock,<br />

red => red_1_4_addressed,--Red with address bits<br />

green => green_2_5_addressed,--Green with address bits<br />

blue => blue_3_6_addressed,--Blue with address bits<br />

red_data_out => red_data, --49 downto 0 {out on RHS}<br />

green_data_out => green_data, --49 downto 0 i.e. colour<br />

information only<br />

blue_data_out => blue_data --49 downto 0 i.e. colour<br />

information only<br />

);--Colour information feeds to "extend_data_streams" AND<br />

"fully_extend_data_streams"<br />

--From "extend_dat_streams" then to nowhere!<br />

--From "fully_extend_data_streams" to<br />

"change_data_streams_to_pulsed_format"<br />

Inst_fully_extend_data_streams: fully_extend_data_streams<br />

--feeds "change_data_streams_to_pulsed_format"<br />

port map (this_spike_clock => spike_clock,--NOT pixel_clock<br />

red_data_incoming => red_data,<br />

green_data_incoming => green_data,<br />

blue_data_incoming => blue_data,<br />

extended_red => outer_red_pulse_stream, --1000<br />

bits.<br />

extended_green => outer_green_pulse_stream, --1000<br />

bits.<br />

extended_blue => outer_blue_pulse_stream --1000<br />

bits.<br />

);--sends to `fully_extended.dat' via<br />

"file_support_fully_extended_data_streams"<br />

--Inst_extend_data_streams: extend_data_streams<br />

--port map (this_spike_clock => partial_spike_clock,--NOT pixel_clock<br />

-- red_data_incoming => red_data,<br />

-- green_data_incoming => green_data,<br />

-- blue_data_incoming => blue_data,<br />

-- extended_red => red_pulse_stream, --200 bits.<br />

-- extended_green => green_pulse_stream, --200 bits.<br />

-- extended_blue => blue_pulse_stream --200 bits.<br />

-- );--sends to `extended.dat' via<br />

"file_support_extend_data_streams"<br />

--Inst_produce_wire_outputs: produce_wire_outputs<br />

-- Port map (wire_clock => pixel_clock,--pixel_clock<br />

-- biphasic_clock => spike_clock,<br />

-- red => outer_red_pulse_stream,--1000 bits Originally "red_squeeze"<br />

-- green => outer_green_pulse_stream,--1000 bits<br />

-- blue => outer_blue_pulse_stream,--1000 bits<br />

--red_1 => red_wire_1,<br />

--red_2 => red_wire_2,<br />

--red_3 => red_wire_3,<br />

--red_4 => red_wire_4,<br />

--green_1 => green_wire_1,<br />

--green_2 => green_wire_2,<br />

--green_3 => green_wire_3,<br />

--green_4 => green_wire_4,<br />

--blue_1 => blue_wire_1,<br />

--blue_2 => blue_wire_2,<br />

--blue_3 => blue_wire_3,<br />

--blue_4 => blue_wire_4<br />

--);--Instance (Instantiation) of produce_wire_outputs<br />

Inst_produce_wire_outputs: produce_wire_outputs<br />

Port map (wire_clock => pixel_clock,--pixel_clock<br />

biphasic_clock => spike_clock,<br />

red => outer_red_pulse_stream,--1000 bits<br />

green => outer_green_pulse_stream,--1000 bits<br />

blue => outer_blue_pulse_stream,--1000 bits<br />

red_1 => red_wire_1,<br />

red_2 => red_wire_2,<br />

red_3 => red_wire_3,<br />

red_4 => red_wire_4,<br />

red_5 => red_wire_5,<br />

red_6 => red_wire_6,<br />

red_7 => red_wire_7,<br />

red_8 => red_wire_8,<br />

red_9 => red_wire_9,<br />

red_10 => red_wire_10,<br />

red_11 => red_wire_11,<br />

red_12 => red_wire_12,<br />

red_13 => red_wire_13,<br />

red_14 => red_wire_14,<br />

red_15 => red_wire_15,<br />

red_16 => red_wire_16,<br />

green_1 => green_wire_1,<br />

green_2 => green_wire_2,<br />

green_3 => green_wire_3,<br />

green_4 => green_wire_4,<br />

green_5 => green_wire_5,<br />

green_6 => green_wire_6,<br />

green_7 => green_wire_7,<br />

green_8 => green_wire_8,<br />

green_9 => green_wire_9,<br />

green_10 => green_wire_10,<br />

green_11 => green_wire_11,<br />

green_12 => green_wire_12,<br />

green_13 => green_wire_13,<br />

green_14 => green_wire_14,<br />

green_15 => green_wire_15,<br />

green_16 => green_wire_16,<br />

blue_1 => blue_wire_1,<br />

blue_2 => blue_wire_2,<br />

blue_3 => blue_wire_3,<br />

blue_4 => blue_wire_4,<br />

blue_5 => blue_wire_5,<br />

blue_6 => blue_wire_6,<br />

blue_7 => blue_wire_7,<br />

blue_8 => blue_wire_8,<br />

blue_9 => blue_wire_9,<br />

blue_10 => blue_wire_10,<br />

blue_11 => blue_wire_11,<br />

blue_12 => blue_wire_12,<br />

blue_13 => blue_wire_13,<br />

blue_14 => blue_wire_14,<br />

blue_15 => blue_wire_15,<br />

blue_16 => blue_wire_16<br />

);--Instance (Instantiation) of produce_wire_outputs<br />
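--Sixteen wires per colour are brought out by this instantiation, presumably
--matching the 16-pixel (hyper_receive_16_pix) build of the receiver; the
--commented-out instantiation above is the earlier variant with only four wires
--per colour.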

--Signals to electrodes:<br />

IO_L13P_11


IO_L16P_13


--Appendix E FPGA Outputs<br />

‘Sender’ design summary

The ISE “Design Summary” facility provides an overview of each build; a recent (December 2010) screenprint is shown here:

imp_send_16 Project Status (12/31/2010 - 14:29:37)

Project File:           imp_send_16.ise
Module Name:            top_wrapper
Target Device:          xc5vlx50t-3ff1136
Product Version:        ISE 11.4
Design Goal:            Balanced
Design Strategy:        Xilinx Default (unlocked)
Implementation State:   Programming File Generated
Errors:                 No Errors
Warnings:               27 Warnings
Routing Results:        All Signals Completely Routed
Timing Constraints:     All Constraints Met
Final Timing Score:     0 (Setup: 0, Hold: 0) (Timing Report)

Device Utilization Summary [-]

Slice Logic Utilization                           Used    Available   Utilization   Note(s)
Number of Slice Registers                          139       28,800       1%
Number used as Flip Flops                          136
Number used as Latches                               3
Number of Slice LUTs                               201       28,800       1%
Number used as logic                               198       28,800       1%
Number using O6 output only                        105
Number using O5 output only                         90
Number using O5 and O6                               3
Number used as exclusive route-thru                  3
Number of route-thrus                               93
Number using O6 output only                         93
Number of occupied Slices                           67        7,200       1%
Number of LUT Flip Flop pairs used                 219
Number with an unused Flip Flop                     80          219      36%
Number with an unused LUT                           18          219       8%
Number of fully used LUT-FF pairs                  121          219      55%
Number of unique control sets                        6
Number of slice register sites lost
to control set restrictions                          9       28,800       1%
Number of bonded IOBs                               23          480       4%
IOB Flip Flops                                       4
Number of BlockRAM/FIFO                              1           60       1%
Number using BlockRAM only                           1
Number of 18k BlockRAM used                          1
Total Memory used (KB)                              18        2,160       1%
Number of BUFG/BUFGCTRLs                             4           32      12%
Number used as BUFGs                                 4
Number of DCM_ADVs                                   1           12       8%
Average Fanout of Non-Clock Nets                  2.65

Performance Summary [-]

Final Timing Score:     0 (Setup: 0, Hold: 0)
Routing Results:        All Signals Completely Routed
Timing Constraints:     All Constraints Met
Pinout Data:            Pinout Report
Clock Data:             Clock Report

Detailed Reports [-]

Report Name                     Status       Generated                   Errors   Warnings       Infos
Synthesis Report                Current      Fri 31. Dec 14:24:49 2010   0        23 Warnings    6 Infos
Translation Report              Current      Fri 31. Dec 14:25:04 2010   0        0              0
Map Report                      Current      Fri 31. Dec 14:25:48 2010   0        4 Warnings     1 Info
Place and Route Report          Current      Fri 31. Dec 14:27:00 2010   0        0              4 Infos
Power Report                    Out of Date  Fri 8. Oct 14:06:21 2010    0        5 Warnings     1 Info
Post-PAR Static Timing Report   Current      Fri 31. Dec 14:27:20 2010   0        0              3 Infos
Bitgen Report                   Current      Fri 31. Dec 14:29:37 2010   0        3 Warnings     1 Info

Secondary Reports [-]<br />

Report Name Status Generated<br />

Date Generated: 12/31/2010 - 14:29:37<br />

Figure 58 sender design summary<br />

Synthesis report extracts<br />

Device utilization summary: (sending 1024 pixels worth of information)<br />

---------------------------<br />

Selected Device: 5vlx50tff1136-3<br />

Slice Logic Utilization:<br />

Number of Slice Registers: 145 out of 28800 0%<br />

Number of Slice LUTs: 207 out of 28800 0%<br />

Number used as Logic: 207 out of 28800 0%<br />

Slice Logic Distribution:<br />

Number of LUT Flip Flop pairs used: 219<br />

Number with an unused Flip Flop: 74 out of 219 33%<br />

Number with an unused LUT: 12 out of 219 5%<br />

Number of fully used LUT-FF pairs: 133 out of 219 60%<br />

Number of unique control sets: 8<br />

IO Utilization:<br />

Number of IOs: 30<br />

Number of bonded IOBs: 29 out of 480 6%<br />

IOB Flip Flops/Latches: 10<br />

Specific Feature Utilization:<br />

Number of Block RAM/FIFO: 1 out of 60 1%<br />

Number using Block RAM only: 1<br />

Number of BUFG/BUFGCTRLs: 4 out of 32 12%<br />

Number of DCM_ADVs: 1 out of 12 8%<br />

Device utilization summary: (sending 256 pixels worth of information)<br />

---------------------------<br />

Selected Device: 5vlx50tff1136-3<br />

IO Utilization:<br />

Number of IOs: 28<br />

Number of bonded IOBs: 27 out of 480 5%<br />

Specific Feature Utilization:<br />

Number of Block RAM/FIFO: 1 out of 60 1%<br />

Number using Block RAM only: 1<br />

Number of BUFG/BUFGCTRLs: 2 out of 32 6%<br />

Number of DCM_ADVs: 1 out of 12 8%<br />

Device utilization summary: (sending 64 pixels worth of information)<br />

---------------------------<br />

Selected Device: 5vlx50tff1136-3<br />

IO Utilization:<br />

Number of IOs: 26<br />

Number of bonded IOBs: 25 out of 480 5%<br />

Specific Feature Utilization:<br />

Number of BUFG/BUFGCTRLs: 2 out of 32 6%<br />

Number of DCM_ADVs: 1 out of 12 8%<br />

Device utilization summary: (sending 16 pixels worth of information)<br />

---------------------------<br />

Selected Device: 5vlx50tff1136-3<br />

Slice Logic Utilization:<br />

Number of Slice Registers: 139 out of 28800 0%<br />

Number of Slice LUTs: 200 out of 28800 0%<br />

Number used as Logic: 200 out of 28800 0%<br />

Slice Logic Distribution:<br />

Number of LUT Flip Flop pairs used: 213<br />

Number with an unused Flip Flop: 74 out of 213 34%<br />

Number with an unused LUT: 13 out of 213 6%<br />

Number of fully used LUT-FF pairs: 126 out of 213 59%<br />

Number of unique control sets: 9<br />

IO Utilization:<br />

Number of IOs: 24<br />

Number of bonded IOBs: 23 out of 480 4%<br />

IOB Flip Flops/Latches: 4<br />

Specific Feature Utilization:<br />

Number of Block RAM/FIFO: 1 out of 60 1%<br />

Number using Block RAM only: 1<br />

Number of BUFG/BUFGCTRLs: 4 out of 32 12%<br />

Number of DCM_ADVs: 1 out of 12 8%<br />

Translation report<br />

Release 11.4 ngdbuild L.68 (NT)<br />

Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.<br />

Command Line: C:\Xilinx\11.1\ISE\bin\nt\unwrapped\ngdbuild.exe -ise<br />

imp_send_1024.ise -intstyle ise -dd _ngo -NT timestamp -i -p xc5vlx50t-ff1136-3<br />

top_wrapper.ngc top_wrapper.ngd<br />

Reading NGO files<br />

"C:/MATLAB/R2009a/fpga_ver_11/imp_send_1024/top_wrapper.ngc"<br />

...<br />

Loading design module<br />

"C:\MATLAB\R2009a\fpga_ver_11\imp_send_1024/mem_of_32_stream.ngc"...<br />

Gathering constraint information from source properties...<br />

Done.<br />

Resolving constraint associations...<br />

Checking Constraint Associations...<br />

Done...<br />

Checking Partitions...<br />

Checking expanded design...<br />

Partition Implementation Status<br />

-------------------------------<br />

No Partitions were found in this design.<br />

-------------------------------<br />

NGDBUILD Design Results Summary:<br />

Number of errors: 0<br />

Number of warnings: 0<br />

Total memory usage is 88900 kilobytes<br />

Writing NGD file "top_wrapper.ngd”...<br />

Total REAL time to NGDBUILD completion: 7 sec<br />

Total CPU time to NGDBUILD completion: 7 sec<br />

Writing NGDBUILD log file "top_wrapper.bld"...<br />

Map report extracts<br />

Release 11.4 Map L.68 (nt)<br />

Xilinx Mapping Report File for Design 'top_wrapper'<br />

Design Information<br />

------------------<br />

Command Line : map -ise imp_send_1024.ise -intstyle ise -p xc5vlx50t-ff1136-3<br />

-w -logic_opt off -ol high -t 1 -register_duplication off -global_opt off -mt<br />

off -cm area -ir off -pr off -lc off -power off -o top_wrapper_map.ncd<br />

top_wrapper.ngd top_wrapper.pcf<br />

Target Device: xc5vlx50t<br />

Target Package: ff1136<br />

Target Speed : -3<br />

Mapper Version: virtex5 -- $Revision: 1.51.18.1 $<br />

Mapped Date : Thu May 20 14:35:25 2010<br />

Design Summary<br />

--------------<br />

Number of errors: 0<br />

Number of warnings: 4<br />

Slice Logic Utilization:<br />

Number of Slice Registers: 145 out of 28,800 1%<br />

Number used as Flip Flops: 142<br />

Number used as Latches: 3<br />

Number of Slice LUTs: 208 out of 28,800 1%<br />

Number used as logic: 204 out of 28,800 1%<br />

Number using O6 output only: 102<br />

Number using O5 output only: 98<br />

Number using O5 and O6: 4<br />

Number used as exclusive route-thru: 4<br />

Number of route-thrus: 102<br />

Number using O6 output only: 102<br />

Slice Logic Distribution:<br />

Number of occupied Slices: 70 out of 7,200 1%<br />

Number of LUT Flip Flop pairs used: 217<br />

Number with an unused Flip Flop: 72 out of 217 33%<br />

Number with an unused LUT: 9 out of 217 4%<br />

Number of fully used LUT-FF pairs: 136 out of 217 62%<br />

Number of unique control sets: 5<br />

Number of slice register sites lost<br />

to control set restrictions: 3 out of 28,800 1%<br />

A LUT Flip Flop pair for this architecture represents one LUT paired with<br />

one Flip Flop within a slice. A control set is a unique combination of<br />

clock, reset, set, and enable signals for a registered element.<br />

The Slice Logic Distribution report is not meaningful if the design is<br />

over-mapped for a non-slice resource or if Placement fails.<br />

OVERMAPPING of BRAM resources should be ignored if the design is<br />

over-mapped for a non-BRAM resource or if placement fails.<br />

IO Utilization:<br />

Number of bonded IOBs: 29 out of 480 6%<br />

IOB Flip Flops: 10<br />

Release 11.4 Map L.68 (nt)<br />

Xilinx Mapping Report File for Design 'top_wrapper'<br />

Design Information<br />

------------------<br />

Command Line : map -ise imp_send_16.ise -intstyle ise -p xc5vlx50t-ff1136-3 -w<br />

-logic_opt off -ol high -t 1 -register_duplication off -global_opt off -mt off<br />

-cm area -ir off -pr off -lc off -power off -o top_wrapper_map.ncd<br />

top_wrapper.ngd top_wrapper.pcf<br />

Target Device: xc5vlx50t<br />

Target Package: ff1136<br />

Target Speed : -3<br />

Mapper Version: virtex5 -- $Revision: 1.51.18.1 $<br />

Mapped Date : Thu May 20 14:30:05 2010<br />

Design Summary<br />

--------------<br />

Number of errors: 0<br />

Number of warnings: 4<br />

Slice Logic Utilization:<br />

Number of Slice Registers: 139 out of 28,800 1%<br />

Number used as Flip Flops: 136<br />

Number used as Latches: 3<br />

Number of Slice LUTs: 201 out of 28,800 1%<br />

Number used as logic: 198 out of 28,800 1%<br />

Number using O6 output only: 105<br />

Number using O5 output only: 90<br />

Number using O5 and O6: 3<br />

Number used as exclusive route-thru: 3<br />

Number of route-thrus: 93<br />

Number using O6 output only: 93<br />

Slice Logic Distribution:<br />

Number of occupied Slices: 67 out of 7,200 1%<br />

Number of LUT Flip Flop pairs used: 219<br />

Number with an unused Flip Flop: 80 out of 219 36%<br />

Number with an unused LUT: 18 out of 219 8%<br />

Number of fully used LUT-FF pairs: 121 out of 219 55%<br />

Number of unique control sets: 6<br />

Number of slice register sites lost<br />

to control set restrictions: 9 out of 28,800 1%<br />

A LUT Flip Flop pair for this architecture represents one LUT paired with<br />

one Flip Flop within a slice. A control set is a unique combination of<br />

clock, reset, set, and enable signals for a registered element.<br />

The Slice Logic Distribution report is not meaningful if the design is<br />

over-mapped for a non-slice resource or if Placement fails.<br />

OVERMAPPING of BRAM resources should be ignored if the design is<br />

over-mapped for a non-BRAM resource or if placement fails.<br />

IO Utilization:<br />

Number of bonded IOBs: 23 out of 480 4%<br />

IOB Flip Flops: 4<br />

Place and route report example<br />

Release 11.4 par L.68 (nt)<br />

Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.<br />

EECEAYPG11: Thu May 20 14:36:09 2010<br />

par -ise imp_send_1024.ise -w -intstyle ise -ol std -t 1 top_wrapper_map.ncd<br />

top_wrapper.ncd top_wrapper.pcf<br />

Constraints file: top_wrapper.pcf.<br />

"top_wrapper" is an NCD, version 3.2, device xc5vlx50t, package ff1136, speed -3<br />

vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv<br />

vv<br />

INFO: Security: 54 - 'xc5vlx50t' is a WebPack part.<br />

----------------------------------------------------------------------<br />

INFO: Par: 465 - The PAR option, "-t" (Starting Placer Cost Table), will be disabled<br />

in the next software release when<br />

used in combination with MAP -timing (Perform Timing-Driven Packing and<br />

Placement) or when run with V5 or newer<br />

architectures. To explore cost tables, please use the MAP option, "-t" (Starting<br />

Placer Cost Table), instead.<br />

Initializing temperature to 85.000 Celsius. (default - Range: 0.000 to 85.000 Celsius)<br />

Initializing voltage to 0.950 Volts. (default - Range: 0.950 to 1.050 Volts)<br />

INFO: Par: 282 - No user timing constraints were detected or you have set the option<br />

to ignore timing constraints ("par<br />

-x"). Place and Route will run in "Performance Evaluation Mode" to automatically<br />

improve the performance of all<br />

internal clocks in this design. Because there are not defined timing requirements, a<br />

timing score will not be<br />

reported in the PAR report in this mode. The PAR timing summary will list the<br />

performance achieved for each clock.<br />

Note: For the fastest runtime, set the effort level to "std". For best performance,<br />

set the effort level to "high".<br />

Device speed data version: "PRODUCTION 1.66 2009-11-16".<br />

Device Utilization Summary:<br />

Number of BUFGs 4 out of 32 12%<br />

Number of DCM_ADVs 1 out of 12 8%<br />

Number of External IOBs 29 out of 480 6%<br />

Number of LOCed IOBs 0 out of 29 0%<br />

Number of OLOGICs 10 out of 560 1%<br />

Number of RAMB36_EXPs 1 out of 60 1%<br />

Number of Slice Registers 145 out of 28800 1%<br />

Number used as Flip Flops 142<br />

Number used as Latches 3<br />

Number used as LatchThrus 0<br />

Number of Slice LUTS 208 out of 28800 1%<br />

Number of Slice LUT-Flip Flop pairs 217 out of 28800 1%<br />

Overall effort level (-ol): Standard<br />

Router effort level (-rl): Standard<br />

Starting initial Timing Analysis. REAL time: 17 secs<br />

Finished initial Timing Analysis. REAL time: 17 secs<br />

Starting Router<br />

Wirelength Stats for nets on all pins. NumPins: 812<br />

Phase 1: 1080 unrouted; REAL time: 18 secs<br />

Phase 2: 763 unrouted; REAL time: 19 secs<br />

Phase 3: 172 unrouted; REAL time: 19 secs<br />

Phase 4: 172 unrouted; (Par is working to improve performance) REAL time: 25<br />

secs<br />

Phase 5: 0 unrouted; (Par is working to improve performance) REAL time: 26<br />

secs<br />

Phase 6: 0 unrouted; (Par is working to improve performance) REAL time: 26<br />

secs<br />

Phase 7: 0 unrouted; (Par is working to improve performance) REAL time: 26<br />

secs<br />

Phase 8: 0 unrouted; (Par is working to improve performance) REAL time: 26<br />

secs<br />

Phase 9: 0 unrouted; (Par is working to improve performance) REAL time: 27<br />

secs<br />

Phase 10: 0 unrouted; (Par is working to improve performance) REAL time: 27<br />

secs<br />

Total REAL time to Router completion: 27 secs<br />

Total CPU time to Router completion: 26 secs<br />

Partition Implementation Status<br />

-------------------------------<br />

No Partitions were found in this design.<br />

-------------------------------<br />

Generating "PAR" statistics.<br />

**************************<br />

Generating Clock Report<br />

**************************<br />

+---------------------+--------------+------+------+------------+-------------+<br />

| Clock Net | Resource |Locked|Fanout|Net Skew (ns) |Max Delay (ns)|<br />

+---------------------+--------------+------+------+------------+-------------+<br />

|Inst_get_spike_on_fr | | | | | |<br />

|om_partial_spike_clo | | | | | |<br />

| ck/clk_sig |BUFGCTRL_X0Y30| No | 24 | 0.077 | 1.479 |<br />

+---------------------+--------------+------+------+------------+-------------+<br />

|Inst_get_partial_spi | | | | | |<br />

|ke_clock_from_incomi | | | | | |<br />

|ng_clock/slow_clk_si | | | | | |<br />

| g |BUFGCTRL_X0Y31| No | 16 | 0.034 | 1.367 |<br />

+---------------------+--------------+------+------+------------+-------------+<br />

| incoming_clock | BUFGCTRL_X0Y0| No | 8 | 0.028 | 1.355 |<br />

+---------------------+--------------+------+------+------------+-------------+<br />

|Inst_get_pixel_clock | | | | | |<br />

|_from_partial_spike_ | | | | | |<br />

| clock/slow_clk_sig | Local| | 6 | 0.411 | 0.789 |<br />

+---------------------+--------------+------+------+------------+-------------+<br />

|Inst_get_pixel_clock | | | | | |<br />

|_from_partial_spike_ | | | | | |<br />

|clock/cnt_cmp_eq0000 | | | | | |<br />

| | Local| | 9 | 0.000 | 1.044 |<br />

+---------------------+--------------+------+------+------------+-------------+<br />

|Inst_get_spike_on_fr | | | | | |<br />

|om_partial_spike_clo | | | | | |<br />

| ck/cnt_cmp_eq0000 | Local| | 9 | 0.000 | 0.726 |<br />

+---------------------+--------------+------+------+------------+-------------+<br />

|Inst_get_partial_spi | | | | | |<br />

|ke_clock_from_incomi | | | | | |<br />

|ng_clock/cnt_cmp_eq0 | | | | | |<br />

| 000 | Local| | 9 | 0.000 | 1.228 |<br />

+---------------------+--------------+------+------+------------+-------------+<br />

* Net Skew is the difference between the minimum and maximum routing<br />

only delays for the net. Note this is different from Clock Skew which<br />

is reported in TRCE timing report. Clock Skew is the difference between<br />

the minimum and maximum path delays which includes logic delays.<br />

Timing Score: 0 (Setup: 0 Hold: 0)<br />

Asterisk (*) preceding a constraint indicates it was not met.<br />

This may be due to a setup or hold violation.<br />

----------------------------------------------------------------------------------------------------------
Constraint                                            | Check       | Worst Case | Best Case  | Timing | Timing
                                                      |             | Slack      | Achievable | Errors | Score
----------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net                 | SETUP       | N/A        | 1.958ns    | N/A    | 0
Inst_get_spike_on_from_partial_spike_clock/clk_sig    | HOLD        | 0.416ns    |            | 0      | 0
----------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net                 | SETUP       | N/A        | 1.242ns    | N/A    | 0
Inst_get_pixel_clock_from_partial_spike_clock/        | HOLD        | 0.017ns    |            | 0      | 0
slow_clk_sig                                          | MINPERIOD   | N/A        | 1.818ns    | N/A    | 0
----------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net                 | SETUP       | N/A        | 1.692ns    | N/A    | 0
Inst_get_partial_spike_clock_from_incoming_clock/     | HOLD        | 0.504ns    |            | 0      | 0
slow_clk_sig                                          |             |            |            |        |
----------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net                 | SETUP       | N/A        | 1.725ns    | N/A    | 0
incoming_clock                                        | HOLD        | 0.519ns    |            | 0      | 0
----------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net                 | SETUP       | N/A        | 0.832ns    | N/A    | 0
Inst_get_pixel_clock_from_partial_spike_clock/        | HOLD        | 0.604ns    |            | 0      | 0
cnt_cmp_eq0000                                        | MINLOWPULSE | N/A        | 1.047ns    | N/A    | 0
----------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net                 | SETUP       | N/A        | 0.814ns    | N/A    | 0
Inst_get_spike_on_from_partial_spike_clock/           | HOLD        | 0.588ns    |            | 0      | 0
cnt_cmp_eq0000                                        | MINLOWPULSE | N/A        | 1.055ns    | N/A    | 0
----------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net                 | SETUP       | N/A        | 0.802ns    | N/A    | 0
Inst_get_partial_spike_clock_from_incoming_clock/     | HOLD        | 0.577ns    |            | 0      | 0
cnt_cmp_eq0000                                        | MINLOWPULSE | N/A        | 1.046ns    | N/A    | 0
----------------------------------------------------------------------------------------------------------

All constraints were met.<br />

INFO: Timing: 2761 - N/A entries in the Constraints list may indicate that the<br />

constraint does not cover any paths or that it has no requested value.<br />

Generating Pad Report.<br />

All signals are completely routed.<br />

Total REAL time to PAR completion: 35 secs<br />

Total CPU time to PAR completion: 35 secs<br />

Peak Memory Usage: 269 MB<br />

Placer: Placement generated during map.<br />

Routing: Completed - No errors found.<br />

Number of error messages: 0<br />

Number of warning messages: 0<br />

Number of info messages: 2<br />

Writing design to file top_wrapper.ncd<br />

PAR done!<br />

Post-PAR Static Timing Report example<br />

Release 11.4 Trace (nt)<br />

Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.<br />

C:\Xilinx\11.1\ISE\bin\nt\unwrapped\trce.exe -ise<br />

C:/MATLAB/R2009a/fpga_ver_11/imp_send_1024/imp_send_1024.ise -intstyle ise<br />

-e 3<br />

-s 3 -xml top_wrapper.twx top_wrapper.ncd -o top_wrapper.twr top_wrapper.pcf<br />

Design file: top_wrapper.ncd<br />

Physical constraint file: top_wrapper.pcf<br />

Device, package, speed: xc5vlx50t, ff1136,-3 (PRODUCTION 1.66 2009-11-16,<br />

STEPPING level 0)<br />

Report level: error report<br />

Environment Variable Effect<br />

-------------------- ------<br />

NONE<br />

No environment variables were set<br />

--------------------------------------------------------------------------------<br />

INFO: Timing: 2698 - No timing constraints found, doing default enumeration.<br />

INFO: Timing: 2752 - To get complete path coverage, use the unconstrained paths<br />

option. All paths that are not constrained will be reported in the<br />

unconstrained paths section(s) of the report.<br />

INFO: Timing: 3339 - The clock-to-out numbers in this timing report are based on<br />

a 50 Ohm transmission line loading model. For the details of this model,<br />

and for more information on accounting for different loading conditions,<br />

please see the device datasheet.<br />

Data Sheet report:<br />

-----------------<br />

All values displayed in nanoseconds (ns)<br />

Clock to Setup on destination clock USER_CLK<br />

---------------+---------+---------+---------+---------+<br />

| Src: Rise| Src: Fall| Src: Rise| Src: Fall|<br />

Source Clock |Dest:Rise|Dest:Rise|Dest:Fall|Dest:Fall|<br />

---------------+---------+---------+---------+---------+<br />

USER_CLK | | | | 1.725|<br />

---------------+---------+---------+---------+---------+<br />

Analysis completed Thu May 20 14:37:07 2010<br />

--------------------------------------------------------------------------------<br />

Trace Settings:<br />

-------------------------<br />

Trace Settings<br />

Peak Memory Usage: 220 MB<br />

Total REAL time to Trace completion: 18 secs<br />

Total CPU time to Trace completion: 18 secs<br />

Power Report example<br />

Release 11.4 - XPower Analyzer L.68 (nt)<br />

Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.<br />

# NOTE: This file is designed to facilitate import into a<br />

spreadsheet program<br />

# such as Microsoft Excel for viewing, printing and sorting. The<br />

"|" character<br />

# should be selected as the data field separator.<br />

C:\Xilinx\11.1\ISE\bin\nt\unwrapped\xpwr.exe<br />

-ise<br />

C:/MATLAB/R2009a/fpga_ver_11/imp_send_1024/imp_send_1024.ise -<br />

intstyle ise<br />

top_wrapper.ncd top_wrapper.pcf -o top_wrapper.pwr<br />

Design | top_wrapper.ncd<br />

|<br />

Preferences | top_wrapper.pcf<br />

|<br />

Part | xc5vlx50tff1136-3<br />

|<br />

Process | Typical<br />

|<br />

Data version | PRODUCTION, v1.63, 12-10-08<br />

|<br />

Default settings | Value |<br />

---------------------------------------------------<br />

FF Toggle Rate (%) | 12.5 |<br />

I/O Toggle Rate (%) | 12.5 |<br />

Output Load (pF) | 5.0 |<br />

I/O Enable Rate (%) | 100.0 |<br />

BRAM Write Rate (%) | 50.0 |<br />

BRAM Enable Rate (%) | 25.0 |<br />

DSP Toggle Rate (%) | 12.5 |<br />

Power summary | I (mA) | P (mW) |<br />

----------------------------------------------------------------<br />

Total estimated power consumption | | 638.51 |<br />

---<br />

Total Vccint 1.00V | 425.02 | 425.02 |<br />

Total Vccaux 2.50V | 83.40 | 208.50 |<br />

Total Vcco25 2.50V | 2.00 | 5.00 |<br />

---<br />

BRAM | | 0.00 |<br />

Clocks | | 3.78 |<br />

DCM | | 68.00 |<br />

IO | | 0.15 |<br />

Logic | | 0.01 |<br />

Signals | | 0.00 |<br />

---<br />

Quiescent Vccint 1.00V | 406.58 | 406.58 |<br />

Quiescent Vccaux 2.50V | 75.40 | 188.50 |<br />

Quiescent Vcco25 2.50V | 2.00 | 5.00 |<br />

Thermal summary<br />

----------------------------------------------------------------<br />

Estimated junction temperature | 52C |<br />

Ambient temp | 50C |<br />

Case temp | 52C |<br />

Theta J-A | 3C/W |<br />

Analysis completed: Fri May 21 11:55:48 2010<br />

----------------------------------------------------------------<br />

Receiver chip design summary<br />

hyper_receive_16_pix Project Status (05/25/2010 - 13:41:40)

Project File:           hyper_receive_16_pix.ise
Module Name:            top_wrapper
Target Device:          xc5vlx50t-3ff1136
Product Version:        ISE 11.4
Design Goal:            Balanced
Design Strategy:        Xilinx Default (unlocked)
Implementation State:   Placed and Routed
Errors:
Warnings:
Routing Results:        All Signals Completely Routed
Timing Constraints:     All Constraints Met
Final Timing Score:     0 (Setup: 0, Hold: 0) (Timing Report)

hyper_receive_16_pix Partition Summary [-]<br />

No partition information was found.<br />

Device Utilization Summary [+]<br />

Performance Summary [-]

Final Timing Score:     0 (Setup: 0, Hold: 0)
Routing Results:        All Signals Completely Routed
Timing Constraints:     All Constraints Met
Pinout Data:            Pinout Report
Clock Data:             Clock Report

Detailed Reports [-]

Report Name                     Status       Generated                   Errors   Warnings        Infos
Synthesis Report                Current      Mon 24. May 14:23:37 2010   0        2347 Warnings   2880 Infos
Translation Report              Current      Tue 25. May 13:35:05 2010   0        0               0
Map Report                      Current      Tue 25. May 13:35:57 2010   0        3 Warnings      6 Infos
Place and Route Report          Current      Tue 25. May 13:36:55 2010   0        0               3 Infos
Power Report                    Current      Tue 25. May 13:41:39 2010   0        0               3 Infos
Post-PAR Static Timing Report   Out of Date  Tue 25. May 13:37:16 2010
Bitgen Report

Figure 59 design summary

Synthesis report extract<br />

Selected Device: 5vlx50tff1136-3<br />

Slice Logic Utilization:<br />

Number of Slice Registers: 102 out of 28800 0%<br />

Number of Slice LUTs: 119 out of 28800 0%<br />

Number used as Logic: 119 out of 28800 0%<br />

Slice Logic Distribution:<br />

Number of LUT Flip Flop pairs used: 121<br />

Number with an unused Flip Flop: 19 out of 121 15%<br />

Number with an unused LUT: 2 out of 121 1%<br />

Number of fully used LUT-FF pairs: 100 out of 121 82%<br />

Number of unique control sets: 9<br />

IO Utilization:<br />

Number of IOs: 193<br />

Number of bonded IOBs: 193 out of 480 40%<br />

IOB Flip Flops/Latches: 90<br />

Specific Feature Utilization:<br />

Number of BUFG/BUFGCTRLs: 3 out of 32 9%<br />

Number of DCM_ADVs: 1 out of 12 8%<br />

Translation report<br />

Release 11.4 ngdbuild L.68 (NT)<br />

Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.<br />

Command Line: C:\Xilinx\11.1\ISE\bin\nt\unwrapped\ngdbuild.exe -ise<br />

hyper_receive_16_pix.ise -intstyle ise -dd _ngo -NT timestamp -i -p<br />

Xc5vlx50t-ff1136-3 top_wrapper.ngc top_wrapper.ngd<br />

Reading NGO file<br />

"C:/MATLAB/R2009a/fpga_ver_11/hyper_receive_16_pix/top_wrapper.ngc”...<br />

Gathering constraint information from source properties...<br />

Done.<br />

Resolving constraint associations...<br />

Checking Constraint Associations...<br />

Done...<br />

Checking Partitions...<br />

Checking expanded design...<br />

Partition Implementation Status<br />

-------------------------------<br />

No Partitions were found in this design.<br />

-------------------------------<br />

NGDBUILD Design Results Summary:<br />

Number of errors: 0<br />

Number of warnings: 0<br />

Total memory usage is 88900 kilobytes<br />

Writing NGD file "top_wrapper.ngd”...<br />

Total REAL time to NGDBUILD completion: 8 sec<br />

Total CPU time to NGDBUILD completion: 5 sec<br />

Writing NGDBUILD log file "top_wrapper.bld"...<br />

Map report extract<br />

Release 11.4 Map L.68 (nt)<br />

Xilinx Mapping Report File for Design 'top_wrapper'<br />

Design Information<br />

------------------<br />

Command Line : map -ise hyper_receive_16_pix.ise -intstyle ise -p<br />

xc5vlx50t-ff1136-3 -w -logic_opt off -ol high -t 1 -register_duplication off<br />

-global_opt off -mt off -cm area -ir off -pr off -lc off -power off -o<br />

top_wrapper_map.ncd top_wrapper.ngd top_wrapper.pcf<br />

Target Device: xc5vlx50t<br />

Target Package: ff1136<br />

Target Speed : -3<br />

Mapper Version: virtex5 -- $Revision: 1.51.18.1 $<br />

Mapped Date : Tue May 25 13:35:12 2010<br />

Design Summary<br />

--------------<br />

Number of errors: 0<br />

Number of warnings: 3<br />

Slice Logic Utilization:<br />

Number of Slice Registers: 102 out of 28,800 1%<br />

Number used as Flip Flops: 99<br />

Number used as Latches: 3<br />

Number of Slice LUTs: 120 out of 28,800 1%<br />

Number used as logic: 117 out of 28,800 1%<br />

Number using O6 output only: 24<br />

Number using O5 output only: 90<br />

Number using O5 and O6: 3<br />

Number used as exclusive route-thru: 3<br />

Number of route-thrus: 93<br />

Number using O6 output only: 93<br />

Slice Logic Distribution:<br />

Number of occupied Slices: 42 out of 7,200 1%<br />

Number of LUT Flip Flop pairs used: 123<br />

Number with an unused Flip Flop: 21 out of 123 17%<br />

Number with an unused LUT: 3 out of 123 2%<br />

Number of fully used LUT-FF pairs: 99 out of 123 80%<br />

Number of unique control sets: 6<br />

Number of slice register sites lost<br />

to control set restrictions: 6 out of 28,800 1%<br />

Place and route report<br />

Release 11.4 par L.68 (nt)<br />

Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.<br />

EECEAYPG11:: Tue May 25 13:36:10 2010<br />

par -ise hyper_receive_16_pix.ise -w -intstyle ise -ol std -t 1<br />

top_wrapper_map.ncd top_wrapper.ncd top_wrapper.pcf<br />

Constraints file: top_wrapper.pcf.<br />

"top_wrapper" is an NCD, version 3.2, device xc5vlx50t, package ff1136, speed -3<br />

INFO:Par:465 - The PAR option, "-t" (Starting Placer Cost Table), will be disabled<br />

in the next software release when<br />

used in combination with MAP -timing(Perform Timing-Driven Packing and<br />

Placement) or when run with V5 or newer<br />

architectures. To explore cost tables, please use the MAP option, "-t" (Starting<br />

Placer Cost Table), instead.<br />

Initializing temperature to 85.000 Celsius. (default - Range: 0.000 to 85.000 Celsius)<br />

Initializing voltage to 0.950 Volts. (default - Range: 0.950 to 1.050 Volts)<br />

INFO:Par:282 - No user timing constraints were detected or you have set the option<br />

to ignore timing constraints ("par<br />

-x"). Place and Route will run in "Performance Evaluation Mode" to automatically<br />

improve the performance of all<br />

internal clocks in this design. Because there are not defined timing requirements, a<br />

timing score will not be<br />

reported in the PAR report in this mode. The PAR timing summary will list the<br />

performance achieved for each clock.<br />

Note: For the fastest runtime, set the effort level to "std". For best performance,<br />

set the effort level to "high".<br />

Device speed data version: "PRODUCTION 1.66 2009-11-16".<br />

Device Utilization Summary:<br />

Number of BUFGs 3 out of 32 9%<br />

Number of DCM_ADVs 1 out of 12 8%<br />

Number of External IOBs 193 out of 480 40%<br />

Number of LOCed IOBs 0 out of 193 0%<br />

Number of OLOGICs 90 out of 560 16%<br />

Number of Slice Registers 102 out of 28800 1%<br />

Number used as Flip Flops 99<br />

Number used as Latches 3<br />

Number used as LatchThrus 0<br />

Number of Slice LUTS 120 out of 28800 1%<br />

Number of Slice LUT-Flip Flop pairs 123 out of 28800 1%<br />

Overall effort level (-ol): Standard<br />

Router effort level (-rl): Standard<br />

Starting initial Timing Analysis. REAL time: 17 secs<br />

Finished initial Timing Analysis. REAL time: 17 secs<br />

Starting Router<br />

Power report extract<br />

Release 11.4 - XPower Analyzer L.68 (nt)<br />

Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.<br />

# NOTE: This file is designed to facilitate import into a spreadsheet program<br />

# such as Microsoft Excel for viewing, printing and sorting. The "|" character<br />

# should be selected as the data field separator.<br />

C:\Xilinx\11.1\ISE\bin\nt\unwrapped\xpwr.exe<br />

-ise<br />

C:/MATLAB/R2009a/fpga_ver_11/hyper_receive_16_pix/hyper_receive_16_pix.ise<br />

-intstyle ise top_wrapper.ncd top_wrapper.pcf -v -l 1000 -o top_wrapper.pwr<br />

Design | top_wrapper.ncd |<br />

Preferences | top_wrapper.pcf |<br />

Part | xc5vlx50tff1136-3 |<br />

Process | Typical |<br />

Data version | PRODUCTION,v1.63,12-10-08 |<br />

Default settings | Value |<br />

---------------------------------------------------<br />

FF Toggle Rate (%) | 12.5 |<br />

I/O Toggle Rate (%) | 12.5 |<br />

Output Load (pF) | 5.0 |<br />

I/O Enable Rate (%) | 100.0 |<br />

BRAM Write Rate (%) | 50.0 |<br />

BRAM Enable Rate (%) | 25.0 |<br />

DSP Toggle Rate (%) | 12.5 |<br />

Power summary | I(mA) | P(mW) |<br />

----------------------------------------------------------------<br />

Total estimated power consumption | | 637.79 |<br />

---<br />

Total Vccint 1.00V | 424.29 | 424.29 |<br />

Total Vccaux 2.50V | 83.40 | 208.50 |<br />

Total Vcco25 2.50V | 2.00 | 5.00 |<br />

---<br />

Clocks | | 3.06 |<br />

DCM | | 68.00 |<br />

IO | | 0.15 |<br />

Logic | | 0.01 |<br />

Signals | | 0.00 |<br />

---<br />

Quiescent Vccint 1.00V | 406.57 | 406.57 |<br />

Quiescent Vccaux 2.50V | 75.40 | 188.50 |<br />

Quiescent Vcco25 2.50V | 2.00 | 5.00 |<br />

Post-PAR Static Timing Report example

Release 11.4 Trace (nt)
Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.

C:\Xilinx\11.1\ISE\bin\nt\unwrapped\trce.exe -ise C:/MATLAB/R2009a/fpga_ver_11/hyper_receive_16_pix/hyper_receive_16_pix.ise -intstyle ise -e 3 -s 3 -xml top_wrapper.twx top_wrapper.ncd -o top_wrapper.twr top_wrapper.pcf

Design file:              top_wrapper.ncd
Physical constraint file: top_wrapper.pcf
Device,package,speed:     xc5vlx50t,ff1136,-3 (PRODUCTION 1.66 2009-11-16, STEPPING level 0)
Report level:             error report

Environment Variable      Effect
--------------------      ------
NONE                      No environment variables were set

--------------------------------------------------------------------------------
INFO:Timing:2698 - No timing constraints found, doing default enumeration.
INFO:Timing:2752 - To get complete path coverage, use the unconstrained paths
   option. All paths that are not constrained will be reported in the
   unconstrained paths section(s) of the report.
INFO:Timing:3339 - The clock-to-out numbers in this timing report are based on
   a 50 Ohm transmission line loading model. For the details of this model,
   and for more information on accounting for different loading conditions,
   please see the device datasheet.

Data Sheet report:
------------------
All values displayed in nanoseconds (ns)

Clock to Setup on destination clock USER_CLK
---------------+---------+---------+---------+---------+
               | Src:Rise| Src:Fall| Src:Rise| Src:Fall|
Source Clock   |Dest:Rise|Dest:Rise|Dest:Fall|Dest:Fall|
---------------+---------+---------+---------+---------+
USER_CLK       |         |         |         |    1.715|
---------------+---------+---------+---------+---------+

Analysis completed Tue May 25 13:37:16 2010
--------------------------------------------------------------------------------

Trace Settings:
-------------------------
Trace Settings

Peak Memory Usage: 220 MB
Total REAL time to Trace completion: 18 secs
Total CPU time to Trace completion: 18 secs

Appendix F Alternative literature review

The approach taken in this work is to determine, and then decide how to implement, a retinal prosthesis capable of restoring sufficient vision to a visually impaired person to improve their quality of life. Roughly ten percent of the population suffer from visual impairment (around six million people in the UK), and of those only about ten percent suffer from AMD (age-related macular degeneration, which can also affect younger people) or RP (retinitis pigmentosa), the family of related diseases to which this study applies. Although this represents fewer than one million people in the UK, the European and worldwide market is clearly much larger. It also bears saying that the gift of sight is priceless for each person affected, to a greater or lesser degree.

Literature Review

The following subsections pick out key points from relevant papers within the literature. These papers adopt varying approaches to the idea of retina replacement and how this may best be accomplished.

Visual Prostheses: Current Progress and Challenges

“The epi-retinal referring to the side that faces the vitreous and the sub-retinal to the side that is adjacent to the choroid.” … “This in turn implies that we need 500 mW for a 1000 electrode array.” [51]

Old Idea, New Technology

“For these implantable devices to be most effective, they must be biomimetic; that is, they must restore a lost function that mimics the biology”, “When treating a medical problem, focus on the needs of the patient.” [126]

Intraocular Retinal Prosthesis

“Recent clinical trials have shown that a prototype epiretinal implant, despite having few electrodes contacting the retina, still allows test subjects to perform simple visual tasks. Ongoing engineering research is focusing on the fabrication of a high-resolution implant.”, “The retina lines the back of the vitreous cavity, a 6-cm³ space in the back of the eye that is normally filled with vitreous gel but is routinely replaced with saline (after removing the vitreous during retinal surgery) (Figure 1). This fluid-filled cavity will permit a device of significant size to be placed near the retina without disrupting other tissue.”; “An epiretinal implant will rest on the inner limiting membrane of the retina, while a subretinal implant would be inserted in the space occupied by photoreceptors in a healthy retina.” [52]

A Neuro-Stimulus Chip with Telemetry Unit for Retinal Prosthetic Device

“It is fabricated by MOSIS with 1.2-µm CMOS technology and was demonstrated to provide the desired biphasic current stimulus for an array of 100 retinal electrodes at video frame rates.”, “Over 10,000,000 people worldwide are blind because of photoreceptor loss due to degenerative retinal diseases such as age-related macular degeneration (AMD) and retinitis pigmentosa (RP).”, “Medical experiments have estimated that the equivalent impedance for the retina tissue of RP and AMD patients is about 10 kΩ and, in the worst case, the current threshold value is about 600 µA. Thus, a voltage drop up to 6 V is expected across the retinal tissue.”, “The total power consumption of the device is contributed from two sources, namely the power consumption of the stimulus chip itself and the power dissipated into the load (retinal tissue).”, “When added to the previous 3 mW associated with the stimulator circuits, this accounts for intraocular power dissipation on the order of 5 mW.”, “This chip uses a simple synchronous protocol and timing control.” [53]
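
The impedance and threshold-current figures quoted above determine the compliance voltage and load power that any stimulator output stage must support. The short Python sketch below simply reproduces that arithmetic; it is illustrative only, using the values quoted from [53].

    # Worst-case electrode drive estimate, using the figures quoted from [53]:
    # ~10 kOhm tissue impedance and ~600 uA threshold current (illustrative only).
    R_tissue = 10e3      # equivalent retinal tissue impedance, ohms
    I_thresh = 600e-6    # worst-case threshold current, amps

    V_drop = I_thresh * R_tissue          # Ohm's law: voltage across the tissue
    P_load = I_thresh ** 2 * R_tissue     # power dissipated into the load

    print("Voltage drop across tissue: %.1f V" % V_drop)                     # 6.0 V, as quoted
    print("Power into load per active electrode: %.1f mW" % (P_load * 1e3))  # 3.6 mW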

A Computational Model of Electrical Stimulation of the Retinal Ganglion Cell

“Since the dendritic arbor of a single ganglion cell may spread up to 500 µm in diameter and overlap the dendritic field of other ganglion cells, stimulation of dendrites might lead to larger perceived spots than if the soma was preferentially stimulated.”, “Still another possibility is that the soma is excited directly, which would yield a potential resolution of around 10 µm in humans.” [49]

Architecture Tradeoffs in High-Density Microstimulators for Retinal Prosthesis

“In both methods, retinal neurons are electrically stimulated which bypasses the damaged photoreceptors thereby creating visual excitation. The discussions in this paper will refer to the epi-retinal approach.” … “Fig. 1 shows the block diagram of the epiretinal prosthesis system. It comprises of two units – implant and external. While the external unit is powered by battery, power for the implant unit is transmitted wirelessly through a high efficiency power amplifier, rectified and regulated into DC voltages required by the electronics.”, “For 1024 stimulation sites (32 x 32 array), the number of interconnect leads between the microstimulator and tissue are 1025 and 2048 for configurations in Fig. 8(a) and (b) respectively.”, “For example, a 256-output microstimulator with a 1:4 demultiplexer built on the electrode array results in 1024-pixel prosthesis.”, “For 1024 sites, 10 address bits are required. If the stimulus data for each driver is around 20 bits, this results in 50% increase in the data rate.”, “The chip consumes around 20 mW when delivering 600 µA bi-phasic current pulses of 1 ms phase widths (anodic, cathodic and interphase delay) at a stimulation rate of 60 Hz.” [49]
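
The quoted addressing overhead follows directly from the array size: 1024 sites need ceil(log2(1024)) = 10 address bits, which is half of the assumed 20-bit stimulus word. A minimal Python check of that figure (illustrative only, not code from the cited paper):

    import math

    sites = 1024                     # 32 x 32 stimulation sites, as quoted
    data_bits_per_driver = 20        # assumed stimulus word length, as quoted

    addr_bits = math.ceil(math.log2(sites))       # 10 address bits for 1024 sites
    overhead = addr_bits / data_bits_per_driver   # extra bits relative to the payload

    print("Address bits:", addr_bits)                       # 10
    print("Data-rate increase: %.0f%%" % (overhead * 100))  # 50%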

Electrical Stimulation in Isolated Rabbit Retina

“Analysis of the data from this study predicts that a 50-µm electrode may have enough safe charge capacity to evoke a retinal response.”, “With current electrode technology, more than one thousand 50-µm-diameter electrodes could be placed in the macula (5-mm-diameter area around fovea). Simulations of prosthetic vision suggest that face recognition and reading will be possible with 1000 macular electrodes.”, “Overall, epiretinal electrodes and subretinal electrodes appear to have similar stimulus pulse requirements.” [229]

A Biomimetic Retinal Stimulating Array

“.. and in severe cases will lose most of their vision in the central 20º of the visual field. The macula is the part of the retina that anatomically corresponds to the central 20º of the visual field” … “When 32 × 32 electrodes were used, the recognition scores improved to over 80%.”, “The electrodes should support 100 nC (100 μA for 1 ms) pulses without damaging either the electrodes or tissue.” … “Standard visual acuity testing demonstrated that 20/30 vision could be achieved using 625 pixels in the central 1.7º of the visual field.” … “To accommodate a square arrangement, the electrode must be reduced to 93 µm in order to achieve 1,000 macular electrodes.” [103]
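
The 100 nC electrode limit quoted above is simply charge = current × time for a 100 µA, 1 ms phase; a one-line Python check (illustrative only):

    I_pulse = 100e-6    # stimulation current, amps (100 uA)
    t_phase = 1e-3      # phase width, seconds (1 ms)

    Q = I_pulse * t_phase                            # charge delivered per phase
    print("Charge per phase: %.0f nC" % (Q * 1e9))   # 100 nC, matching the quoted limit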


An Optimal Design Methodology for Inductive Power Link

“We have used the design methodology to implement a power link for a retinal prosthesis using the proposed closed-loop control technique. The entire power link has been experimentally demonstrated through a hybrid system that is capable of delivering 250 mW at 16 V using a supply voltage of less than 5 V from a distance of 7 mm with an overall efficiency of 67%, which includes the power dissipated in the control circuitry.” [230]

Biological–Machine Systems Integration: Engineering the Neural Interface

“The World Health Organization (2002 figures) estimates that globally there are more than 160 million visually impaired, of which some 40 million are blind (loss of walkabout vision). While many causes of blindness are either preventable or treatable, there are limited treatment options available for conditions such as glaucoma, age-related macular degeneration (AMD) or retinitis pigmentosa (RP). These three diseases provide the bulk of the motivation to develop a visual prosthesis.” [24]

Specificity of Cone Connections in the Retina and Colour Vision. Focus on "Specificity of Cone Inputs to Macaque Retinal Ganglion Cells"

“L- (long wavelength; peak absorption, 565 nm), M- (medium wavelength; peak absorption, 535 nm), and S- (short wavelength; peak absorption, 440 nm) cones onto a retinal ganglion cell?” [50]

A neuro-stimulus chip with telemetry unit for retinal prosthetic device

“Medical experiments have estimated that the equivalent impedance for the retina tissue of RP and AMD patients is about 10 kΩ and, in the worst case, the current threshold value is about 600 µA. Thus a voltage drop up to 6 V is expected across the retinal tissue” … “The power consumption in the stimulus chip depends on the image frame rate. In our application the frame rate is greater than 60 frames/s. The power dissipated at 100 frames/s, corresponding to a data rate of 40 kb/s, would be approximately 3 mW.” [53]

On the Thermal Elevation of a 60-Electrode Epiretinal Prosthesis for the Blind

“In this paper, the thermal elevation in the human body due to the operation of dual-unit epiretinal prosthesis to restore partial vision to the blind affected by irreversible retinal degeneration is presented. An accurate computational model of a 60-electrode device dissipating 97 mW power, currently under clinical trials is developed and positioned in a 0.25 mm resolution, heterogeneous model of the human head to resemble actual conditions of operation of the prosthesis.” … “This indicates that it is possible to position and operate the implant so that the induced temperature increase in the eye over the body core temperature is less than 2 degree Centigrade.” [227]

A variable range bi-phasic current stimulus driver circuitry for an implantable retinal prosthetic device

“The duration of anodic and cathodic pulses is required to be a maximum of 1 ms each. Effectively any output is on for only 2 ms in one stimulation cycle.” [56]


A Programmable Discharge Circuitry With Current Limiting Capability for a Retinal Prosthesis

“Biphasic stimulation is the most commonly used electrical pattern in FES (Functional Electrical Stimulation). In such cases, a charge balanced waveform is essential to prevent any charge accumulation in the biological tissue.” [57]

Vision on a chip

“A “subretinal” implant uses photodiodes to replace the function of damaged photoreceptors. The photodiodes are connected to electrodes that stimulate the remaining neural cells. In contrast, an “epiretinal” implant does not detect light but instead uses the electrical signal from an external camera to directly stimulate the ganglion cells’ axons, which form the optic nerve.” [32]

Specificity of Cone Connections in the Retina and Color Vision

“… parvocellular neurons that mainly receive L-M, or M-L inputs, and in the magnocellular neurons that receive L+M inputs? … each cone only connecting with one sign of input, either excitatory or inhibitory” [50]

Perceptual Thresholds and Electrode Impedance in Three Retinal Prosthesis Subjects

“suggesting a complex interaction that is not completely predictable by electric field theory… Proximity to the retina plays a role in determining the threshold and impedance but only for electrodes that are greater than 0.5 mm from the retina. Within this distance, perception thresholds and impedances do not seem to depend on the diameter of the electrode.” [58]


A Retinomorphic Vision System

“Instead, the communications channel includes an arbiter to deal with contention and a queue where unsuccessful contenders wait. This architecture was proposed by Sivilotti and Mahowald.” [162]

On Algorithmic Rate-Coded AER Generation

“This paper addresses the problem of converting a sequence of frames into the spike event-based representation known as the address-event-representation (AER)... AER-based interchip communication was originally proposed by Mahowald and Sivilotti… A growing community of researchers is using this scheme for bio-inspired vision... AER has been an important mainstream line at the annual National Science Foundation (NSF) funded Telluride Neuromorphic Workshop series. Transforming the video stream of a conventional frame-based representation (FBR) sequence... Such a transformation is the purpose of the present paper…. Events can be time shifted to a certain degree because AER receivers reconstruct signals by integrating (averaging) over time…” [59]
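
The general idea of rate-coded frame-to-AER conversion discussed in [59] is that each pixel of a frame emits a number of address events proportional to its intensity, and those events are serialised onto the shared AER bus within the frame period. The Python sketch below illustrates that idea only; it is not one of the algorithms evaluated in the cited paper, and the helper function, frame period and maximum event count are hypothetical choices made here for illustration.

    import numpy as np

    def frame_to_rate_coded_aer(frame, t_frame=0.040, max_events=15, rng=None):
        """Convert one greyscale frame (values 0..255) into a list of
        (timestamp, x, y) address events whose count per pixel is
        proportional to intensity.  A generic rate-coding sketch, not the
        algorithms evaluated in the cited paper."""
        rng = rng or np.random.default_rng(0)
        events = []
        h, w = frame.shape
        for y in range(h):
            for x in range(w):
                n = int(round(frame[y, x] / 255 * max_events))  # events for this pixel
                # spread the pixel's events uniformly over the frame period
                for t in rng.uniform(0.0, t_frame, size=n):
                    events.append((t, x, y))
        events.sort()                      # serialise onto the shared AER bus by time
        return events

    # toy 4x4 frame: brighter pixels generate more events
    demo = np.linspace(0, 255, 16).reshape(4, 4).astype(np.uint8)
    print(len(frame_to_rate_coded_aer(demo)), "events for the demo frame")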

Optic Nerve Signals in a Neuromorphic Chip II: Testing and Results

“Our retinomorphic chip produces spike trains for 3600 ganglion cells (GCs) and consumes 62.7 mW at 45 spikes/s/GC… by capturing the neural code of the mammalian retina our chip can provide researchers with realistic retinal input with which they can design and test subsequent neuromorphic circuits.” [231]

A Prototype Retinal Prosthesis for Visual Stimulation

“Live webcam images are converted to an 8 x 8 mosaic of 256 greyscale shades. Subsequently, electrical impulses are generated by the excitatory circuit in real-time to topographically stimulate the corresponding epiretinal cells. Following their conversion to greyscale, recorded data from the central pixel of the mosaic yielded 36.24 nC for black, 48.84 nC for red, 55.68 nC for green, 67.68 nC for blue and 91.92 nC for white. These results correlate well with data reported in the literature.” [41]

A Foveated AER Imager Chip

“Sensory cells in the retina at the back of the eyeball do not merely record an image but they already process it before it is conveyed to the brain. The photoreceptors perform spatial and temporal filtering. Since the nineties this has also been introduced into a number of ‘intelligent’ silicon imagers such as the silicon retina…. The form of pulse communication by dedicated point to point connections is the general method of communication between nerve cells. A method to emulate this … is address event representation (AER): In short the superior speed of electronic systems is traded in for the inferior cable density” [61]

An Address-Event Image Sensor Network

“CMOS imager (the ALOHA imager)… The system uses commercial, off-the-shelf (COTS)… At this point it should be stated that although AER is just a data representation format, it imposes a degree of imager-level information filtering as well. This is because an “event” is, in itself, the presence of a relevant feature” [157]

Dynamically Reconfigurable Silicon Array of Spiking Neurons With Conductance-Based Synapses

“The common language of neuromorphic chips is the address-event representation (AER) communication protocol, which uses time-multiplexing to emulate extensive connectivity between neurons.” [64]

On Synthetic AER Generation

“The number of time slots depends on the time assigned to a frame (for example T_frame = 40 ms) and the time required to transmit a single event (for example T_pulse = 10 ns)…. The system has been implemented using VHDL … It can read or write an AER event every T_pulse = 40 ns. If T_frame = 40 ms, then this implies N × M × K ≤ T_frame/T_pulse = 10^6.” [62]
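
The quoted inequality bounds how many events an N × M pixel array, emitting up to K events per pixel, can be serialised into one frame period. A short Python sketch of that capacity check, with assumed values for N, M and K (illustrative only, not code from the cited paper):

    # Event-capacity check for synthetic AER generation, following the quoted
    # inequality N * M * K <= T_frame / T_pulse.  N, M and K are assumed values.
    T_frame = 40e-3     # time assigned to one frame, seconds
    T_pulse = 40e-9     # time to transmit a single AER event, seconds

    N, M = 128, 128     # assumed image dimensions (pixels)
    K = 50              # assumed maximum events per pixel per frame

    slots = T_frame / T_pulse            # available event slots per frame (1e6)
    needed = N * M * K                   # worst-case events to transmit

    print("Slots available: %.0f, slots needed: %d" % (slots, needed))
    print("Fits within one frame" if needed <= slots else "Exceeds channel capacity")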

The retina as a two-dimensional detector array in the context of color vision theories and signal detection theory

“The Young-Helmholtz hypothesis of three color mechanisms has served as the basis for literally all color theories and is essentially the foundation for all geometrical color theories. Another important contribution that enhanced the development of color research were the Grassman Laws, which stated the commutative and associative properties of color mixing in simple mathematical terms. The geometrical color theories developed around the idea that if there exist three different color mechanisms in the eye, and if color mixtures obey certain linear mathematical rules, then color space can be described by a three-dimensional simple geometrical space.” [93]

oRGB: A Practical Opponent Color Space for Computer Graphics

“Ewald Hering first advocated the opponent process theory of color in the 1870s. It departed from the prevalent theory of the time, the trichromatic theory of Thomas Young and Hermann von Helmholtz, by proposing four hue primaries: red, green, yellow, and blue, instead of the traditionally accepted three: red, green, and blue.” [232]

A biomedical smart sensor for the visually impaired

“The main advantage of the epi-retinal implant is the greater ability to dissipate heat because it is not embedded under tissue. This is a significant consideration in the retina. The normal temperature inside the eye is less than the normal body temperature of 98.6º Fahrenheit. Besides the possibility that heat build-up from the sensor electronics could jeopardise the chronic implantation of the sensor, there is also the concern that the elevated temperature produced by the sensor could lead to infection, especially since the implanted device could become a haven for bacteria” [106]

Subretinal versus Epi-retinal

Typically the subretinal implant is embedded ‘underneath’ the retina and uses photodiodes to replace damaged photoreceptors, relying on natural sunlight for power. However, the retina itself operates according to Ewald Hering's opponent-colour theory [42], of which he wrote in 1878: "Yellow can have a red or green tinge, but not a blue one; blue can have only either a red or a green tinge, and red only either a yellow or a blue one. The four colours can with complete correctness therefore be described as simple or basic colours, as Leonardo da Vinci has already done." The ganglion cells feeding the optic nerve, by contrast, act on the trichromacy theory [93], i.e. red, green and blue signals, which implies a conversion from one representation to the other. Because an epiretinal implant effectively replaces the retinal function, such a conversion can be avoided.
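
To make the conversion between the two representations concrete, the sketch below applies one simple linear opponent decomposition of an RGB pixel (an achromatic channel plus red-green and yellow-blue difference channels) and its inverse. The helper functions and coefficients are hypothetical choices made for illustration; they are not the transform performed by the biological retina, nor the scheme proposed in this work.

    import numpy as np

    # A generic linear opponent-channel decomposition of an RGB pixel:
    # one achromatic (luminance-like) channel plus red-green and yellow-blue
    # difference channels.  Illustrative only; the coefficients are an assumption
    # and not the transform used by the biological retina or by this design.
    OPPONENT = np.array([
        [1/3,  1/3,  1/3],   # achromatic:  (R + G + B) / 3
        [1.0, -1.0,  0.0],   # red-green:    R - G
        [0.5,  0.5, -1.0],   # yellow-blue: (R + G)/2 - B
    ])

    def rgb_to_opponent(rgb):
        """Map an RGB triple (values 0..1) into the opponent channels above."""
        return OPPONENT @ np.asarray(rgb, dtype=float)

    def opponent_to_rgb(opp):
        """Invert the linear transform to recover R, G and B."""
        return np.linalg.solve(OPPONENT, np.asarray(opp, dtype=float))

    rgb = np.array([0.8, 0.4, 0.1])          # an example pixel
    opp = rgb_to_opponent(rgb)
    print("opponent channels:", np.round(opp, 3))
    print("recovered RGB:    ", np.round(opponent_to_rgb(opp), 3))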
