Superconducting Technology Assessment - nitrd


Some studies, such as those at Intel and at a number of universities, are aimed at ultra-short-reach applications such as chip-to-chip and on-board communications. These are of little interest for a superconductive computer. The major advantage of optics is in the regime of backplane or inter-cabinet interconnects, where the low attenuation, high bandwidth, and small form factor of optical fibers dominate; in the case of a superconductive processor, the thermal advantages of glass over copper are also very important.

The large scale of a superconductive-based petaflop-class computer plays a major role in the choice of interconnect technology. Since a petaflop computer is a very large machine, on the scale of tens of meters, the interconnects between the various elements will likely be optical. Furthermore, the number of individual data streams will be very large; one particular architecture, shown in Figure 2, would require 2 x 64 x 4096 = 524,288 data streams at 50 Gbps each between the superconductive processors and the SRAM cluster alone, and perhaps many times more between higher levels. Only the small form factor of optical fibers, along with the possibility of using optical Wavelength Division Multiplexing (WDM), is compatible with this large number of interconnections. The very low thermal conductivity of the 0.005" diameter glass fibers, compared to that of the copper cables necessary to carry the same bandwidth, represents a major reduction in the heat load at 4 K, thereby substantially reducing the wall-plug power of the system.
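The channel count and heat-load arguments above can be sketched numerically. The 2 x 64 x 4096 channel count and the 50 Gbps line rate are taken from the text; the fiber thermal constants (effective conductivity, thermal path length) are illustrative assumptions, not figures from the report.

```python
import math

# Data channels between the superconductive processors and the SRAM clusters
channels = 2 * 64 * 4096          # = 524,288 streams (from the text)
line_rate_bps = 50e9              # 50 Gbps per stream (from the text)
aggregate_bps = channels * line_rate_bps
print(f"channels: {channels:,}")
print(f"aggregate bandwidth: {aggregate_bps / 1e15:.1f} Pb/s")

# Conducted heat per fiber into the 4 K stage via Fourier's law, Q = k*A*dT/L.
# Assumed values: k ~ 1 W/(m K) for fused silica (rough room-temperature
# figure) and a 1 m thermal path from 300 K down to 4 K.
d_m = 0.005 * 0.0254              # 0.005 inch fiber diameter, in metres
area = math.pi * (d_m / 2) ** 2   # fiber cross-sectional area
k_glass = 1.0                     # W/(m K), assumed effective conductivity
length = 1.0                      # m, assumed thermal path length
dT = 300.0 - 4.0                  # K, room temperature to the cold stage
q_per_fiber = k_glass * area * dT / length
print(f"heat per fiber: {q_per_fiber * 1e6:.2f} uW")
print(f"total for all channels: {q_per_fiber * channels:.1f} W")
```

Under these assumptions the half-million fibers conduct only a few watts into the cold stage in total, which is the thermal advantage the text attributes to glass over copper; an equivalent-bandwidth copper harness would conduct orders of magnitude more.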

[Figure 2 shows the cluster view of the HTMT architecture (PIM architecture: U. of Notre Dame): 256 GFLOPS SPELL processors, 4x8 MW SRAM PIM clusters, 8x64 MW DRAM PIM clusters, 32x1 GW HRAMs, 128 KW CRAM, and the CNET, connected through a Data Vortex optical network (8 in + 1 out and 1 in + 1 out fibers per cluster); link rates range from 1 GW/s wire and 10 GW/s fiber to 20 and 256 GW/s RSFQ, with the cluster replicated 4096 times.]

Figure 2. Long path data signal transmission requirements for one proposed petaflop architecture. This would require over 500,000 data channels at 50 Gbps each, assuming 64-bit words.

